
I have an Excel spreadsheet that I need to import into SQL Server on a daily basis. The spreadsheet will contain around 250,000 rows across roughly 50 columns. I have tested both openpyxl and xlrd using nearly identical code.

Here's the code I'm using (minus debugging statements):

import xlrd
import openpyxl

def UseXlrd(file_name):
    workbook = xlrd.open_workbook(file_name, on_demand=True)
    worksheet = workbook.sheet_by_index(0)
    first_row = []
    for col in range(worksheet.ncols):
        first_row.append(worksheet.cell_value(0,col))
    data = []
    for row in range(1, worksheet.nrows):
        record = {}
        for col in range(worksheet.ncols):
            if isinstance(worksheet.cell_value(row,col), str):
                record[first_row[col]] = worksheet.cell_value(row,col).strip()
            else:
                record[first_row[col]] = worksheet.cell_value(row,col)
        data.append(record)
    return data


def UseOpenpyxl(file_name):
    wb = openpyxl.load_workbook(file_name, read_only=True)
    sheet = wb.active
    first_row = []
    for col in range(1,sheet.max_column+1):
        first_row.append(sheet.cell(row=1,column=col).value)
    data = []
    for r in range(2,sheet.max_row+1):
        record = {}
        for col in range(sheet.max_column):
            if isinstance(sheet.cell(row=r,column=col+1).value, str):
                record[first_row[col]] = sheet.cell(row=r,column=col+1).value.strip()
            else:
                record[first_row[col]] = sheet.cell(row=r,column=col+1).value
        data.append(record)
    return data

xlrd_results = UseXlrd('foo.xlsx')
openpyxl_results = UseOpenpyxl('foo.xlsx')

Passing the same Excel file containing 3,500 rows gives drastically different run times. Using xlrd, I can read the entire file into a list of dictionaries in under 2 seconds. Using openpyxl, I get the following results:

Reading Excel File...
Read 100 lines in 114.14509415626526 seconds
Read 200 lines in 471.43183994293213 seconds
Read 300 lines in 982.5288782119751 seconds
Read 400 lines in 1729.3348784446716 seconds
Read 500 lines in 2774.886833190918 seconds
Read 600 lines in 4384.074863195419 seconds
Read 700 lines in 6396.7723388671875 seconds
Read 800 lines in 7998.775000572205 seconds
Read 900 lines in 11018.460735321045 seconds

While I can use xlrd in the final script, I will have to hard-code a lot of formatting because of various issues (e.g. ints read as floats, dates read as ints, datetimes read as floats). Since I need to reuse this code for a few more imports, it doesn't make sense to hard-code specific columns to format them properly and then maintain similar code across 4 different scripts.
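For illustration, this is roughly the kind of per-cell fix-up I mean (just a sketch with a made-up helper; the actual rules would differ per column):

import xlrd

def convert_cell(workbook, worksheet, row, col):
    # Sketch of the xlrd type fix-ups; column-specific rules would go here.
    value = worksheet.cell_value(row, col)
    cell_type = worksheet.cell_type(row, col)
    if cell_type == xlrd.XL_CELL_DATE:
        # dates and datetimes come back as serial-number floats;
        # convert using the workbook's datemode
        return xlrd.xldate_as_datetime(value, workbook.datemode)
    if cell_type == xlrd.XL_CELL_NUMBER and value == int(value):
        # whole numbers are stored as floats in .xls files
        return int(value)
    return value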

Any advice on how to proceed?

  • Mike has already provided the solution, but here's the reason for the poor performance: the way you're accessing cells is causing openpyxl to repeatedly parse the original spreadsheet. Read-only mode is optimised for row-by-row access. – Charlie Clark Mar 06 '16 at 10:31
  • When I read your description "I have an Excel spreadsheet that I need to import into SQL Server on a daily basis", it sounds to me like a perfect candidate for Pandas: read about the `pandas.read_excel()` and `pandas.DataFrame.to_sql()` functions. And AFAIK Pandas uses `xlrd` internally. – MaxU - stand with Ukraine Mar 06 '16 at 10:39
  • To follow up on Charlie Clark's answer, the source of the behavior is the use of `max_column`, which is implemented in an inefficient way, inside a loop. See: https://foss.heptapod.net/openpyxl/openpyxl/-/issues/1587 – Brandon Kuczenski Dec 02 '21 at 20:02
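To make these comments concrete, the difference is roughly between random cell access and straight row iteration in read-only mode; a minimal sketch (file name assumed):

import openpyxl

wb = openpyxl.load_workbook('foo.xlsx', read_only=True)
sheet = wb.active

# Slow in read-only mode: each cell() call makes openpyxl re-parse the sheet,
# and max_row/max_column are recomputed every time they are evaluated.
for r in range(2, sheet.max_row + 1):
    for c in range(1, sheet.max_column + 1):
        value = sheet.cell(row=r, column=c).value

# Fast in read-only mode: a single streaming pass over the rows.
for row in sheet.iter_rows(min_row=2):
    values = [cell.value for cell in row]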

3 Answers


You can just iterate over the sheet:

def UseOpenpyxl(file_name):
    wb = openpyxl.load_workbook(file_name, read_only=True)
    sheet = wb.active
    rows = sheet.rows
    first_row = [cell.value for cell in next(rows)]
    data = []
    for row in rows:
        record = {}
        for key, cell in zip(first_row, row):
            if cell.data_type == 's':
                record[key] = cell.value.strip()
            else:
                record[key] = cell.value
        data.append(record)
    return data

This should scale to large files. You may want to chunk your result if the list data gets too large.
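For example, a generator along these lines (just a sketch; the helper name and chunk size are placeholders) would let you build and insert one batch at a time instead of holding the whole list in memory:

import openpyxl

def iter_records_in_chunks(file_name, chunk_size=10000):
    # Yield the records in batches so each batch can be inserted and discarded.
    wb = openpyxl.load_workbook(file_name, read_only=True)
    sheet = wb.active
    rows = sheet.rows
    first_row = [cell.value for cell in next(rows)]
    chunk = []
    for row in rows:
        chunk.append({key: cell.value for key, cell in zip(first_row, row)})
        if len(chunk) >= chunk_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk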

Now the openpyxl version takes about twice as long as the xlrd one:

%timeit xlrd_results = UseXlrd('foo.xlsx')
1 loops, best of 3: 3.38 s per loop

%timeit openpyxl_results = UseOpenpyxl('foo.xlsx')
1 loops, best of 3: 6.87 s per loop

Note that xlrd and openpyxl might interpret what is an integer and what is a float slightly differently. For my test data, I needed to add float() to make the outputs comparable:

def UseOpenpyxl(file_name):
    wb = openpyxl.load_workbook(file_name, read_only=True)
    sheet = wb.active
    rows = sheet.rows
    first_row = [float(cell.value) for cell in next(rows)]
    data = []
    for row in rows:
        record = {}
        for key, cell in zip(first_row, row):
            if cell.data_type == 's':
                record[key] = cell.value.strip()
            else:
                record[key] = float(cell.value)
        data.append(record)
    return data

Now, both versions give the same results for my test data:

>>> xlrd_results == openpyxl_results
True
Mike Müller

  • Actually, you can just iterate over the sheet. Furthermore, openpyxl already does the type conversion, so you can check the cell's data_type. – Charlie Clark Mar 06 '16 at 10:21
  • Also, it's probably worth noting that xlrd must read a file into memory, whereas openpyxl in read-only mode will allow you to stream it row by row. – Charlie Clark Mar 06 '16 at 10:28
  • You might also see some performance improvements if you were testing with v2.0. The last time I compared the two I found openpyxl to be only slightly slower than xlrd: it's doing more, and in constant memory. – Charlie Clark Mar 06 '16 at 13:47
  • I am working with 2.3.2 on Python 3.5. This is the latest version I can currently get via conda. – Mike Müller Mar 06 '16 at 14:04
  • Okay. Only minor changes in 2.3.3. – Charlie Clark Mar 06 '16 at 15:46
  • I switched to iterating and found it to be orders of magnitude faster than before. – ohthepain Mar 14 '19 at 13:23

You call `sheet.max_column` and `sheet.max_row` several times. Don't do that; call each of them just once. If you call them inside a for loop, max_column or max_row gets recalculated on every iteration.

I've modified the code as below for your reference:

def UseOpenpyxl(file_name):
    wb = openpyxl.load_workbook(file_name, read_only=True)
    sheet = wb.active
    max_col = sheet.max_column
    max_row = sheet.max_row
    first_row = []
    for col in range(1,max_col +1):
        first_row.append(sheet.cell(row=1,column=col).value)
    data = []
    for r in range(2,max_row +1):
        record = {}
        for col in range(max_col):
            if isinstance(sheet.cell(row=r,column=col+1).value, str):
                record[first_row[col]] = sheet.cell(row=r,column=col+1).value.strip()
            else:
                record[first_row[col]] = sheet.cell(row=r,column=col+1).value
        data.append(record)
    return data
soartseng

This sounds to me like a perfect candidate for the Pandas module:

import pandas as pd
from sqlalchemy import create_engine
import pyodbc

# pyodbc
#
# assuming the following:
# username: scott
# password: tiger
# DSN: mydsn
engine = create_engine('mssql+pyodbc://scott:tiger@mydsn')

# pymssql
#
#engine = create_engine('mssql+pymssql://scott:tiger@hostname:port/dbname')


df = pd.read_excel('foo.xls')

# write the DataFrame to a table in the sql database
df.to_sql("table_name", engine)

See the description of the `DataFrame.to_sql()` function in the Pandas documentation.

PS: It should be pretty fast and very easy to use.
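If the daily file is large, `to_sql()` also accepts `if_exists` and `chunksize` arguments, so the rows are written in batches rather than in one big insert. A sketch, reusing `pd` and the `engine` from the block above (the table name is just a placeholder):

df = pd.read_excel('foo.xls')

# Append today's rows to an existing table, writing 1,000 rows per batch.
# "daily_import" is a hypothetical table name.
df.to_sql("daily_import", engine, if_exists="append", index=False, chunksize=1000)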

MaxU - stand with Ukraine

  • Pandas uses xlrd internally and is pretty inflexible as a result. Note that this is of particular concern to the original poster. – Charlie Clark Mar 06 '16 at 13:44
  • I agree with CharlieClark. I just wanted to mention that openpyxl has support for Pandas DataFrames. One can use the DataFrame() function from the Pandas package to put the values of a sheet into a DataFrame: `import pandas as pd` and then `df = pd.DataFrame(sheet.values)`, so using both libraries to import and then work on the data is a better idea rather than trying to choose just one. – Ibo Sep 26 '17 at 17:20
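Following up on Ibo's comment, a minimal sketch of combining the two libraries: stream the sheet with openpyxl's read-only mode and hand the values to a Pandas DataFrame (file name assumed):

import openpyxl
import pandas as pd

wb = openpyxl.load_workbook('foo.xlsx', read_only=True)
sheet = wb.active
values = sheet.values              # generator of plain value tuples
header = next(values)              # first row becomes the column names
df = pd.DataFrame(values, columns=header)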