I am developing a program in Python that accesses a MySQL database using MySQLdb. In certain situations, I have to run an INSERT or REPLACE command on many rows. I am currently doing it like this:

db.execute("REPLACE INTO " + table + " (" + ",".join(cols) + ") VALUES" +
    ",".join(["(" + ",".join(["%s"] * len(cols)) + ")"] * len(data)),
    [row[col] for row in data for col in cols])

It works fine, but it is kind of awkward. I was wondering if I could make it easier to read, and I found out about the executemany method. I changed my code to look like this:

db.executemany("REPLACE INTO " + table + " (" + ",".join(cols) + ") " + 
    "VALUES(" + ",".join(["%s"] * len(cols)) + ")",
    [tuple(row[col] for col in cols) for row in data])

It still worked, but it ran a lot slower. In my tests, for relatively small data sets (about 100-200 rows), it ran about 6 times slower. For big data sets (about 13,000 rows, the biggest I am expecting to handle), it ran about 50 times slower. Why is it doing this?

I would really like to simplify my code, but I don't want the big drop in performance. Does anyone know of any way to make it faster?

I am using Python 2.7 and MySQLdb 1.2.3. I tried tinkering with the setinputsizes function, but that didn't seem to do anything. I looked at the MySQLdb source code, and it appears that setinputsizes is effectively a no-op there.

Elias Zamaria
  • how many rows are you inserting/replacing? your second statement creates a huge list in memory before feeding it to mysql. – nosklo Oct 15 '10 at 19:59
  • I am replacing up to 13,000 rows. I don't think creating the list is the bottleneck. If I create the list but don't pass it to the db cursor, it barely takes any time at all. – Elias Zamaria Oct 15 '10 at 20:14
  • (Won't answer the question, but...) `INSERT ... ON DUPLICATE KEY UPDATE ...` is almost always better than `REPLACE ...`. – Rick James May 26 '17 at 05:24
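
For reference, the INSERT ... ON DUPLICATE KEY UPDATE form mentioned in the last comment looks roughly like this (a minimal sketch; the prices table and its columns are made up for illustration, not from the question):

# Hypothetical table prices(id INT PRIMARY KEY, price INT).
# Unlike REPLACE, which deletes the old row and inserts a new one,
# this updates the existing row in place, so auto-increment values
# and foreign-key references to the row survive.
db.execute(
    "INSERT INTO prices (id, price) VALUES (%s, %s) "
    "ON DUPLICATE KEY UPDATE price = VALUES(price)",
    (42, 100))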

4 Answers


Try lowercasing the word 'values' in your query - this appears to be a bug/regression in MySQL-python 1.2.3.

MySQL-python's implementation of executemany() matches the VALUES clause with a regular expression and then just clones the list of values for each row of data, so you end up executing exactly the same query as with your first approach.

Unfortunately, the regular expression lost its case-insensitive flag in that release (subsequently fixed in trunk r622 but never backported to the 1.2 branch), so executemany degrades to iterating over the data and firing off one query per row.
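
Applied to the question's code, the workaround is just the change of case shown below; everything else stays the same:

# Lowercase "values" so the case-sensitive regex in MySQL-python 1.2.3
# still recognizes the clause and batches all rows into a single query.
db.executemany(
    "REPLACE INTO " + table + " (" + ",".join(cols) + ") " +
    "values(" + ",".join(["%s"] * len(cols)) + ")",
    [tuple(row[col] for col in cols) for row in data])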

SimonJ
  • I tried that and it works! With "values" in lowercase, it is about as fast with executemany as it is with execute, or sometimes a little faster. – Elias Zamaria Oct 17 '10 at 00:59
  • Note that the 1.2.3 regex doesn't work with arguments in ON DUPLICATE KEY UPDATE queries (the regex only matches the first argument list), so lower-casing values can lead to confusing "not all arguments converted during string formatting" errors (confusing because the same queries work with execute()). To avoid them, use the VALUES() format rather than arguments in the ON DUPLICATE KEY part of the query, as sketched after these comments. – Tony Meyer Apr 12 '11 at 22:09
  • It was fixed in [1.2.4](https://github.com/farcepest/MySQLdb1/blob/MySQLdb-1.2.4/MySQLdb/cursors.py#L43). – saaj Sep 27 '15 at 13:02
  • awesome.. Thanks a lot :) – gsuresh92 May 12 '16 at 15:05
  • Great catch, and very strange quirk. Here's a great explanation of how to handle this error: https://stackoverflow.com/a/26372066/6163621 – elPastor Mar 04 '19 at 03:23
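
To illustrate Tony Meyer's point above: keep the %s placeholders in the first values list only, and refer back to them with VALUES() in the ON DUPLICATE KEY UPDATE part (a sketch for MySQL-python 1.2.3, hence the lowercase "values"; the prices table is made up):

# The only %s placeholders are in the first values list, which the
# executemany regex can match; the update clause reuses them via VALUES().
db.executemany(
    "INSERT INTO prices (id, price) values (%s, %s) "
    "ON DUPLICATE KEY UPDATE price = VALUES(price)",
    [(1, 10), (2, 20)])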

I strongly recommend against using executemany in pyodbc as well as ceODBC; both are slow and contain a lot of bugs.

Instead, consider using execute and manually constructing the SQL query with simple string formatting:

transaction = "BEGIN TRANSACTION {0} COMMIT TRANSACTION"

bulkRequest = ""
for row in data:
    # Values are interpolated directly into the SQL string, so no escaping
    # is done; this is only safe for trusted data.
    bulkRequest += "INSERT INTO ...... VALUES ({0}, {1}, {2}); ".format(*row)

cursor.execute(transaction.format(bulkRequest))

The resulting implementation is very simple, fast, and reliable.

Wild Goat

Your first example is a single (large) statement that is generated and then sent to the database.

The second example is a much simpler statement that inserts/replaces a single row but is executed multiple times. Each command is sent to the database separately, so you pay the round-trip time from client to server and back for every row inserted. This extra latency between commands is most likely the main reason for the decreased performance of the second example.
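
A quick way to confirm this is to time both variants. Here is a hypothetical sketch; db, sql, big_sql, per_row_params, and flat_params stand in for the question's objects and are not from the original code:

import time

start = time.time()
for params in per_row_params:      # one client/server round trip per row
    db.execute(sql, params)
print("one query per row:", time.time() - start)

start = time.time()
db.execute(big_sql, flat_params)   # all rows in a single round trip
print("single big query:", time.time() - start)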

Mark Byers
  • That is what I suspected. I thought maybe the executemany function is sophisticated enough to send the commands all in one query, but it doesn't seem like it. – Elias Zamaria Oct 15 '10 at 20:15

If you're using mysqlclient-python (the fork of MySQLdb1, and the driver Django itself recommends), there's a case you need to know about:

cursor.executemany silently falls back to issuing one cursor.execute per row when your query is of the form:

INSERT INTO testdb.test (type, some_field, status, some_char_field) VALUES (%s, hex(%s), %s, md5(%s));

The driver matches the statement with a Python regex that doesn't support MySQL function calls in the VALUES clause:

RE_INSERT_VALUES = re.compile(
    r"\s*((?:INSERT|REPLACE)\b.+\bVALUES?\s*)" +
    r"(\(\s*(?:%s|%\(.+\)s)\s*(?:,\s*(?:%s|%\(.+\)s)\s*)*\))" +
    r"(\s*(?:ON DUPLICATE.*)?);?\s*\Z",
    re.IGNORECASE | re.DOTALL)
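
One possible workaround (my suggestion, not from the issue) is to precompute the function results client-side, so the VALUES clause contains only plain %s placeholders that the regex can match:

import binascii
import hashlib

# HEX() and MD5() computed in Python instead of in MySQL, leaving a
# regex-friendly "VALUES (%s, %s, %s, %s)" clause. Assumes the fields
# are byte strings and that `data` holds the rows to insert.
rows = [
    (type_, binascii.hexlify(field).upper(), status,
     hashlib.md5(char_field).hexdigest())
    for (type_, field, status, char_field) in data
]
cursor.executemany(
    "INSERT INTO testdb.test (type, some_field, status, some_char_field) "
    "VALUES (%s, %s, %s, %s)",
    rows)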

Link to the relevant GitHub issue: https://github.com/PyMySQL/mysqlclient-python/issues/334

ainvehi