
We have been experiencing an occasional massive slow down in UPDATE performance across our database.

For example, the table FooTable has about 40 columns with a varchar PK, plus 10 secondary indexes. The following query took 44 seconds, while at other times it runs virtually instantly. During the slowdown the load average on the server is quite low (1.5 for the 5-minute average) and I/O as reported by vmstat is fairly reasonable as well.

Here is an example:

mysql> update FooTable set BarColumn=1349981286086 where varcharPK='e4348411-0fbb-460a-80f7-f1de304a9f7c'
Query OK, 1 row affected (44.94 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> show profile for QUERY 1;
+----------------------+-----------+
| Status               | Duration  |
+----------------------+-----------+
| starting             |  0.000030 |
| checking permissions |  0.000004 |
| Opening tables       |  0.000007 |
| System lock          |  0.000003 |
| Table lock           |  0.000003 |
| init                 |  0.000035 |
| Updating             | 44.949727 |
| end                  |  0.000006 |
| query end            |  0.000003 |
| freeing items        |  0.000115 |
| logging slow query   |  0.000002 |
| logging slow query   |  0.000052 |
| cleaning up          |  0.000003 |
+----------------------+-----------+
13 rows in set (0.02 sec)

For what it's worth, the example query above was run on an InnoDB table that was 'rebuilt' (ALTER TABLE FooTable ENGINE=InnoDB;) less than a week ago. I initially suspected that this was related to InnoDB's known performance issues with varchar/non-sequential PKs; however, we have other tables that use sequential PKs and have seen the same issue.

This is on a production server: CentOS 5 (2.6.18-238.19.1.el5 x86_64), MySQL/Percona 5.1.57-rel12.8-log, 96GB of memory, with 58.8GB of data spread across 87 tables.

The relevant InnoDB settings are as follows:

innodb_flush_log_at_trx_commit  = 2
innodb_buffer_pool_size         = 70G
innodb_log_file_size            = 512M
innodb_log_buffer_size          = 64M
innodb_file_per_table           = 1
innodb_thread_concurrency       = 0
innodb_flush_method             = O_DIRECT
innodb_read_io_threads          = 64
innodb_write_io_threads         = 64
optimizer_search_depth          = 0
innodb_file_format              = barracuda

I am not using ROW_FORMAT=COMPRESSED on this table; however, I am on others.

Any suggestions on how to figure out what is going on in the Updating phase that is taking so long?
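For reference, here are a couple of diagnostics that can be run while the UPDATE is stalled. This is only a sketch: the `INFORMATION_SCHEMA.INNODB_TRX` table comes from the InnoDB plugin/XtraDB and may not be available on every 5.1 build.

```sql
-- While the UPDATE is stuck in "Updating", check the TRANSACTIONS
-- section for transactions ACTIVE for many seconds, and the history
-- list length (unpurged row versions piling up):
SHOW ENGINE INNODB STATUS\G

-- If the plugin's INFORMATION_SCHEMA tables are available, the same
-- information is easier to query directly:
SELECT trx_id, trx_started, trx_mysql_thread_id, trx_query
FROM INFORMATION_SCHEMA.INNODB_TRX
ORDER BY trx_started;
```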

Jeremy
  • Does `EXPLAIN` work with `UPDATE`? Do you get the same performance with a `SELECT BarColumn FROM FooTable WHERE varCharPK=...`? I'm wondering if it's doing something weird with the indexes. – Brendan Long Oct 12 '12 at 16:18
  • You can't run an explain on an UPDATE, however if I convert it to a SELECT it looks as it should, it's using the key PRIMARY, rows 1. – Jeremy Oct 12 '12 at 17:08
  • Try to do a "SHOW PROCESSLIST" while that query is running. It may show other queries that have a lock on the table. – bobwienholt Oct 15 '12 at 15:51
  • If you did the same 'update' sequence multiple times at different points in a day - would the performance be the same? (IE control for what gets inserted and test other factors) – ethrbunny Oct 15 '12 at 16:19
  • @ethrbunny No, otherwise it runs very quickly. I'm currently leaning towards too-large/long-running transactions. – Jeremy Oct 15 '12 at 18:19
  • @bobwienholt There isn't anything running that is locking the table; I've been monitoring with innotop, mainly in the query view and the lock view. – Jeremy Oct 15 '12 at 18:20

2 Answers


The root cause appears to have been long running application transactions. The solution was to break down the large transactions into smaller units of work.
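As an illustration only (the actual fix was in our application code, not in SQL), the pattern was roughly: instead of one transaction touching a large number of rows, commit in bounded batches so undo/purge never falls far behind. The batch contents here are hypothetical:

```sql
-- Hypothetical batching sketch; the surrounding loop lives in
-- application code. Each iteration touches a bounded set of rows and
-- commits, so no single transaction holds old row versions for long.
START TRANSACTION;
UPDATE FooTable
   SET BarColumn = 1349981286086
 WHERE varcharPK IN (/* next batch of a few hundred PKs */);
COMMIT;
-- repeat until all rows are processed
```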

Jeremy

How about (converting varcharPK to text and) using a hash index on the varcharPK field?

ALTER TABLE FooTable ADD KEY pk_index (varcharPK(100)) USING HASH;

I had a similar problem once and this helped. I'm not sure it'll help you. It also sounds like it would help if you could split the table – it has "too much" data.
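One caveat worth noting here (not from the answer itself, and hedged accordingly): as far as I know, InnoDB only builds BTREE indexes, so a `USING HASH` clause on an InnoDB table is accepted but silently ignored. Any gain from the statement above would therefore come from the 100-character prefix, not from hashing. The prefix-only form would be:

```sql
-- On InnoDB the index is a B-tree either way; this makes the intent
-- explicit: index only the first 100 characters of varcharPK.
ALTER TABLE FooTable ADD KEY pk_index (varcharPK(100));
```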

viljun
  • Can you explain the logic behind this change please? – Jeremy Oct 15 '12 at 22:16
  • Using a hashed index takes less index space and is thus faster to search and maintain. Check this: http://stackoverflow.com/questions/12689460/how-to-index-innodb-text-in-mysql – viljun Oct 22 '12 at 18:55
  • Also, text is not stored with the table data (unlike varchar), which also makes everything faster. – viljun Oct 22 '12 at 18:57