
I have a simple db with 4 tables, each with a few million rows, and several indexes. I'm performing several hundred updates and inserts on them per minute. The reads are much less frequent, but they need to be fast - for a web app. The reads should have priority - I can delay writes if it helps improve the snappiness of reading.

Currently, when I'm not writing inserts and updates, the selects all hum along nicely. When I'm writing at the same time, things can slow down - sometimes massively. The server definitely gets IO bound - I've used iostat and seen the disk utilization at 99% during periods of heavy writing.

Tomorrow I'm going to try cutting an index or two, shrinking the row size, and disabling the query cache. Does anyone have any other suggestions on how the tables or MySQL itself should be tuned for lots of writes and few reads?

The tables are currently set up to use the InnoDB engine with compact rows, and most of the config is still at defaults apart from the buffer pool size. The db is going to continue to grow quickly, so having it all sit in RAM isn't an option.

Update: It's on slicehost.com - 1GB RAM, RAID 10.

Tim Haines

7 Answers


Indexes are going to slow the writes, but are necessary for read performance, so keep as few as you can get away with to support the reads. Is your clustered index going to cause a lot of slowdown?

Another possibility is to read from a separate database/table from your writes and opt for eventual consistency - that may not be possible in your case.

Cade Roux
  • How would a clustered index cause slow downs? It's a big int, so adding minimal length to the secondary indexes. I would mind if the majority of new writes were only available in 3 minute chunks or something - most of it doesn't need to be instantly available. – Tim Haines Jun 14 '09 at 14:33
  • Sorry - I said 'I would mind' above. I meant 'I wouldn't mind'. – Tim Haines Jun 14 '09 at 14:33
  • A clustered index would cause slowdowns if it caused a lot more writes due to the insertion order. If it's a autoincrementing big-int, you shouldn't see a problem there. – Cade Roux Jun 14 '09 at 15:00
  • It's not auto-incrementing, but it should be generally incrementing - most entries near the max value. – Tim Haines Jun 14 '09 at 15:34

One thing (of many) to consider is using transactions. If you can bundle several write operations into one transaction, it should lower the number of disk accesses.
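As a sketch of the idea (the table and column names here are made up for illustration): InnoDB flushes its log on every commit by default, so grouping statements under one COMMIT turns many flushes into one.

```sql
-- Sketch: several writes under a single transaction, so the log is
-- flushed once at COMMIT instead of once per statement.
START TRANSACTION;
INSERT INTO events (user_id, payload) VALUES (1, 'a');
INSERT INTO events (user_id, payload) VALUES (2, 'b');
UPDATE counters SET hits = hits + 1 WHERE page_id = 42;
COMMIT;
```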

Itay Moav -Malimovka
  • I can't upvote yet otherwise I would. Thanks for the suggestion. It brought up some interesting google results that look like they'll help. (Including http://dev.mysql.com/doc/refman/5.1/en/innodb-tuning.html) – Tim Haines Jun 14 '09 at 14:52

Unfortunately for you, MySQL is typically tuned for an 80/20 read/write ratio. I don't know if there is a lot you can do.

Are you using transactions?

If the data you select is not often affected by writes, you can externalize it at write time, e.g. at the end of the transaction, so that reads hit a precomputed copy rather than contending with the tables being written to.
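One way to sketch this, with a hypothetical schema: maintain a small read-optimized summary table inside the same write transaction, so the web app reads a pre-aggregated row instead of scanning the hot table.

```sql
-- Sketch (placeholder schema): keep a summary table current at write
-- time, inside the same transaction as the write itself.
START TRANSACTION;
INSERT INTO orders (customer_id, total) VALUES (7, 19.99);
UPDATE customer_totals
   SET order_count    = order_count + 1,
       lifetime_total = lifetime_total + 19.99
 WHERE customer_id = 7;
COMMIT;
```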

instanceof me
  • I'm not yet using transactions on the multiple writes, but I'll batch them into transactions tomorrow to see if that helps. I'll also look at increasing the log size and log buffer as per http://dev.mysql.com/doc/refman/5.1/en/innodb-tuning.html – Tim Haines Jun 14 '09 at 14:59
  • Okay, please give us feedback afterwards. – instanceof me Jun 14 '09 at 19:23

I think you need to consider partitioning. It's pretty much the only way to scale writes. MySQL has native support for this from 5.1 onwards, but it's also quite possible to roll your own solution. The latter is much more complicated, so if possible I'd recommend using the built-in support. However, considering your heavy write load, it might not be enough. It's hard to give more detailed advice without knowing how the data is structured, though.
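The native support looks roughly like this (the table here is hypothetical, just to show the shape of the DDL): each partition is stored separately, so inserts and date-restricted reads touch less data.

```sql
-- Sketch of MySQL 5.1+ native range partitioning on a made-up table.
CREATE TABLE log_entries (
    id      BIGINT NOT NULL,
    created DATE   NOT NULL,
    message TEXT
)
ENGINE=InnoDB
PARTITION BY RANGE (YEAR(created)) (
    PARTITION p2008 VALUES LESS THAN (2009),
    PARTITION p2009 VALUES LESS THAN (2010),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```

Note that MySQL requires the partitioning expression to use columns from every unique key on the table, which constrains how you pick the partition key.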

Emil H
  • Wow - I had a total misconception on what partitioning was, so hadn't even bothered to read about it until now. Sounds great - will try it out tomorrow. Thanks for the suggestion. – Tim Haines Jun 14 '09 at 15:22
  • You might also want to search for "sharding". It's pretty much the same thing as partitioning, but emphasizes that it's a custom solution. I think you'll be able to find additional resources that way. – Emil H Jun 14 '09 at 16:18

You have already done some useful things (fewer indexes, smaller datatypes)

  • innodb_buffer_pool_size should be about 70% of available RAM. (In old versions, the default was too low.) But for your tiny server (1GB RAM), 150M might be too big. Consider getting more RAM.
  • If you are swapping, get more RAM or lower more settings.
  • Consider innodb_flush_log_at_trx_commit = 2 -- faster, though less data security
  • Batch inserts -- 100 inserts in a single query will run 10 times as fast as 100 single-row inserts.
  • Look at normalization or denormalization
  • Check for indexed columns hidden inside function calls: WHERE DATE(col) = '2019-01-01' can't use an index on col.
  • Use the slowlog to identify the naughtiest queries -- they may be write or read.
  • Did I suggest getting more RAM?
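Two of the points above can be sketched in SQL (table and column names are made up for illustration):

```sql
-- Batched insert: one statement and one round trip for the whole
-- batch, instead of 100 separate single-row INSERTs.
INSERT INTO readings (sensor_id, created, value) VALUES
    (1, '2019-01-01 00:00:00', 20.5),
    (2, '2019-01-01 00:00:01', 19.8),
    (3, '2019-01-01 00:00:02', 21.1);

-- Keeping the indexed column out of the function call so an index
-- on `created` can be used.
-- slow:  WHERE DATE(created) = '2019-01-01'
SELECT sensor_id, value
  FROM readings
 WHERE created >= '2019-01-01'
   AND created <  '2019-01-02';
```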

There are lots of other tips, but it would be better to see the worst queries, plus SHOW CREATE TABLE.

Rick James

I suggest you do write/read splitting by setting up a MySQL master-slave configuration.

You write to the master and redirect reads to the slaves.

The splitting itself can be done in two ways:

  1. Use a proxy (MySQL Proxy, Continuent/Sequoia, ..)
  2. Do the splitting yourself in your application
jitter
  • Seems quite pointless to me. All the servers still have to handle the same write load. If reads are unfrequent it seems unlikely that replication will help. – Emil H Jun 14 '09 at 14:40
  • Thanks for responding to this Emil. I'd like to avoid pointless goose chases if I can. You may have saved me some hours here. Does anyone agree with Jitter's suggestion? – Tim Haines Jun 14 '09 at 14:55
  • Galera Cluster (3 writable nodes) and "RBR" may give a tiny help with write scaling. – Rick James Jul 24 '19 at 18:24

If MySQL supported an index Fill Factor, that would be one area to look at. Unfortunately version 5 doesn't support Fill Factor (apparently it is on the feature request list for version 6.x).

  • Removing any unused indexes, and limiting the width of your indexes, would help.

  • Examine how much memory the server has.

  • Is the Disk RAID'ed?

Mitch Wheat
  • The server has 1gb of memory, and is on Raid 10. I won't be able to keep increasing the memory to fit the db - the db will continue to outgrow it. Is there a minimum I should be looking at? What role does RAID play? – Tim Haines Jun 14 '09 at 14:57
  • @Tim Haines: 1GB is quite small. Suggest you use at least 2GB – Mitch Wheat Jun 15 '09 at 00:40