3

What about this idea:

  • I've got a 4GB MySQL database, with few UPDATEs.
  • On system startup I copy it to RAM and run it from there.
  • Every couple of hours or even on UPDATES, I dump it to disk.

...as a kind of poor man's alternative to revisiting all the code routines that use the db and implementing memcached or similar.

isync
  • Why would you use up your RAM for this when you could just actually run memcached? You're attempting to subvert the memory and disk routines in SQL by doing this, and you could have issues if there's a shutdown on the server. – Bart Silverstrim Dec 13 '11 at 17:22
  • I like memcached, but was always afraid of the migration work. But tell me, for new projects: What's the proper tool/scheme in memcached that syncs data back from the "in-flight memcached store" to "reboot/crash safe" disk? – isync Dec 13 '11 at 17:38
  • The way that you use memcached for reads -> 1. client requests data. 2. client looks to see if it is in memcached 3. if it is in memcached, return the data, if not 4. read it from the database 5. return to client, 6. write data to memcached. For writes, you just write to both memcached and the database. – Aaron Brown Dec 13 '11 at 17:57
  • Ah, of course! I had cross-breeds like MemcacheDB in mind - so the programmer is only faced with one db-storage API - but it might make more sense to use these two simple APIs from the code instead of relying on remixes like MemcacheDB... – isync Dec 13 '11 at 18:38
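The read/write flow Aaron Brown numbers in his comment is the classic cache-aside pattern. A minimal sketch of it, using plain dicts as stand-ins for the memcached client and the MySQL database (both stand-ins and the `user:1` key are assumptions for illustration, not real APIs):

```python
# Cache-aside pattern from the comment above, with a dict standing in
# for memcached and another dict standing in for the database.

cache = {}                   # stand-in for the memcached client
db = {"user:1": "alice"}     # stand-in for the MySQL database

def read(key):
    value = cache.get(key)   # 2. look for the data in memcached
    if value is not None:
        return value         # 3. cache hit: return it
    value = db.get(key)      # 4. cache miss: read from the database
    if value is not None:
        cache[key] = value   # 6. populate the cache for next time
    return value             # 5. return to the client

def write(key, value):
    db[key] = value          # write to the database...
    cache[key] = value       # ...and to memcached, as the comment says

print(read("user:1"))        # first read is served from the db, then cached
```

With a real memcached client the dict lookups become `get`/`set` calls, but the control flow stays exactly as numbered in the comment.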

3 Answers

10

Using the MEMORY storage engine on a read-only slave and serving your reads from it is exactly what you want, and a sane setup. Forget "dumping it to disk" (?!) or other strange things.

You can even run the slave as another instance on your existing server if you can't afford to set up a dedicated slave, but properly tuning the MySQL parameters for mostly-read workloads will bring a significant performance improvement too!
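The slave-side conversion this answer describes can be sketched like this - a hedged example, assuming replication is already running and using a hypothetical table `products` that has no TEXT/BLOB columns (the MEMORY engine can't store those):

```sql
-- Run on the slave only; the master keeps its on-disk engine.
-- Raise the per-table in-memory size limit first (value is illustrative).
SET GLOBAL max_heap_table_size = 2147483648;

-- Convert the slave's copy of the table to the MEMORY engine.
ALTER TABLE products ENGINE = MEMORY;
```

Keep in mind that MEMORY tables are emptied on restart, so after a reboot the slave would need to be rebuilt from the master (e.g. by re-importing a dump) before serving reads again.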

pfo
  • And do read-only slaves properly sync with the master without much fiddling? (Properly meaning they stay in sync, without much overhead)? – isync Dec 13 '11 at 17:36
  • properly as in synchronous replication?! we're talking mysql here ... – pfo Dec 13 '11 at 17:40
  • Of course ;) I was working on a MS SQL-Server lately and it made me so paranoid, that I got wary of MySQL. How foolish. Now I am back! MySQL rules! =) – isync Dec 13 '11 at 18:42
  • This is *not* a sensible solution! Why run a slave on the same box and introduce potential replication issues when InnoDB will happily run with hundreds of GBs of data in the buffer pool? http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_pool_size/ – Andy Dec 14 '11 at 11:35
4

Use the InnoDB engine for all your tables, set innodb_buffer_pool_size in my.cnf to 2-3 GB of RAM, and during startup run a full table scan over all tables [or just mysqldump -A] to warm the buffer pool. This should give you very decent read-only performance if you have good indices on all tables.

Also, if you go with this solution your code will be simpler - you won't have to take care of syncing in-MEMORY tables back to MyISAM/on-disk tables.
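A minimal my.cnf sketch of this setup - the values are illustrative assumptions for a 4 GB database on a machine with enough spare RAM, not a tuned configuration:

```ini
[mysqld]
default-storage-engine = InnoDB
# keep (most of) the working set in RAM
innodb_buffer_pool_size = 3G
# relaxed durability; an acceptable trade-off with few UPDATEs
innodb_flush_log_at_trx_commit = 2
```

After a restart, something like `mysqldump -A > /dev/null` reads every table once and warms the buffer pool, which is the "full table scan during startup" step above.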

pQd
  • I should really think about moving away from MyISAM, as it seems... – isync Dec 13 '11 at 18:43
  • @isync There are a few reasons to move away from it - better handling of concurrency in case you have some updates, and better recovery from crashes / unclean shutdowns. – pQd Dec 13 '11 at 18:56
2

Shot in the dark, but are you actually experiencing performance issues first?

Second, if you're having performance issues, you're going to have to look at tuning. Tune the application (if you're the dev or have a stranglehold on the devs) to use proper queries so you're not running huge JOINs and such.

Then look at the database architecture. Goofy workarounds will make it a bigger headache to manage later. The steps you take depend on your use case. You might consider getting a beefy server to scale vertically, putting it behind a memcached server, and creating a read-only slave for things like running reports and backups.

Don't take RAM away from the database to try a scheme that subverts good practice. There are high-performance database tuning blogs available for ideas, and the StackOverflow podcast/blog has stats on how this site runs (on the MS stack): relatively heavy traffic, but not a complicated setup.

Bart Silverstrim