
I've just upgraded my Heroku postgres database from the Kappa plan (800MB RAM, postgres 9.1) to the Ronin plan (1.7GB RAM, postgres 9.2), but performance has degraded.

Following the guide here, I checked the cache hit rate, and it is even lower than it was on our Kappa database (now ~57%, previously ~69%). Our app design should be reasonably sound, as we've seen a cache hit rate of ~99% before.
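For reference, the overall hit rate can be read from PostgreSQL's statistics views with a query along these lines - a minimal sketch, which may differ in detail from the exact query in the Heroku guide:

```sql
-- Overall table cache hit ratio: heap blocks served from shared buffers
-- versus blocks read from disk (or the OS cache)
SELECT sum(heap_blks_hit)::numeric
       / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS cache_hit_ratio
FROM pg_statio_user_tables;
```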

The recommendation is that the data set should fit into memory, which shouldn't be a problem now - our data size is 1.27GB, so at least most of it should fit.
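(For anyone checking their own numbers, the on-disk size of the current database can be read directly from psql; a minimal sketch:)

```sql
-- Human-readable total on-disk size of the current database
SELECT pg_size_pretty(pg_database_size(current_database()));
```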

Is the low cache hit rate due to the data size, or is there something else I can look into? Or is it simply a case of the database cache not being fully warmed up yet? (It's been almost two days.)

Daniel
  • What are the old and current values of *effective_cache_size*? And *work_mem*? – Beryllium May 29 '13 at 08:24
  • Also: are your statistics up to date? (Have you run `VACUUM ANALYZE` after the import/conversion?) – wildplasser May 29 '13 at 09:03
  • @Beryllium effective_cache_size 900000kB/1170000kB, work_mem 50MB/10MB (before/after) – Daniel May 30 '13 at 03:41
  • @wildplasser I thought Heroku has an autovacuum daemon, but good that you brought it up - checked last_vacuum and last_autovacuum and saw that only a few tables had it. Running it now, will see how it goes. Edit: done, cache hit rate is still about the same. – Daniel May 30 '13 at 03:43
  • So it's not the *effective cache size*. Have you tried to increase *work_mem*? – Beryllium May 30 '13 at 07:33
  • I don't think I have access to the conf file, and if I'm not wrong `set work_mem = '50MB';` is only for the current transaction. Is there a way to set it through the psql console? (See the sketch below.) – Daniel Jun 01 '13 at 15:01
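For reference, everything discussed in these comments can be checked or changed from the psql console. A minimal sketch follows; the `mydb` name and the `50MB` value are placeholders, and whether `ALTER DATABASE ... SET` is permitted depends on the privileges the Heroku plan grants:

```sql
-- Current values of the settings discussed above
SHOW effective_cache_size;
SHOW work_mem;

-- When each table was last vacuumed/analyzed, manually or by autovacuum
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;

-- Session-level override: lasts until the connection closes
SET work_mem = '50MB';

-- Database-level override for future sessions (requires sufficient privileges)
ALTER DATABASE mydb SET work_mem = '50MB';
```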

1 Answer


If you have plenty of memory and are not running much else on the database server, one setting you may want to change is shared_buffers. The shared buffers cache frequently used data so that throughput is maximized when not all of the database fits in memory.

Unfortunately this cache does not perform as well as the OS cache. If your data will easily fit in memory, make sure that effective_cache_size is high enough, and then try reducing shared_buffers.
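As a quick way to see where these settings currently stand, they can be inspected from psql; a minimal sketch (note that on a managed Heroku plan, shared_buffers is typically set by the provider, and changing it requires a server restart):

```sql
-- Inspect the memory-related settings discussed in this answer
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size', 'work_mem');
```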

Note that this is not a magic bullet. The appropriate size of shared_buffers depends on how much data you have, how much space it takes up, the kinds of queries you run, and how much memory goes towards things like sorts. You can expect to play around with this setting from time to time to find the sweet spot for your current setup and database.
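One way to guide that tuning is to look at the cache hit ratio per table, so you can see which relations are actually missing the cache; a minimal sketch:

```sql
-- Per-table cache hit ratio for the most heavily accessed tables
SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_hit + heap_blks_read DESC
LIMIT 10;
```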

Chris Travers