
Our Rails app silos client data using the Apartment gem, which creates a schema for each tenant. We are on Postgres 9.4 on Heroku.

Everything worked fine until we got to about 600 tenants, when our automatic PG backups suddenly stopped working. Heroku support informed us that they could not perform backups because, with that many schemas, the total number of tables (somewhere around 30,000) causes pg_dump to hit an out-of-memory (OOM) error. (Our attempts to run pg_dump directly to our local machines also fail.)
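
For reference, that table count is easy to confirm with a quick query (a sketch only; <db_credentials> is the usual placeholder for our connection string):

    # Sanity check: count non-system tables across all tenant schemas.
    psql "<db_credentials>" -At -c "
      SELECT count(*)
      FROM information_schema.tables
      WHERE table_type = 'BASE TABLE'
        AND table_schema NOT IN ('pg_catalog', 'information_schema');"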

Heroku proposed that we back up our data schema by schema, e.g., with a bash script looping through every schema name:

    pg_dump -Fc "<db_credentials>" -n "$schema" > "$schema.dump"
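
Fleshed out, that script would look roughly like this (a sketch; it assumes every non-system schema is a tenant schema and pulls the names from pg_namespace):

    #!/usr/bin/env bash
    # Sketch of the schema-by-schema backup Heroku suggested.
    # <db_credentials> is a placeholder for the full connection string.
    set -euo pipefail

    DB="<db_credentials>"

    # Dump each non-system schema into its own custom-format archive.
    psql "$DB" -At -c \
      "SELECT nspname FROM pg_namespace
       WHERE nspname NOT LIKE 'pg_%' AND nspname <> 'information_schema';" |
    while read -r schema; do
      pg_dump -Fc "$DB" -n "$schema" > "$schema.dump"
    done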

This works, but based on our calculations it will take more than 40 hours just to download all of our data (assuming everything goes right). This is a bit frustrating, because the actual amount of data is peanuts (about 4 GB uncompressed).
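
The arithmetic is straightforward: 40+ hours spread across roughly 600 schemas works out to about four minutes of pg_dump time and connection overhead per schema. A spot check on a single tenant would look something like this ("tenant_0042" is a made-up schema name):

    # Time one tenant's dump to gauge the per-schema cost.
    time pg_dump -Fc "<db_credentials>" -n tenant_0042 > tenant_0042.dump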

We want to consider exporting our database to Amazon RDS, but our company doesn't think that our customers would tolerate the app being down for two days (and that's assuming everything goes right). We are growing, and the potential migration downtime will only get worse, effectively locking us into Heroku.
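
For context, the restore side of such a migration would presumably mirror the dump loop, something like the following (a sketch only; <rds_credentials> is a placeholder for the RDS target's connection string):

    # Hypothetical restore into an RDS Postgres instance, one archive at a time.
    for dump in *.dump; do
      pg_restore --no-owner --dbname "<rds_credentials>" "$dump"
    done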

Questions:

  1. Is there any way we can speed up this process? (FWIW, doing an uncompressed pg_dump doesn't seem to improve performance.)

  2. Is this just standard database migration downtime, and do the devs simply need to tell the company that it has to live with the downtime?
