I use Fabric to upload new versions of my code to a server; it then runs a couple of tasks to make the server serve the new version instead of the old one.

Among those tasks there's also a syncdb, but this means there's a window where the new code version runs against the old database tables (or the old code version runs against the newly synced database).

Question: Should I manually copy the database, run syncdb against the copy, and then swap in the new code version and the new database all at once?

Since this seems like a very common problem, I'm inclined to think there should already be tools or established approaches for it. Does anyone know of any?

(Another concern of mine is that something could go wrong during the deploy, and I'd like to be able to fall back to the previous state without being left with a database that's out of sync with the code.)
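To make this concrete, here is a minimal sketch of the kind of fabfile I have in mind, extended with the "back up, migrate, then swap" idea from the question. The paths, database name, and service name are all placeholders, and note that the window between the migration and the symlink swap still exists; the backup only makes falling back safe:

```python
# Minimal sketch of a "backup, migrate, swap" deploy (Fabric 1.x style).
# APP_ROOT, the pg_dump invocation, and the service name are assumptions,
# not a prescription for any particular project.
from datetime import datetime
from fabric.api import cd, run, task

APP_ROOT = "/srv/app"  # hypothetical layout: releases/<ts> + a "current" symlink


@task
def deploy():
    release = datetime.now().strftime("%Y%m%d%H%M%S")
    release_dir = "%s/releases/%s" % (APP_ROOT, release)

    # 1. Snapshot the database first, so a failed deploy can be rolled back.
    run("mkdir -p %s/backups" % APP_ROOT)
    run("pg_dump mydb > %s/backups/%s.sql" % (APP_ROOT, release))

    # 2. Upload and unpack the new code into its own release directory.
    run("mkdir -p %s" % release_dir)
    # ... put()/unpack the new version here ...

    # 3. Migrate the schema while the old code is still serving traffic.
    with cd(release_dir):
        run("python manage.py syncdb --noinput")

    # 4. Atomically point "current" at the new release; old releases stay
    #    on disk, so rolling back means re-pointing the symlink and
    #    restoring the dump from step 1.
    run("ln -sfn %s %s/current" % (release_dir, APP_ROOT))
    run("service myapp restart")  # hypothetical service name
```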

Rik Poggi

1 Answer


It is difficult to achieve zero downtime when your deployment includes a db schema migration. As an example, the people at etsy.com use an indexed, sharded master-master pair and a custom shard-aware ORM; they don't let the db generate primary keys or timestamps, and they don't enforce foreign keys in the db: http://codeascraft.etsy.com/2012/04/20/two-sides-for-salvation/

This is much easier if you can tolerate some planned downtime for deployment, even 15 minutes (assuming you have scripted your db schema upgrades using a tool like dbdeploy).
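As a rough illustration of what that scripted, planned-downtime deploy can look like with Fabric: the maintenance flag file, paths, and service name below are hypothetical, and South's `migrate` stands in for whichever versioned-migration tool (dbdeploy etc.) you have scripted:

```python
# Sketch of the planned-downtime variant: block traffic, run versioned
# migrations from a known rollback point, swap code, resume.
from fabric.api import cd, run, task


@task
def deploy_with_downtime():
    # An nginx rule that serves a static page while this file exists
    # keeps requests away from the app during the migration.
    run("touch /srv/app/maintenance.flag")

    run("pg_dump mydb > /srv/app/backups/pre_deploy.sql")  # rollback point
    with cd("/srv/app/releases/new"):
        run("python manage.py migrate")  # versioned schema upgrade (South here)
    run("ln -sfn /srv/app/releases/new /srv/app/current")  # swap code

    run("service myapp restart")
    run("rm /srv/app/maintenance.flag")  # resume traffic
```

Because the app is not serving requests while the schema changes, the old-code/new-schema window from the question never occurs; the cost is the few minutes the maintenance page is up.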

ottodidakt
  • That was a really interesting read, thanks for sharing your knowledge! My architecture isn't at that kind of level so I'll have to try to keep the downtime as low as possible. – Rik Poggi Jun 18 '12 at 12:34