
This is a methodology question from a freelancer, with a corollary on MySQL. Is there a way to migrate from an old dedicated server to a new one without losing any data in between - and with no downtime? In the past, I've lost MySQL data between the time the new server goes up (i.e., all files transferred, system up and ready) and the time I take the old server down (data is still being written to the old one until the new one takes over). There is also a short period where both are down while DNS, etc., refreshes.

Is there a way for MySQL/root to easily transfer all data that was updated/inserted between a certain time frame?

Brian Tompsett - 汤莱恩
ina

5 Answers


I'd make a sorry page, put it up on the old server, transfer all the data to the new one, and then switch DNS. There will be some downtime, though.

Your Common Sense
  • Is there a way to put up a "global sorry page" - lots of mysql processes initiated from many different php/other scripts in multiple directories on multiple domains on server... – ina Jun 07 '10 at 11:04
  • I'd make a simple mod_rewrite rule to redirect all requests to the sorry page for all domains – Your Common Sense Jun 07 '10 at 11:12
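The mod_rewrite rule suggested in the comment above could be sketched roughly like this, assuming Apache with mod_rewrite enabled; the file name `/sorry.html` is a placeholder:

```apache
# Server-wide (or per-vhost) maintenance redirect sketch.
# Every request except the sorry page itself is answered with 503,
# which is the conventional status for temporary maintenance.
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/sorry\.html$
RewriteRule ^ - [R=503,L]
ErrorDocument 503 /sorry.html
```

Putting this in the global server config (rather than per-domain `.htaccess` files) covers all domains on the box at once, which addresses the "many scripts in multiple directories on multiple domains" concern.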
1

The longest downtime comes from the DNS switch - it can take several hours, or even days, until all clients' caches expire.

To avoid it:

  1. Set up the application on the new server to access the DB on the old one, or just proxy HTTP requests with nginx to the old one, depending on what is more acceptable.
  2. Then do the DNS switch. Some clients will go to the old server, some to the new; here you can wait 24+ hours to make sure all requests go to the new server.
  3. While DNS switches, rehearse the MySQL transition:
    • Make a 'sorry/maintenance page'; there are plenty of guides on how to do that using rewrites. You'll need it anyway.
    • Measure how fast you can dump-transfer-restore the DB. If the time is acceptable, this is the simplest option, but remember to leave some margin.
    • If that is too slow, you can try the binlog method suggested in the previous answer.
    • Minimal downtime can be achieved by making the new server a MySQL slave of the old one. Under the hood it just downloads the binlog from the master on the fly, so you save the time of transferring the whole log. Most probably, during minimal load the slave will be only a few seconds behind the master and will catch up very quickly once the app is taken down; see how to force a slave to catch up.
  4. Write a script that does the whole transition for you: enables maintenance mode, locks the master DB, waits until the slave catches up, makes the slave the new master, replaces the app config with the new DB, disables maintenance, switches the app, etc. This way you save the time of typing commands yourself. Test it on a staging environment to avoid possible errors (and remember to set a larger MySQL timeout, just in case the slave is far behind).
  5. Then comes the transition itself, by running the script from the previous step.
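The cutover script from step 4 could look roughly like the sketch below. It assumes the new server is already replicating from the old one, that the maintenance page is triggered by a flag file, and that all hostnames, credentials, and paths are placeholders:

```shell
#!/bin/sh
# Cutover sketch (hostnames, credentials and paths are hypothetical).
set -e

# 1. Enable maintenance mode on the old server
#    (assumes the rewrite rule checks for this flag file).
ssh old-server 'touch /var/www/maintenance.flag'

# 2. Stop writes on the old master so the slave can fully catch up.
#    read_only is a global setting, so it survives this session ending.
mysql -h old-server -u root -p"$PW" -e "SET GLOBAL read_only = ON;"

# 3. Wait until the slave has applied everything from the master.
until [ "$(mysql -h new-server -u root -p"$PW" -e 'SHOW SLAVE STATUS\G' \
        | awk '/Seconds_Behind_Master/ {print $2}')" = "0" ]; do
  sleep 1
done

# 4. Promote the slave to a standalone, writable server.
mysql -h new-server -u root -p"$PW" -e "STOP SLAVE; SET GLOBAL read_only = OFF;"

# 5. Point the app at the new DB and leave maintenance mode.
ssh new-server 'cp /var/www/app/config.live.php /var/www/app/config.php \
                && rm -f /var/www/maintenance.flag'
```

This is only an outline of the sequence described above, not a drop-in script; rehearse it on staging, as the answer recommends.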

Also, if you store file uploads on the local filesystem, these need to be synced too - and with lots of files this is more painful than the DB, because even rsync's scan for changes can take a long time.
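One way to shrink that cost is to pre-sync while the old server is still live, then run the same command again during the maintenance window so only the delta transfers. A sketch, with placeholder paths and hostnames:

```shell
# First pass, while the old server is still serving traffic:
# copies the bulk of the files ahead of time.
rsync -a --delete /var/www/uploads/ new-server:/var/www/uploads/

# Final pass, during the maintenance window: only files changed
# since the first pass are transferred, so this one is short.
rsync -a --delete /var/www/uploads/ new-server:/var/www/uploads/
```

`--delete` keeps removals in sync as well; drop it if files are never deleted on the old server.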

Vasfed

What I like to do is close the site and start moving the DB to the other server, then move all the files (PHP, etc.) to the other server (important if you store data or files change every hour, like image uploads), and point the old server to the new DB server while the DNS change propagates everyone to the new server.

Qatari

Check out the MySQL binary log.

Emil Vikström

Sure. Enable binary logging on the source server. Once it is running, make a DB dump, transfer it to the new server, and import it. Then, when you're ready to make the switch, change DNS (let the change propagate while you're working), then take the site down on both servers. Copy the binlogs to the new server and replay them, starting at the date/time (or binlog position) of the dump.
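The sequence above could be sketched like this - assuming binary logging is already enabled in `my.cnf`, and with the binlog file name and position as placeholders you'd read from the dump:

```shell
# On the old server: take a dump that records the binlog position
# it corresponds to (--master-data=2 writes it into the dump as a
# commented-out CHANGE MASTER TO line).
mysqldump -u root -p --all-databases --single-transaction \
  --master-data=2 > dump.sql

# Load the dump on the new server.
mysql -h new-server -u root -p < dump.sql

# After taking the site down: replay everything written since the dump.
# The file name and --start-position come from the CHANGE MASTER TO
# line inside dump.sql (values below are placeholders).
mysqlbinlog --start-position=107 /var/log/mysql/mysql-bin.000042 \
  | mysql -h new-server -u root -p
```

Using the recorded position rather than a date/time avoids replaying (or missing) statements at the boundary of the dump.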

Micah Yoder
  • Problem with this is that my MySQL DBs are ~2GB so far, so there would be at least half an hour of downtime while transferring the DB. Is there a way to select/dump just the entries that have been modified across all DBs/tables? – ina Jun 11 '10 at 11:05