
I am considering switching to a new hosting provider and would like to know whether it is possible to migrate the database (MySQL 5.6) without huge downtime (i.e. the time it takes to copy tens of GB of DB files from one host to the other).

So, is it possible to configure the current live MySQL DB as a master and the new machine as a slave, in a mode where the master does not wait for the slave to confirm new inserts or updates, and the slave slowly syncs itself (i.e. without consuming too many resources on the master machine)? It could take up to a whole day, or maybe a couple of days, to fully sync (a full-speed file copy would take about 4 hours).

Meanwhile I would set up the web server and other services on the new machine, then just switch DNS, promote the slave to a standalone master, and disconnect the old machine. I expect (and am OK with the fact) that some data will be lost during the actual DNS switch: some clients with the old DNS record will still hit the old server, and those changes will not replicate to the new machine. For most visitors, though, this window would be 15 minutes or so.

So, is something like this possible and reasonably easy to do? The alternative is about 4 hours of downtime: copy all the files to the new server and just start it. I am not very happy with such a long outage.

I do not mind restarting the DB service/daemon a couple of times during this process in order to switch it to a new configuration.

I do not want to do this migration using dumps, where I would have to resync tables manually.

Wapac

3 Answers

1

You can get away with the snapshot-dump approach:

Enable log-bin and take a mysqldump --master-data=1 --all-databases on the master – this saves the binlog coordinates into the dump. Depending on your table engines (only InnoDB can use --single-transaction), this will read-lock the DB for at most the duration of the dump. No further action is really needed on the master.
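The dump step might look roughly like this (user, password, and file name are placeholders; --single-transaction only avoids the read lock for InnoDB tables):

```shell
# On the master, make sure binary logging is enabled in my.cnf
# (a restart is required after adding these lines):
#   [mysqld]
#   log-bin   = mysql-bin
#   server-id = 1

# Dump everything and record the binlog coordinates in the dump file.
mysqldump --master-data=1 --single-transaction --all-databases \
    -u root -p > full-dump.sql
```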

Fill the slave with the dump; after CHANGE MASTER TO ...; START SLAVE; it should start catching up with the running master. Once it is in sync, you can make the switch.
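Assuming the dump has been copied to the new machine, the slave side could be set up along these lines (hostname and credentials are placeholders; the binlog file and position are already inside the dump thanks to --master-data=1):

```shell
# On the slave (its server-id in my.cnf must differ from the master's):
mysql -u root -p < full-dump.sql        # load the snapshot

# Point the slave at the master; the dump already contains the
# MASTER_LOG_FILE / MASTER_LOG_POS coordinates, so only the
# connection details are needed here:
mysql -u root -p <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='old-server.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret';
START SLAVE;
SQL

# Check progress; Seconds_Behind_Master should drop towards 0:
mysql -u root -p -e "SHOW SLAVE STATUS\G"
```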

You can also set up an iptables DNAT port redirection to the slave, to avoid DNS-based delays.
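A minimal sketch of such a redirection, assuming the new server is reachable at 10.0.0.2 (a placeholder IP):

```shell
# On the old server: rewrite incoming MySQL connections to the new host.
iptables -t nat -A PREROUTING -p tcp --dport 3306 \
    -j DNAT --to-destination 10.0.0.2:3306
# Masquerade so the reply packets flow back through the old server:
iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.2 --dport 3306 -j MASQUERADE
```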

rogerovo
  • So you say 1st thing to do is to enable log-bin and 2nd thing to do is to use dump with --master-data flag set to 1. Can I use --quick and --lock-tables=false with it? And will this way of dumping and then replication respect deletes too? I mean will the deleted rows be deleted from the new machine during the replication? – Wapac Jul 23 '15 at 16:54
  • you can use --quick, that doesn't affect the DB; it only changes how mysqldump buffers rows on output. --lock-tables=false – no: --master-data forces a read lock on all tables, otherwise you won't get a dump consistent with the binary log. Once replication is started, all changes (including deletes) will be replicated. – rogerovo Jul 23 '15 at 20:58
  • Thanks for clarification. One more question here - if there is a read-lock on all tables, does it limit the online system in any way? I mean the server is going to be online during the whole dumping process, no downtime. Will such a locked DB work with no problems while this lock is there? I.e. would the application using DB notice anything? – Wapac Jul 24 '15 at 07:14
  • the DB will delay updates, depending on the application it might just hang and wait, timeout, or fail right away. so yes, there probably will be disruptions. The problem is, you have to have a fixed point in time for ALL the data. otherwise you can't guarantee they will be in sync – rogerovo Jul 24 '15 at 09:38
  • I understand that, but this is why the dump is useless in this scenario, because we need no long downtime. Does that leave us with no dump and replication setup only? I.e. start to replicate with empty new machine DB. – Wapac Jul 24 '15 at 09:57
  • after the dump is made, the master will resume normal operation, so no further downtime. the slave will start with the state of the dump and catch up all changes made on the master. after you shut the master down and verify the slave is up-to-date = exact state of the master, you can enable writing/working with the slave/new master. – rogerovo Jul 24 '15 at 10:18
  • yes, but the problem is with the dump itself, which can take hours – that was the primary concern of my question: how to move the entire database slowly, without a long downtime – Wapac Jul 24 '15 at 12:06
  • from a server perspective you can't. you must solve that in the application then, which knows what's going on and can safely manage/copy the data. – rogerovo Jul 24 '15 at 12:33
  • `--single-transaction` is safe to use even in the presence of MyISAM. It will transparently downgrade to `--lock-tables` as needed, and do `--single-transaction` for the InnoDB ones. Though, honestly, if you're running MyISAM, you should take this opportunity to switch to InnoDB, anyway. One caveat with all of this: note that the `mysql` database has some funny bits in it, in particular things like permissions. Be careful that you don't lock yourself out in the course of importing the old host's data! You may also have some trouble if you're switching versions (e.g. 5.0 to 5.6). – BMDan Aug 18 '15 at 01:29
0

Sure you can! The only things to be aware of are that your MySQL servers have to 'see' each other, and that your network bandwidth must be enough to replicate all your changes. Install percona-xtrabackup, take a backup from your currently running MySQL instance, copy all the files into the other hosting's MySQL datadir, bring the instance up, set up replication (CHANGE MASTER TO ...) and start the slave. The slave will replicate all the missing data. At the switchover you should expect a brief downtime: make the master read-only (SET GLOBAL read_only = 1), and once there are no more writes, point your application at the slave. When the new writes appear on the slave, stop replication (STOP SLAVE; RESET SLAVE;) and you can shut down the old MySQL instance.
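The backup/restore part of this could look roughly as follows (paths, hostname, and credentials are placeholders, and exact flags depend on your xtrabackup version):

```shell
# On the old host: take a hot backup while MySQL keeps running
# (InnoDB data is copied without blocking writes).
xtrabackup --backup  --target-dir=/backup/full --user=root --password=secret
xtrabackup --prepare --target-dir=/backup/full

# Ship the prepared backup into the new host's (empty) datadir,
# fix ownership, and start mysqld there:
rsync -a /backup/full/ newhost:/var/lib/mysql/
ssh newhost 'chown -R mysql:mysql /var/lib/mysql && service mysql start'
```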

banyek
-1

For migration purposes I would not use replication.

I have done this before via:

  • use rsync to copy /var/lib/mysql/ from the current system to the new one; the current system can continue operation as usual. Do this a day or so before the planned migration.
  • just before the planned migration, do the rsync again
  • now bring the system down (stop the database server), and run the rsync again. This should not take much time due to the relatively low number of updates to be made
  • start the new MySQL server and begin to use it.
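The steps above can be sketched as follows (the host name and paths are placeholders):

```shell
# A day or so before the migration: bulk copy while MySQL keeps running.
rsync -a /var/lib/mysql/ newhost:/var/lib/mysql/

# Just before the migration: a much faster incremental pass.
rsync -a --delete /var/lib/mysql/ newhost:/var/lib/mysql/

# Final pass with the database stopped, so the files are consistent:
service mysql stop
rsync -a --delete /var/lib/mysql/ newhost:/var/lib/mysql/
# Then start MySQL on the new host and point clients at it.
```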

Of course the client application must start using the new server, so keep DNS TTL (caching) in mind if you use DNS names; otherwise make sure that the right IP address is used. But that's outside the scope of this question.

wurtel
  • rsyncing database files is asking for trouble – rogerovo Jul 23 '15 at 10:37
  • I've done it dozens of times with databases of sizes up to 100GB, it's much faster than dump / restore and if the source's server is stopped (or has a READ LOCK) then there is no problem at all. If you have other experiences please elaborate; probably you just didn't know exactly what you were doing. – wurtel Jul 24 '15 at 11:44
  • Actually xtrabackup is much better, and you don't need to stop the MySQL server for it. – Navern Aug 17 '15 at 21:21