15

I maintain a large MySQL database. I need to back it up every night, but the DB is active all the time; there are queries from users. Right now I just disable the website and then do a backup, but this is very bad because the service is down and users don't like it.

What is a good way to back up the data if it is changing during the backup?

What is the best practice for this?

Roman Gelembjuk

4 Answers

19

I've implemented this scheme using a read-only replication slave of my database server.

MySQL database replication is pretty easy to set up and monitor. You can set up a slave that receives all changes made to your production database, then take it offline nightly to make a backup.

The Replication Slave server can be brought up as read-only to ensure that no changes can be made to it directly.

There are other ways of doing this that don't require the replication slave, but in my experience that was a pretty solid way of solving this problem.

Here's a link to the docs on MySQL Replication.
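
For illustration, here is a minimal sketch of what a nightly backup run against such a slave could look like, assuming a slave host called db-slave (kept read-only with read_only=1 in its my.cnf), a dedicated backup user, and the standard mysql/mysqldump clients; the host name, credentials, and paths are placeholders, not something from the answer above.

    #!/bin/bash
    # Hypothetical nightly backup taken from a read-only replication slave.
    BACKUP_FILE=/backups/nightly-$(date +%F).sql.gz

    # Pause replication so the slave's data stops changing during the dump
    mysql -h db-slave -u backup -p"$BACKUP_PWD" -e "STOP SLAVE;"

    # Dump everything from the slave; the master keeps serving users meanwhile
    mysqldump -h db-slave -u backup -p"$BACKUP_PWD" \
      --all-databases --routines --events --quick | gzip > "$BACKUP_FILE"

    # Resume replication; the slave catches up on what the master did in the meantime
    mysql -h db-slave -u backup -p"$BACKUP_PWD" -e "START SLAVE;"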

Kevin Bedell

  • +1 Backing up from a replicated database is the standard way. – Marcus Adams May 08 '12 at 12:41
  • Interesting. What happens when you bring the slave back online after a backup? Does MySQL start copying the latest changes to it? Does that put a big load on the server? – Roman Gelembjuk May 08 '12 at 13:08
  • Replication is handled by a single thread on the master server, sending transactions one at a time to the slave, where they are then applied. Once you bring the slave back online it catches up in a way that doesn't put a significant load on the master; it uses a single database connection. Load on the master is not an issue. I did this on a large database with almost 1 TB of data in it. – Kevin Bedell May 08 '12 at 13:15
7

If you have a really large (50 GB+, like mine) MySQL database with only MyISAM tables, you can use locks and rsync. According to the MySQL documentation you can safely copy the raw files while a read lock is active, but you cannot do this with InnoDB. So if the goal is zero downtime and you have extra disk space, create a script:

rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync

Then do the following:

  1. Do FLUSH TABLES
  2. Run the script
  3. Do FLUSH TABLES WITH READ LOCK;
  4. Run the script again
  5. Do UNLOCK TABLES;

On the first run rsync will copy a lot without stopping MySQL. The second run will be very short; it only delays write queries, so it is a real zero-downtime solution.

Finally, do another rsync from /tmp/mysql/sync to a remote server, compress, keep incremental versions, anything you like.
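
A consolidated sketch of the steps above might look like the following. The paths and credentials are placeholders, and it relies on the mysql client's SYSTEM command (Unix only) to run the second rsync inside the same session so the read lock stays held; if your client does not support that non-interactively, keep the lock open in a separate session instead.

    #!/bin/bash
    # Sketch of the lock-and-rsync procedure described above; placeholder paths/credentials.
    SRC=/var/lib/mysql/
    DST=/tmp/mysql/sync

    # Steps 1-2: flush tables, then do the long first copy while MySQL stays online
    mysql -u root -p"$MYSQL_PWD" -e "FLUSH TABLES;"
    rsync -aP --delete "$SRC" "$DST"

    # Steps 3-5: take the global read lock, copy the small delta, release the lock.
    # SYSTEM runs the shell command from within the same mysql session, so the
    # connection (and therefore the read lock) stays open during the second copy.
    mysql -u root -p"$MYSQL_PWD" <<'SQL'
    FLUSH TABLES WITH READ LOCK;
    SYSTEM rsync -aP --delete /var/lib/mysql/ /tmp/mysql/sync
    UNLOCK TABLES;
    SQL
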
sekrett

  • Interesting solution. Can you confirm that restoring from these backups works? (Have you had to use your backups to recover in an emergency? Did you encounter any problems or surprises during restoration?) – rinogo Dec 28 '16 at 00:22
  • This is not a backup but an exact raw copy of the MySQL data, so no surprises. I used this method when migrating from one server to another. – sekrett Dec 28 '16 at 14:59
2

This partly depends on whether you use InnoDB or MyISAM. For InnoDB, MySQL has its own paid solution for this (InnoDB Hot Backup), but there is an open-source alternative from Percona you may want to look at:

http://www.percona.com/doc/percona-xtrabackup/
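
As a rough illustration only (option names can differ between XtraBackup versions, and the user, password, and target directory here are placeholders), a hot backup of a running InnoDB server with xtrabackup typically looks something like this:

    # Hypothetical invocation; adjust credentials and paths for your setup
    TARGET=/data/backups/$(date +%F)

    # Copy the InnoDB data files and redo log while MySQL keeps running
    xtrabackup --backup --user=backupuser --password="$BACKUP_PWD" --target-dir="$TARGET"

    # Apply the redo log so the copied files are consistent and restorable
    xtrabackup --prepare --target-dir="$TARGET"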

Christopher Padfield
0

What you want to do is called "online backup". Here's a pointer to a matrix of possible options with more information:

http://www.zmanda.com/blogs/?p=19

It essentially boils down to the storage backend that you are using and how much hardware you have available.

Mario