2

A scheduled backup using mysqldump on one of our databases is causing "Too many connections" errors. The database contains both InnoDB and MyISAM tables and is around 500 MB in size. The "Too many connections" errors last for about 2-3 minutes.

We understand that mysqldump locks the tables, which causes all other queries and connections to pile up and jam the MySQL server.

We need frequent backups and we cannot afford server downtime or putting the websites in maintenance mode while doing them. Our websites are global and traffic is high all the time, so it's hard to find a quiet moment for backups.

How can we avoid downtime during backups?
Is there maybe a way to use mysqldump so that it does not lock all tables at the same time?
Is there an alternative to backing up with mysqldump?

vbachev
  • 2-3 minutes for 500MB sounds like you are dumping directly over the network. You might get a shorter lock time if you dump to your local disk(s) and then transfer over the network after the tables are unlocked. You might also get *worse* lock times if you are using slow disks or the contention causes lots of little delays. @faker's suggestion of XtraBackup will only lock the MyISAM tables, which will also help. – Ladadadada Jun 13 '12 at 13:07
  • And XtraBackup will only get a write lock on MyISAM tables instead of a read lock like mysqldump uses (IIRC) – faker Jun 13 '12 at 13:18

4 Answers

5

1) Set up a second server, replicate your DB to it, and back up from there
This might not be a good solution if you have a lot of changes on your master server, since replication on the slave is single-threaded.

2) Use Percona XtraBackup: http://www.percona.com/doc/percona-xtrabackup/ (a minimal sketch follows below)

Or, as always, consult the official documentation, which outlines other alternatives:
http://dev.mysql.com/doc/refman/5.1/en/backup-methods.html
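
For illustration, a minimal sketch of both options (assuming a replica reachable at replica.example.com and a Percona XtraBackup 2.4-style command line; all host names, paths and credentials are placeholders):

    # Option 1: dump from the replica so the primary never sees the dump at all.
    # Pausing the replication SQL thread for the duration gives a snapshot that is
    # consistent even for the MyISAM tables.
    mysql --host=replica.example.com -e "STOP SLAVE SQL_THREAD;"
    mysqldump --host=replica.example.com --single-transaction --all-databases \
        > /backup/all-databases.sql
    mysql --host=replica.example.com -e "START SLAVE SQL_THREAD;"

    # Option 2: Percona XtraBackup on the primary (hot backup; InnoDB data is copied
    # without locks, only non-InnoDB tables need a brief lock near the end).
    xtrabackup --backup  --target-dir=/backup/xtrabackup/
    xtrabackup --prepare --target-dir=/backup/xtrabackup/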

faker
  • Thank you, faker! We will research and test xtrabackup as you suggested. A second server would not give us the best results since the data is very dynamic and changes often. – vbachev Jun 14 '12 at 10:06
3

First, try to back up table by table or DB by DB - do not lock everything at once.
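
For example, a minimal per-database sketch (assuming credentials live in ~/.my.cnf and /backup is the target directory; both are placeholders):

    # Dump database by database so each lock is short, instead of locking everything at once.
    for db in $(mysql -N -e "SHOW DATABASES;" \
                | grep -Ev '^(information_schema|performance_schema|mysql)$'); do
        mysqldump "$db" > "/backup/${db}.sql"
    done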

Increase max_connections, or otherwise try per-user connection limits: http://dev.mysql.com/doc/refman/5.5/en/user-resources.html

I'm not sure whether an increased max_connections will take too much memory.
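
For reference, a hedged sketch of both knobs (the numbers are arbitrary examples and the 'backup' account is a placeholder):

    # Raise the global connection ceiling at runtime
    # (also set it in my.cnf so the change survives a restart).
    mysql -e "SET GLOBAL max_connections = 500;"

    # Or cap only the backup account so it cannot crowd out the web applications
    # (per-account resource limit, as described on the manual page linked above).
    mysql -e "GRANT USAGE ON *.* TO 'backup'@'localhost' WITH MAX_USER_CONNECTIONS 2;"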

If you are using MyISAM, there is no other way to back up without locking. With InnoDB or XtraDB there is no problem - see InnoDB Hot Backup: http://www.innodb.com/doc/hot_backup/manual.html

GioMac
  • Xtrabackup, mysqlhotcopy, MySQL Enterprise Backup and file system snapshots are all possible while using MyISAM – faker Jun 13 '12 at 16:10
  • Thank you, GioMac, we did some experiments with max_connections and it seems it's a good solution, although maybe just a temporary one. DB by DB and XtraBackup are our next candidates. – vbachev Jun 14 '12 at 10:12
  • `SET GLOBAL max_connections = 2500;` seems to have helped in my case. – Íhor Mé Jul 22 '19 at 11:38
0

Is there an alternative to backing up with mysqldump?

How do you plan on using the backups? If you want quick and cheap incremental backups, I'd just use rsync (or rsnapshot, which uses rsync to keep multiple snapshots without occupying too much extra disk space) to back up /var/lib/mysql (or wherever the data files may be).

I am assuming you won't really use them to replicate, parse or otherwise interpret the data, but that in case your server(s) fail you want to do (essentially) a full-disk restore. This also makes such a restore incredibly fast (but rigid - you restore everything or nothing) since it's a standard copy operation. Ideally you would use this as often as you want, alongside another backup mechanism (say a mysqldump at an off-peak time) to ensure you're never without an up-to-date parsable dump.
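
A minimal sketch of that kind of copy (assuming the datadir is /var/lib/mysql and /backup/mysql-datadir/ is a local target; both paths are placeholders):

    # Copying the datadir while mysqld is writing can produce an inconsistent backup
    # (see the comments below) - run this against a stopped server, a locked and
    # quiesced one, or a filesystem/LVM snapshot.
    rsync -a --delete /var/lib/mysql/ /backup/mysql-datadir/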

Jay
  • If MySQL is running while you rsync the files, you might end up with a corrupted backup – faker Jun 13 '12 at 11:49
  • A snapshot-aware filesystem is a much better way to manage this sort of backup. You can lock all tables, create the snapshot and unlock in a few seconds. – Ladadadada Jun 13 '12 at 13:00
0

Your mix of MyISAM and InnoDB tables complicates matters somewhat. If you can convert the MyISAM tables to InnoDB, you could run mysqldump inside a transaction; that way you get a consistent backup without needing a table or database lock (see the sketch after the list of caveats below).

Problems may be:

  • Performance: InnoDB could be slower than MyISAM for your application
  • Features: if you use features that are only available with MyISAM
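
A minimal sketch of the conversion and the transactional dump (database and table names are placeholders):

    # Convert a MyISAM table to InnoDB (per table, after weighing the caveats above).
    mysql -e "ALTER TABLE mydb.mytable ENGINE=InnoDB;"

    # With only InnoDB tables, mysqldump can take a consistent snapshot inside a single
    # transaction instead of locking the tables.
    mysqldump --single-transaction mydb > /backup/mydb.sql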