How can I back up a large database in MySQL? The database's folder on disk is more than 5 GB and it contains 150,000 (1.5 lakh) tables.
I have tried to back up the server through phpMyAdmin, but with no luck.
At a 5 GB data size, you almost certainly cannot, and should not, use a logical dump (mysqldump, phpMyAdmin, etc.) as your backup. The problem isn't the backup itself; it's the restore, which can take a very long time, potentially many days, depending on the table structure and your server hardware. You need some type of file-level backup. Whether you use an LVM snapshot, Percona XtraBackup, rsync, or something else depends on your storage engine, hardware, filesystem, and a number of other factors. This is much too complex a question to answer without a lot more detail, but I can simplify as follows:
Check out the MySQL Binary Log. It can be used to set up incremental backups. You can also use mysqldump, but if the database is too big, a full backup will probably require some downtime.
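A minimal sketch of that full-plus-incremental pattern, assuming binary logging is enabled and the paths shown (binlog location, backup directory) are placeholders for your own setup:

```shell
# Assumes my.cnf has, under [mysqld]:  log-bin = /var/log/mysql/mysql-bin
# Paths and schedule below are hypothetical examples, not fixed names.

# Full backup: --flush-logs rotates the binlog so incrementals start
# cleanly after this dump; --master-data=2 records the binlog position.
mysqldump --single-transaction --flush-logs --master-data=2 \
    --all-databases | gzip > /backup/full-$(date +%F).sql.gz

# Incremental backup: export only the binlogs written since the full dump.
mysqlbinlog /var/log/mysql/mysql-bin.0* > /backup/incremental-$(date +%F).sql
```

The incremental file replays with `mysql < incremental.sql` on top of a restored full dump. Note `--single-transaction` only gives a consistent dump for InnoDB tables.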
For large, active databases, it can be worth creating a replication slave that can be taken offline for full backups.
Backups can then be done by stopping replication and running mysqldump, or by stopping mysqld and copying the physical data folders to a backup location: http://www.howtoforge.com/back_up_mysql_dbs_without_interruptions
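The "stop replication, dump, resume" approach might look like the following sketch, run on the slave (the backup path is a hypothetical example):

```shell
# Pause the SQL thread so the slave's data stops changing; the I/O
# thread keeps fetching relay logs, so nothing is lost while we dump.
mysql -e "STOP SLAVE SQL_THREAD;"

# Take the full dump from the now-quiescent slave.
mysqldump --single-transaction --all-databases \
    | gzip > /backup/slave-dump-$(date +%F).sql.gz

# Resume applying replicated changes.
mysql -e "START SLAVE SQL_THREAD;"
```

Because the dump runs against the slave, the master serves traffic uninterrupted the whole time.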
Either way, it is best to do this directly rather than through phpMyAdmin, since the dump file is going to be too big for your browser to handle effectively.
For more reading, see the backup-methods page in the MySQL manual: http://dev.mysql.com/doc/refman/5.1/en/backup-methods.html
We use Percona XtraBackup. It does non-blocking backups of large databases, it's reasonably fast, and it has been very reliable. It creates a usable copy of the database. We back up to a separate partition; the data can be copied from there to the data partition, or we can swap the partitions and use the backup directly. And it's free.
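A basic XtraBackup run is a two-step process; this sketch assumes the `percona-xtrabackup` package is installed and uses a hypothetical target directory:

```shell
# Step 1: hot-copy the data files while the server keeps serving
# queries (non-blocking for InnoDB).
xtrabackup --backup --target-dir=/backup/base

# Step 2: apply the InnoDB redo log captured during the copy, so the
# backup is consistent and can be used as a datadir after restore.
xtrabackup --prepare --target-dir=/backup/base
```

Older XtraBackup versions wrapped these steps in the `innobackupex` script, but the backup/prepare split is the same idea.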
One of my systems has data that changes in a way where it doesn't make sense to use the binlogs as backups. The ibdata file there is 131GB. For that, I use the following:
/usr/bin/mysqldump -u "$1" -p"$MYSQLPASSWORD" -f --opt --single-transaction --all-databases | lzma -3
Bacula then picks up the resulting compressed file and writes it to the system backup, same as any other file.
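Before trusting a pipeline like that with a 131 GB dump, it's worth sanity-checking the lzma round trip on a tiny sample (the sample SQL and temp paths here are made up):

```shell
# Write a trivial stand-in for a dump file.
printf 'CREATE TABLE t (id INT);\n' > /tmp/sample.sql

# Compress the way the backup does, then decompress.
lzma -3 < /tmp/sample.sql > /tmp/sample.sql.lzma
lzma -d < /tmp/sample.sql.lzma > /tmp/roundtrip.sql

# Byte-for-byte comparison: exits 0 only if the round trip is lossless.
cmp /tmp/sample.sql /tmp/roundtrip.sql && echo "round trip OK"
```

Restoring the real backup is the same pipe in reverse: `lzma -d < dump.sql.lzma | mysql`.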
Note that when you say "folder size", you imply that you're not using InnoDB. That might make a consistent snapshot more of a challenge.
Some other common methods include full snapshot plus binlog incrementals, filesystem snapshots, and spare slave servers. There are also the Percona tools, which allow hot backup reads directly from InnoDB files. Which solution works best for you depends a lot on your system load.
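For the filesystem-snapshot option, an LVM-based run might look like this sketch (volume, mount, and backup paths are hypothetical; `mylvmbackup` is an existing tool that automates the whole sequence):

```shell
# Caveat: FLUSH TABLES WITH READ LOCK only holds while the issuing
# client session stays open, so a real script keeps one connection
# alive across the lvcreate step -- this is what mylvmbackup does.

# Create a copy-on-write snapshot of the MySQL data volume.
lvcreate --size 5G --snapshot --name mysql-snap /dev/vg0/mysql-data

# Mount the frozen view and copy it off-box or to backup storage.
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ /backup/mysql/

# Tear the snapshot down; it only needs to live as long as the copy.
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap
```

The snapshot itself is near-instant, so the lock window is short; the rsync then runs against the unchanging snapshot while the live server carries on.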