
I have several InnoDB databases that all share the same ibdata file, which is around 250 GB. One of those databases is around 100 GB. My manager asked me to move this database to another server and to switch it to innodb_file_per_table.
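For reference, this is the setting I plan to enable on the new server (a minimal my.cnf sketch; note that existing tables only move out of the shared ibdata file once they are rebuilt):

    [mysqld]
    # One .ibd tablespace file per InnoDB table. Tables created before
    # this was enabled stay in ibdata until rebuilt (e.g. via
    # ALTER TABLE ... ENGINE=InnoDB or a dump/reload).
    innodb_file_per_table = 1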
I can't leave the system offline for very long, so I need your advice, please: which of these is faster?

  • mysqldump -uusername -pmypassword --extended-insert --disable-keys --quick MyDb > MyDb.sql, and then on the second server mysql -uusername -pmypassword MyDb < MyDb.sql
  • OR copy the ibdata file to the other server and run the following for every table (see the shell sketch after this list):
    1. ALTER TABLE table_name DISABLE KEYS;
    2. ALTER TABLE table_name ENGINE=MyISAM;
    3. ALTER TABLE table_name ENGINE=InnoDB;
    4. ALTER TABLE table_name ENABLE KEYS;
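A minimal shell sketch of that loop (credentials and the database name are the placeholders from above; note that DISABLE/ENABLE KEYS is effectively a no-op on InnoDB tables, so the two engine conversions do all the real work):

    # Rebuild every table in MyDb via MyISAM and back to InnoDB.
    for t in $(mysql -uusername -pmypassword -N -B -e "SHOW TABLES" MyDb); do
        mysql -uusername -pmypassword MyDb -e "
            ALTER TABLE \`$t\` DISABLE KEYS;
            ALTER TABLE \`$t\` ENGINE=MyISAM;
            ALTER TABLE \`$t\` ENGINE=InnoDB;
            ALTER TABLE \`$t\` ENABLE KEYS;"
    done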

If you have a better idea, please share it with us!
Thank you for your help.

Alaa Alomari

2 Answers


Copying ibdata files around is fraught with danger. Important bits of metadata are stored in other files such as ib_logfile0 and ib_logfile1 (and this is not a comprehensive list), and the settings on the new server (such as the size of those two log files) need to match the old server exactly. If you get any of this wrong or miss copying one of the files, the new server will not start and you will be left deciphering MySQL's error messages in the error log.
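For example, if you do copy the raw files, a setting like this has to be identical on both servers (the value here is only an illustration):

    [mysqld]
    # Must match the size of the copied ib_logfile0/ib_logfile1 files,
    # or mysqld will refuse to start. 256M is just an example value.
    innodb_log_file_size = 256M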

Xtrabackup handles all of this for you. After you use it to dump your data and "prepare" it, you can do a straight file copy to the new server and start it up with the new ibdata file. The "prepare" step takes significant time.
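A rough sketch of that workflow using Percona XtraBackup's innobackupex wrapper (paths, credentials, and the timestamped directory name are placeholders; the exact invocation varies between versions, so check the documentation for yours):

    # Take a raw backup of the running server into /backups/.
    innobackupex --user=username --password=mypassword /backups/

    # "Prepare" it: apply the transaction log so the files are consistent.
    innobackupex --apply-log /backups/2011-09-26_12-00-00/

    # Copy the prepared files to the new server's (empty) datadir.
    rsync -av /backups/2011-09-26_12-00-00/ server2:/var/lib/mysql/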

However, since you mentioned that the database you are moving is only 100GB of the 250GB ibdata file, I suspect it will be quicker using the mysqldump method simply because it only has to transfer 100GB of data instead of 250GB.

You can pipe the output of mysqldump straight into mysql to avoid having to save the dump to disk like this:

mysqldump -uusername -pmypassword MyDB | mysql -h server2 -uusername -pmypassword MyDB
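If locking the source tables during the dump is a concern, a variant worth considering (assuming the tables really are all InnoDB) is:

    # --single-transaction takes a consistent InnoDB snapshot without
    # holding table locks for the duration of the dump.
    mysqldump -uusername -pmypassword --single-transaction MyDB |
        mysql -h server2 -uusername -pmypassword MyDB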

The mysql client has a -C option that enables compression if both ends support it; just put it before the -h option. The transfer over the network is likely to be the slowest part of the whole operation. If your server doesn't support compression, you could do the transfer over ssh with its -C option instead:

mysqldump -uusername -pmypassword MyDB | ssh -C server2 "mysql -uusername -pmypassword MyDB"
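One detail either way: the target database has to exist on server2 before the import, so create it first if it isn't there already:

    # Create the (empty) target database on the new server.
    mysqladmin -h server2 -uusername -pmypassword create MyDB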

Last thought: the time it has taken me to write this is probably longer than both methods put together, unless you have a 10MB/s network. Get it started. :-)

Ladadadada

Copying the physical data files is always faster than exporting and re-importing the actual data; the former only requires disk I/O, while the latter also requires every row to be processed by the database engine.

Also, the exported data (in text format) is usually a lot bigger than the binary data files, and transferring it to the new server will take more time (and bandwidth, if you're doing that over the network).
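A quick way to sanity-check that trade-off is to measure the database's actual on-disk size (a sketch; figures from information_schema are approximate):

    # Approximate size in GB of the database being moved, to compare
    # against the expected size of a mysqldump of the same data.
    mysql -uusername -pmypassword -e "
        SELECT table_schema,
               ROUND(SUM(data_length + index_length)/1024/1024/1024, 1) AS gb
        FROM information_schema.TABLES
        WHERE table_schema = 'MyDb'
        GROUP BY table_schema;"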

Massimo
  • But the former requires me to ALTER each table from one engine to another twice... will this take less time than dumping and restoring the same table? By the way, copying from server to server will take less than half an hour, so my concern is which is faster: the ALTER TABLE approach or restoring from a dump? – Alaa Alomari Sep 26 '11 at 11:36