
I need to move a very large database (~320 GB) from server1 to server2 (Linux). Because of different extension versions, the database can only be restored on server2 from a dump file as described here.

The problem is I don't have enough space on server1 to first write a dump file there, then copy it to server2 and verify sums. I need a reliable way to write the data directly to server2, minimizing the risk of corruption.

I tried:

  • Piping the dump from server1 to server2 using nc.
  • Writing a dump file directly to a server2 filesystem which is mounted on server1 using sshfs.
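
Something along these lines (a rough sketch; hostnames, ports and paths below are illustrative):

# Attempt 1: pipe the dump through nc
nc -l 9999 > /data/dump.sql        # on server2 (use "nc -l -p 9999" with traditional netcat)
pg_dump mydb | nc server2 9999     # on server1

# Attempt 2: mount a server2 directory on server1 via sshfs, then dump into it
sshfs user@server2:/data /mnt/server2    # on server1
pg_dump mydb > /mnt/server2/dump.sql     # also on server1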

Both dump files appear to have been corrupted (substantially different sizes, and corruption-related errors at different stages of the import).

I migrated databases like this (but much smaller) without problems. Can anyone suggest a better, more reliable way to do this large transfer?

UPDATE: I tried NFS with the same result. Apparently remote filesystems just can't cope with this volume of data: blocks are visibly missing from the resulting SQL file, causing syntax errors during import, and different parts of the file are corrupted on each attempt.

kontextify

4 Answers


What about mounting an external NFS share on server1 first, and then mounting it on server2 after the dump? That's how I handle Oracle Data Pump and RMAN backups (similar to your dumps), with good results, when duplicating our production server (server1) to a test server (server2) and restoring the backups. Our external NFS share lives on a NAS, but any GNU/Linux box will do the job; just install the necessary packages.

Follow these steps for Debian, and after setting up the share, mount it on server1 and server2 via /etc/fstab:

nfs_server:/path/to/share /server1or2filesystem/mountpoint nfs rw,hard,intr,timeo=600,actimeo=0,proto=tcp,bg,rsize=262144,wsize=262144 0 0
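
For reference, a minimal Debian-style setup of the export itself might look like this (the package names are standard, but the export path and client subnet are only examples):

sudo apt-get install nfs-kernel-server                       # on the machine exporting the share
echo '/srv/dumps 192.0.2.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
sudo apt-get install nfs-common                              # on server1 and server2
sudo mount /server1or2filesystem/mountpoint                  # picks up the fstab entry above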

I have never had corruption issues with dumps on NFS shares (unless a connectivity failure occurred). If you still see corruption when testing NFS, also try a small local dump: if that local dump comes out corrupted as well, the problem is a misconfiguration of the dump tool itself rather than the network.
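
A quick local test, assuming PostgreSQL as in the question, could be as simple as dumping one table to local disk and checking that the archive is readable (table and database names are placeholders):

pg_dump -Fc -t some_big_table -f /tmp/partial_test.dump mydb
pg_restore --list /tmp/partial_test.dump > /dev/null && echo "archive readable"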

  • Great tip! Same results with NFS though, updating my question. Looks like you just can't write 320 GB to a remote filesystem without a few bytes getting lost or messed up. – kontextify Aug 13 '15 at 11:18

Try AFS (the Andrew File System). It is designed from the ground up to be scalable and network-oriented, and could be a good fit for your problem. On Ubuntu, use OpenAFS. Keep in mind, though, that it requires some configuration and setup before you have a reliable connection.
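
For what it's worth, getting the client side going on Ubuntu starts roughly like this (a sketch only; it assumes an existing AFS cell and Kerberos setup, and the cell name is a placeholder):

sudo apt-get install openafs-client openafs-krb5 openafs-modules-dkms
echo 'example.cell.org' | sudo tee /etc/openafs/ThisCell     # point the client at your cell
sudo systemctl restart openafs-client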

Konrad Gajewski

You can try importing it directly on the target with this one-liner:

mysqldump --add-drop-table --extended-insert --force --log-error=error.log -uUSER -pPASS OLD_DB_NAME | ssh -C user@newhost "mysql -uUSER -pPASS NEW_DB_NAME"

Still, not being able to copy reliably over the network at all is suspicious. Are you sure there are no networking or filesystem issues?
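
Since the question involves pg_dump rather than mysqldump, the equivalent PostgreSQL pipe would be roughly this (user, host and database names are placeholders, and the target database must already exist):

pg_dump OLD_DB_NAME | ssh -C user@newhost "psql NEW_DB_NAME"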


I got the best answer here. The trick is to create the dump file directly on the receiving server by dumping the source database remotely:

pg_dump -h remoteserver -U remoteuser remotedbname -Fc -f my_old_server_backup.dump

This is way more reliable than writing a huge SQL file over a network filesystem. I now have a functioning, migrated database!
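
For completeness, restoring the resulting custom-format dump on server2 then looks something like this (the database name and parallel job count are just examples):

createdb my_new_db
pg_restore -d my_new_db -j 4 my_old_server_backup.dump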

kontextify