I need to move a very large database (~320 GB) from server1 to server2 (Linux). Because of different extension versions, the database can only be restored on server2 from a dump file as described here.
The problem is that I don't have enough space on server1 to write the dump file there first, copy it to server2, and verify the checksums. I need a reliable way to write the dump directly to server2 while minimizing the risk of corruption.
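Ideally I'd still like to verify checksums, just without the intermediate file on server1, along the lines of this rough sketch (the database name, paths, and the transport command are placeholders, and the dump command is shown as a bare `pg_dump` for illustration):

```bash
# Illustrative only: hash the dump stream on server1 while it is being sent,
# so the checksum of what left server1 can later be compared with the file
# that arrived on server2, without ever storing the dump on server1.
# Requires bash for the >( ) process substitution.
pg_dump mydb \
  | tee >(sha256sum > /tmp/dump.sha256) \
  | transport_to_server2            # placeholder: nc, ssh, etc.

# Later, on server2:
sha256sum /data/dump.sql            # compare with /tmp/dump.sha256 on server1
```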
I tried:

- Piping the dump from server1 to server2 using `nc`.
- Writing the dump file directly to a server2 filesystem mounted on server1 using `sshfs` (rough commands for both attempts are sketched below).
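In both sketches the host names, ports, paths, and the dump command (shown here as a bare `pg_dump`, flags omitted) are simplified placeholders:

```bash
# Attempt 1: pipe the dump over a raw TCP connection with nc
# (some netcat variants need "nc -l -p 9999" instead of "nc -l 9999")
# on server2:
nc -l 9999 > /data/dump.sql
# on server1:
pg_dump mydb | nc server2 9999

# Attempt 2: write the dump straight to a server2 filesystem
# mounted on server1 via sshfs
# on server1:
sshfs user@server2:/data /mnt/server2
pg_dump mydb > /mnt/server2/dump.sql
```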
Both dump files appear to have been corrupted (substantially different sizes, and corruption-related errors at different stages of the import).
I have migrated databases like this before (much smaller ones) without problems. Can anyone suggest a better, more reliable way to handle a transfer of this size?
UPDATE: I tried NFS with the same results. Remote filesystems clearly can't cope with this volume of data: blocks are visibly missing from the resulting SQL file, causing syntax errors during import, and different parts of the file are corrupted on each attempt.