The "Lost Connection" problem is caused by too much data through the dumping channel - you need to increase max_allowed_packet
in my.cnf.
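If you go that route, something like this in my.cnf is roughly what is meant (the 64M figure is only an example, and both the server and the mysqldump/client side have a say):

```
# example values only -- tune to the size of your rows
[mysqld]
max_allowed_packet = 64M     # server-side ceiling for a single packet
net_buffer_length  = 1M      # initial connection buffer (1M is the server maximum)

[mysqldump]
max_allowed_packet = 64M     # mysqldump reads this group as well
```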
I had this pointed out to me a long time ago, but now, rereading the docs, I'm not too sure it applies. You might try it anyway, or use it for further research:
Note that mysqldump builds long multiple-row INSERT statements which might exceed the buffer length. You may then need to resort to --skip-extended-insert (which will greatly slow down the restore) when backing up.
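If that is what is biting you, the slow-but-safe variant looks roughly like this (database name and output file are placeholders):

```
# one INSERT per row: a much larger dump and a slower restore,
# but no single statement can outgrow the buffer
mysqldump -u root -p --skip-extended-insert mydb > mydb.sql
```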
The docs say:
net_buffer_length
The initial size of the buffer for client/server communication. When creating multiple-row INSERT statements (as with the --extended-insert or --opt option), mysqldump creates rows up to net_buffer_length bytes long. If you increase this variable, ensure that the MySQL server net_buffer_length system variable has a value at least this large.
[...]
max_allowed_packet
The maximum size of one packet or any generated/intermediate string, or any parameter sent by the mysql_stmt_send_long_data() C API function. The default is 4MB.
The packet message buffer is initialized to net_buffer_length bytes, but can grow up to max_allowed_packet bytes when needed.
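To see what your server is currently set to, and to raise the limits just for the dump, something along these lines should do (again, 64M is an arbitrary example; the server value must be at least as large as the client's):

```
# current server-side limits
mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN
  ('max_allowed_packet','net_buffer_length');"

# raise the client-side limit for this dump only
mysqldump -u root -p --max-allowed-packet=64M mydb > mydb.sql

# or raise the server-side limit at runtime (lost on restart)
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 64*1024*1024;"
```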
However, backing up will lock the database (or the tables) and cause service problems. So you might be better off first setting up replication, so that the slave is kept in a consistent state by continuous updates from its master. When you need to back up, you lock the slave and set it to read only; then you can either run mysqldump or just back up the binary data files (which the manual recommends for larger databases).
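A minimal sketch of that, assuming a master/slave pair is already replicating and you can afford to pause the slave for the duration of the dump (names and paths are placeholders):

```
# on the slave: pause replication and make it read only
mysql -u root -p -e "STOP SLAVE; SET GLOBAL read_only = ON;"

# take the backup while the slave is quiet
mysqldump -u root -p --all-databases > /backups/full-$(date +%F).sql
# (or stop mysqld and copy the data directory for a binary backup)

# resume normal operation
mysql -u root -p -e "SET GLOBAL read_only = OFF; START SLAVE;"
```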