I need to copy snapshots of virtual machines running on Proxmox (KVM) servers to offsite storage.
Most snapshots are a few GB, but some are rather large, up to 200 GB. I would prefer to compress and copy the snapshots in one go, something like:
xz -c dumpfile | ssh offsite 'cat > dumpfile.xz'
The problem is that, due to the size of the dump file and other jobs that need to run at the same time, this may take 12 hours or more, during which a lot can happen to interrupt the data stream.
Using rsync I could resume the transfer if it was interrupted, but I would then have to compress the whole file before sending it, requiring another 150 GB on top of the 250 GB already allocated to the snapshot. As storage on the server is at a premium (SSD only), I'd prefer not to allocate that extra disk space.
Perhaps splitting the compressed output into smaller pieces, pkzip style, and transferring those in a queue as they are ready could be a solution? Tar seems to have a multi-volume (-M) option that could perhaps be used. The problem is that the compression process would need to pause until the last compressed part had been transferred.
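To illustrate the splitting idea: GNU split can cut the compressed stream into fixed-size chunks as it is produced, so finished chunks could be queued for transfer while compression continues. A minimal local sketch (file names and the 256K chunk size are placeholders):

```shell
set -e
w=$(mktemp -d)
cd "$w"
head -c 1M /dev/urandom > dumpfile              # stand-in for the snapshot dump
# Compress and split in one pipeline; chunks appear as dump.xz.part.aa,
# dump.xz.part.ab, ... and could be rsynced offsite as each one completes.
xz -c dumpfile | split -b 256K - dump.xz.part.
# The receiving side just concatenates the chunks back together:
cat dump.xz.part.* | xz -dc > restored
cmp dumpfile restored && echo "round trip OK"
```

GNU split (coreutils 8.13 and later) also has a --filter option that hands each chunk to a command, e.g. --filter='ssh offsite "cat > backups/$FILE"', which would stream chunks out without landing them on local disk at all.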
I'm looking for ideas here, not necessarily a concrete solution. I feel I've missed some obvious option. I would prefer using standard Linux software where possible.