I want to transfer a multi-terabyte directory to an NFS-mounted directory as efficiently as possible over a 1 Gbit network (which is probably the limiting factor).
Three options (concrete sketches below):
- tar and compress in place, then copy
- copy, then tar and compress
- tar | compress
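
To make the comparison concrete, this is roughly what each option looks like as commands. The paths and the archive name `dir.tgz` are just placeholders matching my setup (`/mnt/nfs` is the NFS mount point):

```sh
# Option 1: tar and compress in place on local disk, then copy the archive to NFS
tar -c dir | pigz > dir.tgz
cp dir.tgz /mnt/nfs/

# Option 2: copy the directory to NFS first, then tar and compress it there
cp -a dir /mnt/nfs/
tar -c -C /mnt/nfs dir | pigz > /mnt/nfs/dir.tgz

# Option 3: tar piped straight into the compressor, writing the archive to NFS
tar -c dir | pigz > /mnt/nfs/dir.tgz
```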
It seems obvious to me that #3 should be the most efficient, since I only read and write the data once. Unfortunately, my command (`tar -c dir | pigz > /mnt/nfs/dir.tgz`) seems to tar for a while, then zip for a while, then tar for a while... The network goes idle for large chunks of time, and then the CPU goes idle.
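
The only variation I can think of is inserting a buffer between the compressor and the NFS write so the CPU and the network can work at the same time instead of alternating. Something like the sketch below is what I have in mind; the `pv` tool, its `-B` buffer-size option, and the `-p 8` thread count for pigz are assumptions on my part, not something I've tested:

```sh
# Hypothetical variant: put a large in-memory buffer between pigz and the
# NFS write, so compression can keep running while the network drains the
# buffer (pv, -B 512m, and -p 8 are assumptions, not a tested setup).
tar -c dir \
  | pigz -p 8 \
  | pv -B 512m \
  > /mnt/nfs/dir.tgz
```

Would something like that actually help, or is the stalling caused by something else entirely?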
Did I miss some option?
P.S. My question seems related to this question, but that one has no answer and doesn't really ask precisely about the alternation between network and CPU saturation.