I need to load big text files from another machine over a 10 Gb link. The files are created by external/closed-source software that I can't modify (for example, to make it compress its output files).
Currently, network and disk I/O usage is at 100% on both machines (source and destination), so they are the bottleneck of the system.
If the source machine compressed the text files in the first place there would be no problem at all, but I don't have access to it (the files are rsync-ed to the destination machine, which I do have access to).
Is there anything I can do on the destination machine to at least reduce the pressure on its disk?
I came up with a somewhat ridiculous idea: create a RAM-backed (tmpfs) mount of a few GB, use it as the target directory of the rsync from the source machine, then run a program that compresses a couple of text files at a time, writes the compressed output (about 10x smaller than the originals) to the HDD, and deletes the originals from the tmpfs.
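To make the idea concrete, here is a minimal sketch of that compress-and-purge step. The mount point `/mnt/ramdisk`, the output directory, and the `*.txt` pattern are all assumptions for illustration, not paths from my actual setup:

```shell
# Compress every finished text file from the tmpfs to the HDD,
# then delete the original to free RAM for the next rsync batch.
compress_and_purge() {
    src="$1"   # tmpfs directory rsync writes into (assumed path)
    dst="$2"   # directory on the HDD for compressed output (assumed path)
    for f in "$src"/*.txt; do
        [ -e "$f" ] || continue   # skip if the glob matched nothing
        # Only remove the original if compression succeeded.
        gzip -c "$f" > "$dst/$(basename "$f").gz" && rm -- "$f"
    done
}

# Example usage (mounting requires root):
# mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk
# compress_and_purge /mnt/ramdisk /data/compressed
```

In a real setup this would need to run in a loop (or from an inotify watcher) and must be careful not to compress files rsync is still writing, but it shows the shape of what I'm after.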
Is there a tool that already does this? Any other recommendations?
I'm using Ubuntu 18.04.
Best Regards