I just found that my box has only 5% of HDD space left, and I have almost 250GB of MySQL binlog files that I want to send to S3. We have moved from MySQL to NoSQL and are not currently using MySQL, but I would love to preserve the old data from before the migration.
The problem is that I don't have the disk space to tar/gzip the files in a loop before sending them there. So I was thinking I could gzip on the fly and pipe straight to s3cmd, so the compressed file is never stored on the HDD:
for i in * ; do cat "$i" | gzip -9c | s3cmd put - s3://mybucket/mybackups/$i.gz; done
To test this, I ran the command on a single file without the loop; it didn't upload anything, but it didn't complain about anything either. Is there any way of achieving this?
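For reference, the single-file test looked roughly like this (the file name mysql-bin.000001 and the bucket path are just placeholders, not my exact names):

# pipe one binlog through gzip and hand the stream to s3cmd via stdin
cat mysql-bin.000001 | gzip -9c | s3cmd put - s3://mybucket/mybackups/mysql-bin.000001.gz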
OS is Ubuntu 12.04; the s3cmd version is 1.0.0.
Thank you for your suggestions.