
Assume we have limited free space on a Linux server. The task is to create an archive of a folder (for example /var/www), encrypt it, and upload it to a remote location, keeping the original unencrypted file in place. Everything works fine for relatively small files, but once the archive exceeds 50% of the free space, there is not enough room left for the encrypted file. The upload is done via the provider's SDK, and it is not possible to pipe the output from gpg directly into the upload. All of the above is done by a self-written bash script.

So, the steps (with example numbers):

  1. Free space in (for example) /opt is 100 GB.
  2. Compress /var/www to /opt/www.tar.gz (file size: 60 GB).
  3. Somehow produce the /opt/www.tar.gz.gpg file (this would need another 60 GB or more).
  4. Upload /opt/www.tar.gz.gpg using the provider's CLI tool.
  5. After all of the above, the file /opt/www.tar.gz should remain in place.

Is there a solution to this problem?

Alexey Kamenskiy
  • Buy a hard drive? – Michael Hampton Sep 09 '15 at 09:33
  • @MichaelHampton very smart answer, but no, this is not an option; it is a VM at a remote location not managed by us. If adding extra space weren't an issue, there wouldn't be this question. – Alexey Kamenskiy Sep 09 '15 at 09:34
  • Get a better storage provider? Seriously, streaming uploads are not exactly rocket science. But, given that you've got 100GB of free storage, why not `tar czf - /var/www | gpg -e >/opt/www.tgz.gpg`? – womble Sep 09 '15 at 09:43
  • @womble getting another storage provider is also not an option. I mean, if there were no such limitations, would I really have this question? As for piping into gpg: it works to produce the gpg file, no problem, but after all is done I should still have the unencrypted file left in place. How? – Alexey Kamenskiy Sep 09 '15 at 09:49
  • Depending on how you upload the .gpg file you might be able to mount the remote location (ftpfs, sshfs, ...) and write the .gpg file directly on the remote side (see the first sketch after this list). Apart from that, there is not much you can do except add disk space. – Gerald Schneider Sep 09 '15 at 09:54
  • Another option: smaller packages. Make one archive per directory in /var/www: create archive, encrypt, upload, delete, next (see the second sketch after this list). – Gerald Schneider Sep 09 '15 at 09:59
  • You don't need 3 local copies of the same file; you could add space if you wanted to; you could compress/encrypt/send in the same command. When your self-imposed constraints don't let you do the task, your constraints are wrong. – JamesRyan Sep 09 '15 at 10:40
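
A minimal sketch of the sshfs idea from the comments, assuming the remote end accepts SSH at all; the host, user, and mount point here are hypothetical. Using `tee` keeps the plain local copy while the encrypted stream goes straight to the remote mount, so no local space is needed for the .gpg file:

sshfs user@backuphost:/backups /mnt/backup
tar zcf - /var/www | tee /opt/www.tar.gz | gpg -e >/mnt/backup/www.tar.gz.gpg
fusermount -u /mnt/backup

And a sketch of the one-archive-per-directory idea, assuming each subdirectory of /var/www is small enough to leave headroom for one encrypted copy at a time; `upload` stands in for the provider's CLI tool:

for dir in /var/www/*/
do   name=$(basename "$dir")
     tar zcf "/opt/$name.tar.gz" "$dir"   # keep the plain archive locally
     gpg -e "/opt/$name.tar.gz"           # writes /opt/$name.tar.gz.gpg
     upload "/opt/$name.tar.gz.gpg"
     rm "/opt/$name.tar.gz.gpg"           # free the space before the next one
done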

1 Answer


If you can stop your server from changing /var/www for a while, you can create the .gpg file via a pipe:

tar zcf - /var/www | gpg -e >/opt/www.tar.gz.gpg
upload /opt/www.tar.gz.gpg
rm /opt/www.tar.gz.gpg 

Then create the local copy:

tar zcf /opt/www.tar.gz /var/www

Taking a tar of a live system isn't perfect anyway, so why not accept having two slightly different snapshots?


Here's another solution that splits the encrypted output into separate smaller files for upload. You can concatenate them on the server should you ever need to use them. Change `bs` and `count` so their product is slightly less than the free space you have available, or any convenient smaller size.

tar zcf /opt/www.tar.gz /var/www
gpg -e </opt/www.tar.gz |
( let i=1
  # dd cuts the stream into pieces of bs*count bytes; iflag=fullblock
  # keeps reading until each block is full, so only the last part is short.
  while dd iflag=fullblock bs=10240 count=100 >/opt/part.$i
        [ -s /opt/part.$i ]      # stop once dd produces an empty part
  do   echo upload /opt/part.$i  # replace echo with the real upload command
       rm /opt/part.$i
       let i=i+1
  done
  rm -f /opt/part.$i             # clean up the final empty part
)
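
For completeness, a sketch of the server-side reassembly mentioned above, assuming the parts have been fetched into the current directory and /restore exists. A plain `cat part.*` would sort part.10 before part.2, hence the numeric loop:

let i=1
while [ -f part.$i ]
do   cat part.$i
     let i=i+1
done | gpg -d | tar xzf - -C /restore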

I tested this as follows and checked that the result was correct:

tar czf /tmp/tar.gz ~/bin/
>/tmp/b                          # truncate the reassembly target
gpg --compress-level=0 -e </tmp/tar.gz |
( let i=1
  while dd iflag=fullblock bs=10240 count=100 >/tmp/a$i
        [ -s /tmp/a$i ]
  do   echo upload /tmp/a$i
       cat /tmp/a$i >>/tmp/b     # "upload" simulated by appending to /tmp/b
       rm /tmp/a$i
       let i=i+1
  done
)
gpg -d </tmp/b >/tmp/c           # decrypt the reassembled stream
cmp /tmp/c /tmp/tar.gz           # byte-identical to the original archive
meuh