
I'm trying to back up my entire collection of over 1000 work files, mainly text but also pictures and a few large (0.5-1 GB) audio recordings, to S3-compatible cloud storage (DreamHost DreamObjects). I have tried to use boto-rsync to perform the first full 'put' with this:

 $ boto-rsync --endpoint objects.dreamhost.com /media/Storage/Work/ \
 > s3://work.personalsite.net/ > output.txt

where '/media/Storage/Work/' is on a local hard disk, 's3://work.personalsite.net/' is a bucket named after my personal web site for uniqueness, and output.txt is where I wanted the list of uploaded files and any error messages to go.

Boto-rsync grinds its way through the whole directory tree, but its constantly refreshing per-file progress output doesn't look good when redirected to a file. Still, while the upload is running, I 'tail output.txt' and see that most files are uploaded, but some reach less than 100% and some are skipped altogether. My questions are:

  • Is there any way to confirm that a transfer is 100% complete and correct? (See the sketch after this list for the kind of check I mean.)
  • Is there a good way to log the results and errors of a transfer?
  • Is there a good way to transfer a large number of files in a big directory hierarchy to one or more buckets for the first time, as opposed to an incremental backup?
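
To make the first question concrete, this is the kind of check I have in mind (only a rough sketch using plain boto rather than boto-rsync; it assumes boto-rsync names each key after the file's path relative to /media/Storage/Work/ and that my DreamObjects credentials are in ~/.boto, while the endpoint and bucket name are the ones from the command above):

    # Sketch: walk the local tree and compare each file against the bucket
    # by size and, where the ETag is a plain MD5 (i.e. not a multipart
    # upload), by checksum. Assumes keys are named by their path relative
    # to local_root.
    import hashlib
    import os

    import boto

    conn = boto.connect_s3(host='objects.dreamhost.com')  # credentials read from ~/.boto
    bucket = conn.get_bucket('work.personalsite.net')
    local_root = '/media/Storage/Work/'

    for dirpath, _, filenames in os.walk(local_root):
        for filename in filenames:
            local_path = os.path.join(dirpath, filename)
            key_name = os.path.relpath(local_path, local_root)
            key = bucket.get_key(key_name)
            if key is None:
                print 'not uploaded: %s' % key_name
                continue
            if key.size != os.path.getsize(local_path):
                print 'size mismatch: %s' % key_name
                continue
            etag = key.etag.strip('"')
            if '-' in etag:  # multipart upload, ETag is not a simple MD5
                continue
            md5 = hashlib.md5()
            with open(local_path, 'rb') as f:
                for chunk in iter(lambda: f.read(1 << 20), ''):
                    md5.update(chunk)
            if md5.hexdigest() != etag:
                print 'checksum mismatch: %s' % key_name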

I am on Ubuntu 12.04 running Python 2.7.3. Thank you for your help.

Ryan Schram
  • Did you check to see that the files are actually only partly written to S3? Was there any error message (most likely written to stderr, so it wouldn't make it through to output.txt)? I recently made some changes to boto-rsync. I'd gladly share them if I see it can help you with the problems you're facing. – shx2 Feb 11 '14 at 19:46
  • Thanks, I think my main problem was that I was transferring large media files over an unreliable connection. When I'm connected by Ethernet cable to my home network, it works much better and I can leave it alone during the sync and see no errors. Now that my archive is set up, syncing my working files with boto-rsync is easy and fast. – Ryan Schram Feb 12 '14 at 00:15

1 Answer


You can wrap the command in a script and run it under nohup:

nohup script.sh

nohup automatically creates a nohup.out file in which all of the script's (or command's) output is captured.

To choose where the log is written, you can do:

nohup script.sh > /path/to/log

Best regards,

Eddi

Karthick Kumar