
I have a nightly script on my Ubuntu 10.04 VPS which uses duplicity (0.6.24) to run incremental, encrypted backups to Amazon S3. This script worked reliably until about a month ago, when it started failing with errors like the following:

Upload 's3://s3.amazonaws.com/{BUCKET}/duplicity-full.20140519T222412Z.vol6.difftar.gpg' failed (attempt #5, reason: error: [Errno 105] No buffer space available)
Giving up trying to upload s3://s3.amazonaws.com/{BUCKET}/duplicity-full.20140519T222412Z.vol6.difftar.gpg after 5 attempts
Backend error detail: Traceback (most recent call last):
  File "/usr/local/bin/duplicity", line 1502, in <module>
    with_tempdir(main)
  File "/usr/local/bin/duplicity", line 1496, in with_tempdir
    fn()
  File "/usr/local/bin/duplicity", line 1345, in main
    do_backup(action)
  File "/usr/local/bin/duplicity", line 1466, in do_backup
    full_backup(col_stats)
  File "/usr/local/bin/duplicity", line 538, in full_backup
    globals.backend)
  File "/usr/local/bin/duplicity", line 420, in write_multivol
    (tdp, dest_filename, vol_num)))
  File "/usr/local/lib/python2.6/dist-packages/duplicity/asyncscheduler.py", line 145, in schedule_task
    return self.__run_synchronously(fn, params)
  File "/usr/local/lib/python2.6/dist-packages/duplicity/asyncscheduler.py", line 171, in __run_synchronously
    ret = fn(*params)
  File "/usr/local/bin/duplicity", line 419, in <lambda>
    async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename, vol_num: put(tdp, dest_filename, vol_num),
  File "/usr/local/bin/duplicity", line 310, in put
    backend.put(tdp, dest_filename)
  File "/usr/local/lib/python2.6/dist-packages/duplicity/backends/_boto_single.py", line 266, in put
    raise BackendException("Error uploading %s/%s" % (self.straight_url, remote_filename))
BackendException: Error uploading s3://s3.amazonaws.com/{BUCKET}/duplicity-full.20140519T222412Z.vol6.difftar.gpg
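For what it's worth, errno 105 can be decoded with Python's standard `errno` module (a quick sketch; assumes a Linux host, as on this Ubuntu VPS, where ENOBUFS is 105):

```python
import errno
import os

# Errno 105 in the duplicity log is ENOBUFS on Linux: the kernel could not
# allocate socket buffer space for the upload. It is a local network-stack
# condition, not an S3-side failure.
print(errno.ENOBUFS)                # 105 on Linux
print(os.strerror(errno.ENOBUFS))  # "No buffer space available"
```

So the exception is raised locally before the data ever reaches S3, which matches boto giving up after its retry attempts.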

Duplicity manages to upload several volumes before the error occurs, and if I run the backup script again it picks up where it left off, so I can eventually complete the backup; I just have to keep restarting the script until it gets through all 30 volumes.
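Since the backup resumes cleanly on each re-run, the manual restarts can at least be automated. This is only a sketch of that workaround, not a duplicity feature; the `retry` helper and the try count are names I made up:

```shell
#!/bin/sh
# Sketch: re-run a command until it exits 0, automating the
# "keep restarting the script" workaround. duplicity itself resumes
# an interrupted backup on the next invocation.
retry() {
    max=$1; shift
    n=1
    until "$@"; do
        [ "$n" -ge "$max" ] && return 1   # give up after $max tries
        n=$((n + 1))
        sleep 1                            # brief pause before resuming
    done
}

# Example invocation (keys/paths elided as in the command below):
# retry 10 duplicity --full-if-older-than 1M ... s3://s3.amazonaws.com/{BUCKET}
```

This doesn't fix the underlying ENOBUFS condition, of course; it only papers over it the same way the manual restarts do.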

The duplicity command I'm using is:

duplicity --full-if-older-than 1M \
      --encrypt-key={KEY} \
      --sign-key={KEY} \
      --exclude={PATH} \
      {PATH} \
      s3://s3.amazonaws.com/{BUCKET} -v8

How can I prevent this error?

Greg
