
The following code works as expected and prints the bucket name and file name for each key, e.g.:

<Key: vivafree,Master.csv.2012-04-10-17-52-39.gz>
<Key: vivafree,Master.csv.2012-07-12-23-00-49.gz>

I need to download all these files and transfer them to a Glacier vault.

from boto.s3.key import Key
from boto.s3.connection import S3Connection

AWS_ACCESS_KEY_ID="ABC"
AWS_SECRET_ACCESS_KEY="PQR+XYZ"

conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

bucket = conn.get_bucket('vivafree')

keys = bucket.get_all_keys()

for key in keys:
    print key

The following code uploads a single file as an archive to a vault called "company_backup".

import boto.glacier.layer2

vaultName = "company_backup"
fileName = "email_usergroups_permissions.txt.gz"
l = boto.glacier.layer2.Layer2(aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
v = l.get_vault(vaultName) 

archiveID = v.create_archive_from_file(fileName)

What I need to do is loop through the keys returned by the first code block, download each file, and then transfer it to Glacier using the second snippet. I will also need to save the archiveID for record-keeping.
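The two snippets can be combined into one loop. A minimal sketch of what I have in mind is below; `archive_bucket_to_vault` is a name I made up, while `get_all_keys`, `get_contents_to_filename` and `create_archive_from_file` are the boto calls already used above. The `bucket` and `vault` arguments would be the objects obtained from `conn.get_bucket(...)` and `l.get_vault(...)`:

```python
def archive_bucket_to_vault(bucket, vault, ledger):
    """Download every key in `bucket`, upload it to the Glacier `vault`,
    and record each (key name -> archive ID) pair in the `ledger` dict."""
    for key in bucket.get_all_keys():
        local_name = key.name
        # Download the object from S3 to a local file.
        key.get_contents_to_filename(local_name)
        # Upload the local file to Glacier; boto returns the archive ID.
        archive_id = vault.create_archive_from_file(local_name)
        ledger[key.name] = archive_id
    return ledger

# Usage with the objects from the snippets above (needs valid credentials):
#   ledger = archive_bucket_to_vault(bucket, v, {})
```

The returned `ledger` could then be written out to a CSV or similar for the record, since Glacier itself only identifies archives by these IDs.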

shantanuo
    Not sure if it applies to your situation, but you can actually archive files from S3 to Glacier directly, without having to download/upload using lifecycles. See http://docs.pythonboto.org/en/latest/s3_tut.html#transitioning-objects-to-glacier for more info. – garnaat Apr 08 '13 at 16:29
  • Do you still need a solution for this? – Tall Paul Jul 02 '13 at 13:51
  • Thanks to garnaat's comment pointing out that it is possible to transfer files from S3 to Glacier directly. The above code seems to be unnecessary. Closed. – shantanuo Jul 04 '13 at 02:01
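For reference, the direct S3-to-Glacier transition mentioned in the comments is configured as a bucket lifecycle rule rather than a download/upload loop. A sketch based on the linked boto tutorial follows; the rule id `'to-glacier'`, the empty prefix, and the 30-day delay are arbitrary choices, and `bucket` is the object from `conn.get_bucket('vivafree')`. This is configuration applied to the bucket, so it needs valid credentials to run:

```python
from boto.s3.lifecycle import Lifecycle, Transition, Rule

# Move objects to the GLACIER storage class 30 days after creation.
to_glacier = Transition(days=30, storage_class='GLACIER')

# Empty prefix means the rule applies to every key in the bucket.
rule = Rule('to-glacier', prefix='', status='Enabled', transition=to_glacier)

lifecycle = Lifecycle()
lifecycle.append(rule)

# Attach the lifecycle configuration to the bucket.
bucket.configure_lifecycle(lifecycle)
```

With this in place S3 performs the transition itself, and no archive IDs need to be tracked manually.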

0 Answers