34

On my Amazon EC2 instance, I have a folder named uploads. In this folder I have 1000 images. Now I want to copy all of these images to my new S3 bucket. How can I do this?

GabLeRoux
Pacts Ramun

7 Answers

43

First option: s3cmd

Use s3cmd

s3cmd get s3://AWS_S3_Bucket/dir/file

Take a look at the s3cmd documentation.

If you are on Debian or Ubuntu Linux, run this on the command line:

sudo apt-get install s3cmd

On CentOS or Fedora:

yum install s3cmd

Example of usage:

s3cmd put my.file s3://pactsRamun/folderExample/fileExample
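
Since the question is about a whole folder of images rather than a single file, s3cmd can also upload recursively. A minimal sketch, assuming a local folder named uploads and reusing the example bucket above (adjust the names to your own):

# upload the whole uploads folder (directory and bucket names are examples)
s3cmd put --recursive uploads/ s3://pactsRamun/uploads/

# or only transfer files that are missing or changed in the bucket
s3cmd sync uploads/ s3://pactsRamun/uploads/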

Second option: the AWS CLI

Use the CLI from Amazon (aws-cli).

Update

As @tedder42 said in the comments, use sync instead of cp.

Take a look at the following syntax:

aws s3 sync <source> <target> [--options]

Example:

aws s3 sync . s3://my-bucket/MyFolder

More information and examples are available in Managing Objects Using High-Level S3 Commands with the AWS Command Line Interface.
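
Applied to the question's uploads folder, a hedged sketch (the paths and bucket name are placeholders):

# copy everything in uploads to the bucket; files already present and unchanged are skipped
aws s3 sync /path/to/uploads s3://my-bucket/uploads

# optionally restrict the transfer to image files
aws s3 sync /path/to/uploads s3://my-bucket/uploads --exclude "*" --include "*.jpg" --include "*.png"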

GabLeRoux
Ethaan
  • s3cmd is *very* dated at this point. The aws-cli has been out for quite a while now. I'd suggest using `sync` instead of `cp`. – tedder42 Feb 09 '15 at 02:27
  • @tedder42 check update answer what do you think? thanks for the advice – Ethaan Feb 09 '15 at 02:42
  • s3cmd is not dated (at least not now), last version, 1.6.1 is from 21 Jan 2016 and the developer keeps adding features and bugfixes. check http://s3tools.org/news – higuita Apr 20 '16 at 09:49
  • I had always used awscli with a very low failure rate, and this is the first time I tried s3cmd and the `put` failed with `[Errno 32] Broken pipe` continuously and kept trying at lower transfer rate and finally gave up. Somewhere on SO I read that the problem is due to filesize (I tried copying a moderately huge file ~7Gib). As always, awscli was able to handle this. Hope someone finds this useful if they are stuck with this error. – Fr0zenFyr Jun 21 '16 at 11:49
15
aws s3 sync your-dir-name s3://your-s3-bucket-name/folder-name
  • Important: This will copy each item in your named directory into the s3 bucket folder you selected. This will not copy your directory as a whole.

Or, you can use the following command for one selected file (sync only works on directories, so use cp for a single file).

aws s3 cp your-dir-name/file-name s3://your-s3-bucket-name/folder-name/file-name

Or you can run the command from inside your directory and use . to select everything in it. Note that this will copy the contents of your directory as a whole to your S3 bucket folder.

aws s3 sync . s3://your-s3-bucket-name/folder-name
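
One property of sync worth noting here: if the transfer is interrupted or run again, only files that are missing or have changed in the bucket are uploaded, so re-running the same command is safe. You can also preview it first (the names below are the same placeholders as above):

# show what would be uploaded without transferring anything
aws s3 sync . s3://your-s3-bucket-name/folder-name --dryrun

# then run the actual upload
aws s3 sync . s3://your-s3-bucket-name/folder-name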
Gopal Chitalia
zafrin
5

To copy from EC2 to S3, use the command below on the EC2 instance's command line.

First, you have to give an IAM role with full S3 access to your EC2 instance.

aws s3 cp Your_Ec2_Folder s3://Your_S3_bucket/Your_folder --recursive
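
Before running the copy, it can help to confirm that the CLI on the instance actually picks up the attached role; a quick check (the bucket name is the same placeholder as above):

# should print the ARN of the role attached to the instance
aws sts get-caller-identity

# should list the bucket contents without credential errors
aws s3 ls s3://Your_S3_bucket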
Niraj
2

Also note that when syncing with S3, the AWS CLI is multithreaded and uploads multiple parts of a file at one time. The number of threads, however, is not configurable at this time.

Tim Johnson
1
aws s3 mv /home/inbound/ s3://test/ --recursive --region us-west-2
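
Note that aws s3 mv deletes the local files once they are uploaded. If the files in /home/inbound/ should stay on the instance, the same transfer can be done with cp instead (paths, bucket, and region are the answer's example values):

# upload without removing the local copies
aws s3 cp /home/inbound/ s3://test/ --recursive --region us-west-2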
Machavity
raju
    Thank you for this code snippet, which might provide some limited, immediate help. A [proper explanation would greatly improve its long-term value](//meta.stackexchange.com/q/114762/206345) by showing _why_ this is a good solution to the problem, and would make it more useful to future readers with other, similar questions. Please [edit] your answer to add some explanation, including the assumptions you've made. – Machavity Apr 04 '18 at 19:44
1

This can be done very simply. Follow these steps:

  • Open EC2 in the AWS console.
  • Select the instance and navigate to Actions.
  • Select Instance Settings and then Attach/Replace IAM Role.
  • When this is done, connect to the AWS instance, and the rest is done via the following CLI command:

aws s3 cp filelocation/filename s3://bucketname

Hence you don't need to install anything or make any extra effort.

Please note... the file location refers to the local path on the instance, and bucketname is the name of your bucket. Also note: this is possible if your instance and S3 bucket are in the same account. Cheers.
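
Since the question is about 1000 images rather than a single file, the same approach works for the whole folder by adding --recursive; a sketch using the same placeholder names:

# copy the entire local folder to the bucket
aws s3 cp filelocation/ s3://bucketname/ --recursive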

DarkZeus
0

The AWS CLI has a --dryrun feature available for testing.

  • To begin with, I would assign the EC2 instance a role that allows it to read from and write to S3.
  • SSH into the instance and perform the following:
  • vi tmp1.txt
  • aws s3 mv ./ s3://bucketname-bucketurl.com/ --dryrun
  • If this works, then all you have to do is create a script to upload all the files from this folder to the S3 bucket.
  • I have written the following command in my script to move files older than 2 minutes from the current directory to the bucket/folder (a simplified variant is sketched after this list):
  • cd dir; ls . -rt | xargs -I FILES find FILES -maxdepth 1 -name '*.txt' -mmin +2 -exec aws s3 mv '{}' s3://bucketurl.com \;
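
The same last step can be written with find alone, which avoids piping ls into xargs; a sketch under the same assumptions (placeholder directory and bucket URL, .txt files older than 2 minutes):

# move files older than 2 minutes from the current directory to the bucket
cd dir && find . -maxdepth 1 -name '*.txt' -mmin +2 -exec aws s3 mv '{}' s3://bucketurl.com/ \;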
AKV