
I have some files that I want to copy to S3. Rather than doing one call per file, I want to include them all in a single call (to be as efficient as possible).

However, I only seem to get it to work if I add the --recursive flag, which makes it look in all child directories (all the files I want are in the current directory only).

So this is the command I have now, which works:

aws s3 cp --dryrun . s3://mybucket --recursive --exclude "*" --include "*.jpg"

But ideally I would like to remove --recursive to stop it traversing subdirectories, e.g. something like this (which does not work):

aws s3 cp --dryrun . s3://mybucket --exclude "*" --include "*.jpg"

(I have simplified the example, in my script I have several different include patterns)


4 Answers


AWS CLI's S3 wildcard support is a bit primitive, but you could use multiple --exclude options to accomplish this. Note: the order of includes and excludes is important.

aws s3 cp --dryrun . s3://mybucket --recursive --exclude "*" --include "*.jpg" --exclude "*/*"
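
Since the question mentions several include patterns, the same idea extends by chaining additional --include flags before the final --exclude, because filters listed later take precedence over earlier ones. A sketch, assuming you also want PNG files (the "*.png" pattern is purely illustrative):

aws s3 cp --dryrun . s3://mybucket --recursive --exclude "*" --include "*.jpg" --include "*.png" --exclude "*/*"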

Try this command:

aws s3 cp --dryrun . s3://mybucket --recursive --exclude "*/"

Hope it helps.


I tried the suggested answers and could not get aws to skip nested folders. I saw some odd output about calculating size and 0-size objects, despite using the exclude flag.

I eventually gave up on the --recursive flag and used Bash to run one s3 cp call per matched file. Remove --dryrun once you're ready to roll!

for i in *.{jpg,jpeg}; do aws s3 cp --dryrun "${i}" "s3://your-bucket/your-folder/${i}"; done
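
If neither pattern matches anything, Bash passes the unexpanded pattern itself to aws; a small sketch that guards against that with the nullglob option (same placeholder bucket and folder as above):

shopt -s nullglob
for i in *.jpg *.jpeg; do aws s3 cp --dryrun "$i" "s3://your-bucket/your-folder/$i"; done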

I would suggest a utility called s4cmd, which provides Unix-like file system operations and also supports wildcards: https://github.com/bloomreach/s4cmd
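
For example, a minimal sketch based on the project's README (mybucket is a placeholder, and the exact options may vary between versions):

s4cmd put *.jpg s3://mybucket/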