
I have an S3 bucket with millions of files copied there by a Java process I do not control. The Java process runs on an EC2 instance in "AWS Account A" but writes to a bucket owned by "AWS Account B". Account B can see the files but cannot open them.

I figured out what the problem was and requested a change to the Java process so that new files are written with "acl = bucket-owner-full-control"... and it works! New files can be read from "AWS Account B".

But my problem is that I still have millions of files with the incorrect ACL. I can fix one of the old files easily with

aws s3api put-object-acl --bucket bucketFromAWSAccountA --key datastore/file0000001.txt --acl bucket-owner-full-control

What is the best way to do that for all the old files? I was thinking of something like

# Copy to a TEMP folder (sync is recursive by default)
aws s3 sync s3://bucketFromAWSAccountA/datastore/ s3://bucketFromAWSAccountA/datastoreTEMP/ --acl bucket-owner-full-control
# Delete the original store
aws s3 rm s3://bucketFromAWSAccountA/datastore/ --recursive
# Sync it back to the original folder
aws s3 sync s3://bucketFromAWSAccountA/datastoreTEMP/ s3://bucketFromAWSAccountA/datastore/ --acl bucket-owner-full-control

But it is going to be very time-consuming. I wonder:

  • Is there a better way to achieve this?
  • Could this be achieved easily at the bucket level? I mean, is there some change via "put-bucket-acl" that allows the owner to read everything?
Oscar Foley

1 Answer


One option seems to be to recursively copy all objects in the bucket over themselves, specifying the ACL change to make.

Something like: aws s3 cp --recursive --acl bucket-owner-full-control s3://bucket/folder s3://bucket/folder --metadata-directive REPLACE

That code snippet was taken from this answer: https://stackoverflow.com/a/63804619

It is worth reviewing the other options presented in answers to that question, as it looks like there is a possibility of losing content-type or other metadata if you don't form the command properly.
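
If you want to avoid that risk entirely, another option is to leave the objects in place and only update their ACLs with put-object-acl, one key at a time. A rough, untested sketch (assuming AWS CLI v2, the bucket and prefix names from the question, and keys without tab characters) could look like:

# List every key under the prefix and re-ACL it in place;
# this does not touch the object data, metadata or content-type.
aws s3api list-objects-v2 \
    --bucket bucketFromAWSAccountA \
    --prefix datastore/ \
    --query 'Contents[].Key' \
    --output text |
  tr '\t' '\n' |
  while read -r key; do
    aws s3api put-object-acl \
        --bucket bucketFromAWSAccountA \
        --key "$key" \
        --acl bucket-owner-full-control
  done

Note that this issues one API call per object, so for millions of keys it will be slow unless you parallelise it (for example with xargs -P) or use S3 Batch Operations with its "Replace access control list" operation instead.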

David Sette