I have an S3 bucket with millions of files copied there by a Java process I do not control. The Java process runs on an EC2 instance in "AWS Account A" but writes to a bucket owned by "AWS Account B". Account B was able to see the files but not to open them.
I figured out what the problem was and requested a change to the Java process so that it writes new files with "acl = bucket-owner-full-control"... and it works! New files can be read from "AWS Account B".
But my problem is that I still have millions of old files with the incorrect ACL. I can fix one of them easily with
aws s3api put-object-acl --bucket bucketFromAWSAccountA --key datastore/file0000001.txt --acl bucket-owner-full-control
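In theory I could script that call over every existing key, something like this untested sketch (it relies on the CLI auto-paginating list-objects-v2, and on my keys not containing newlines):

# List every key under the prefix and re-apply the ACL one object at a time
aws s3api list-objects-v2 \
  --bucket bucketFromAWSAccountA \
  --prefix datastore/ \
  --query 'Contents[].Key' \
  --output text \
| tr '\t' '\n' \
| while read -r key; do
    aws s3api put-object-acl \
      --bucket bucketFromAWSAccountA \
      --key "$key" \
      --acl bucket-owner-full-control
  done

But that is one sequential API call per object, which means millions of calls.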
What is the best way to do this at scale? I was thinking of something like
# Copy to TEMP folder
aws s3 sync s3://bucketFromAWSAccountA/datastore/ s3://bucketFromAWSAccountA/datastoreTEMP/ --acl bucket-owner-full-control
# Delete original store
aws s3 rm s3://bucketFromAWSAccountA/datastore/ --recursive
# Sync it back to original folder
aws s3 sync s3://bucketFromAWSAccountA/datastoreTEMP/ s3://bucketFromAWSAccountA/datastore/ --acl bucket-owner-full-control
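Or maybe an in-place copy would avoid the temporary folder and the delete step entirely. Untested, and I believe --metadata-directive REPLACE is needed for S3 to accept a copy of an object onto its own key, at the cost of dropping any user metadata on the objects:

# Copy every object onto itself, re-writing it with the new ACL
aws s3 cp s3://bucketFromAWSAccountA/datastore/ s3://bucketFromAWSAccountA/datastore/ --recursive --acl bucket-owner-full-control --metadata-directive REPLACE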
But all of this is going to be very time consuming. I wonder:
- Is there a better way to achieve this?
- Could this be achieved easily at the bucket level? I mean, is there some change, like "put-bucket-acl", that allows the owner to read everything?