
I am trying to give myself permission to download existing files in an S3 bucket. I've modified the Bucket Policy, as follows:

    {
        "Sid": "someSID",
        "Action": "s3:*",
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
        "Principal": {
            "AWS": [
                "arn:aws:iam::123123123123:user/myuid"
            ]
        }
    }

My understanding is that this addition to the policy should give me full rights to "bucketname" for my account "myuid", including all files that are already in that bucket. However, I'm still getting Access Denied errors when I try to download any of those files via the link that comes up in the console.

Any thoughts?

David
  • You say that this gives full rights to the bucket, but your Resource includes a prefix. Are all the files you are downloading in this prefix? Also, how are you downloading them? From the console, with an app, with an SDK? – Bob Kinney Feb 28 '14 at 00:20
  • please refer https://stackoverflow.com/questions/26533245/the-authorization-mechanism-you-have-provided-is-not-supported-please-use-aws4/74747591#74747591 – Harat Dec 09 '22 at 19:39

12 Answers


Step 1

Click on your bucket name and, under the Permissions tab, make sure that Block new public bucket policies is unchecked.


Step 2

Then you can apply your bucket policy.

Hope that helps

robd
accimeesterlin
  • I was just missing the policy. I added one like yours and got it working. I want my bucket to be publicly viewable. Is this the correct configuration for that? – Jamshaid Apr 15 '21 at 10:37

David, you are right, but I found that, in addition to what bennie said below, you also have to grant view access (or whatever access you want) to 'Authenticated Users'.

But a better solution might be to edit the user's policy to just grant access to the bucket:

{
  "Statement": [
    {
      "Sid": "Stmt1350703615347",
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::mybucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": ["arn:aws:s3:::mybucket"],
      "Condition": {}
    }
  ]
}

The first block grants all S3 permissions to all elements within the bucket. The second block grants list permission on the bucket itself.
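A quick way to catch JSON syntax errors before pasting a policy like this into the console is to parse it locally. A minimal sketch in Python (the bucket name is a placeholder, as in the policy above):

```python
import json

# Sanity-check the policy JSON locally before pasting it into the console.
policy_text = """
{
  "Statement": [
    {
      "Sid": "Stmt1350703615347",
      "Action": ["s3:*"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::mybucket"]
    }
  ]
}
"""

policy = json.loads(policy_text)  # raises ValueError if the JSON is malformed

# One statement covers the objects, the other covers the bucket itself.
resources = [r for stmt in policy["Statement"] for r in stmt["Resource"]]
print(resources)  # ['arn:aws:s3:::mybucket/*', 'arn:aws:s3:::mybucket']
```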

Karl Wilbur
  • Granting that access to "authenticated users" literally means any authenticated AWS user, even ones not associated with your own account... so it nullifies the point of granting a specific user access to a bucket. – Salami Mar 03 '16 at 18:24
  • I know what it does and I agree that it seems strange but, at the time, it was *required* to make it work. – Karl Wilbur Mar 07 '16 at 21:05
  • Another way to do this is to attach a policy to the specific IAM user - in the IAM console, select a user, select the **Permissions** tab, click **Attach Policy** and then select a policy like `AmazonS3FullAccess`. For some reason, it's not enough to say that a bucket grants access to a user - you also have to say that the user has permissions to access the S3 service. – sameers Jul 06 '16 at 06:04
  • Applying this, as is, will throw "Missing required field Principal". The answer might need an update. – Greg Wozniak Mar 10 '23 at 09:11
  • Thanks, @GregWozniak. I think that this is likely due to changes that have occurred in AWS config in the last 8 years. – Karl Wilbur Mar 10 '23 at 18:51

Change the resource arn:aws:s3:::bucketname/AWSLogs/123123123123/* to arn:aws:s3:::bucketname/* to grant full rights to bucketname.
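As a rough illustration of why the narrower Resource denies other keys: IAM matches the requested object's ARN against the policy's Resource pattern. Below is a simplified model using Python's fnmatch (real IAM evaluation is more involved, and `resource_allows` is a hypothetical helper, not an AWS API):

```python
from fnmatch import fnmatch

def resource_allows(resource_pattern: str, request_arn: str) -> bool:
    """Simplified model of IAM resource matching: '*' in the policy's
    Resource wildcard-matches any run of characters in the requested ARN."""
    return fnmatch(request_arn, resource_pattern)

narrow = "arn:aws:s3:::bucketname/AWSLogs/123123123123/*"
wide = "arn:aws:s3:::bucketname/*"

# An object outside the AWSLogs/ prefix is only covered by the wide pattern.
obj = "arn:aws:s3:::bucketname/reports/summary.csv"
print(resource_allows(narrow, obj))  # False
print(resource_allows(wide, obj))    # True
```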

bennie j

To serve a static website from S3:

First, uncheck the public access blocking on the bucket.

This is the bucket policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::example-bucket/*"]
    }
  ]
}

Use the method below to upload any file in publicly readable form using TransferUtility in Android.

transferUtility.upload(String bucketName, String key, File file, CannedAccessControlList cannedAcl)

Example

transferUtility.upload("MY_BUCKET_NAME", "FileName", your_file, CannedAccessControlList.PublicRead);
Roger Ng
varotariya vajsi

To clarify: It is really not documented well, but you need two access statements.

In addition to your statement that allows actions on resource "arn:aws:s3:::bucketname/AWSLogs/123123123123/*", you also need a second statement that allows s3:ListBucket on "arn:aws:s3:::bucketname", because internally the AWS client will try to list the bucket to confirm it exists before performing its action.

With the second statement, it should look like:

"Statement": [
    {
        "Sid": "someSID",
        "Action": "ActionThatYouMeantToAllow",
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::bucketname/AWSLogs/123123123123/*",
        "Principal": {
            "AWS": [
                "arn:aws:iam::123123123123:user/myuid"
            ]
        }
    },
    {
        "Sid": "someOtherSID",
        "Action": "s3:ListBucket",
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::bucketname",
        "Principal": {
            "AWS": [
                "arn:aws:iam::123123123123:user/myuid"
            ]
        }
    }
]

Note: If you're attaching this as an IAM (user) policy rather than a bucket policy, skip the "Principal" part.

chaqke

If you have an encrypted bucket, you will also need the corresponding kms permissions (e.g. kms:Decrypt) allowed.


Possible reason: if the files were put/copied by a user from another AWS account, you cannot access them, because that account is still the object owner. The AWS account user who placed the files in your bucket has to grant access during the put or copy operation.

For a put operation, the object owner can run this command:

aws s3api put-object --bucket destination_awsexamplebucket --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2 --acl bucket-owner-full-control

For a copy operation of a single object, the object owner can run this command:

aws s3api copy-object --bucket destination_awsexamplebucket --copy-source source_awsexamplebucket/myobject --key myobject --acl bucket-owner-full-control

ref : AWS Link

Gajendra
  • This is exactly my issue. Is there a way to upload a file and grant it access to anyone? I'm uploading to s3 using the Powershell Tool command Write-S3Object I do not supply any key to the command and would expect the object to be accessible to anyone. – Kappacake Aug 12 '20 at 11:02
  • Just found the answer: I just need to add -PublicReadWrite to the command, and the object will be accessible to anyone – Kappacake Aug 12 '20 at 11:08

Giving public access to the bucket just to add a policy is NOT the right way: it exposes your bucket to the public, even if only for a short time.

You will face this error even if you have admin access (the root user will not). According to the AWS documentation, you have to grant "PutBucketPolicy" to your IAM user.

So simply add an S3 policy to your IAM user as in the screenshot below; mention your bucket ARN to make it safer, and you won't have to make your bucket public again.

[screenshot: attaching an S3 policy with the bucket ARN to the IAM user]
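A minimal sketch of such a policy attached to the IAM user (the bucket name is a placeholder); it lets the user read and edit the bucket policy without opening the bucket to the public:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketPolicy",
        "s3:PutBucketPolicy"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name"
    }
  ]
}
```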

dev
  • Man, this seems about right, but your explanation is missing a bunch of context. Could you explain yourself in more detail? – nicoramirezdev Dec 19 '20 at 21:27
  • If you can tell me which part you didn't get, maybe I will be able to update the answer. Still, my point was to attach a PutBucketPolicy permission to the bucket owner; it gives more control (through IAM) and security. – dev Dec 20 '20 at 07:00
  • I would suggest you explain how to arrive at the point where you took the screenshot, especially for people starting with AWS; finding these things is particularly confusing. Also, adding the documentation that you are citing would be really useful, since we could check whether it is still up to date. – nicoramirezdev Dec 22 '20 at 10:22

No one mentioned MFA. For Amazon users who have enabled MFA, please use this: aws s3 ls s3://bucket-name --profile mfa.

And prepare the mfa profile first by running aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/user-name --token-code 928371 --duration-seconds 129600. (Replace 123456789012, user-name and 928371.)
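The temporary credentials that get-session-token returns can then be stored as the mfa profile, e.g. in ~/.aws/credentials (all values below are placeholders):

```ini
[mfa]
aws_access_key_id = ASIAEXAMPLEACCESSKEY
aws_secret_access_key = EXAMPLESECRETACCESSKEY
aws_session_token = EXAMPLESESSIONTOKEN
```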

Lane

This can also happen if the encryption algorithm in the S3 parameters is missing. If the bucket's default encryption is enabled, e.g. Amazon S3-managed keys (SSE-S3), you need to pass ServerSideEncryption: "AES256"|"aws:kms"|string in your request params.

const params = {
    Bucket: BUCKET_NAME,
    Body: content,
    Key: fileKey,
    ContentType: "audio/m4a",
    ServerSideEncryption: "AES256" // Here ..
}
    
await S3.putObject(params).promise()
Nathan Getachew

Go to this link and generate a policy.

In the Principal field, give *.

Under Actions, select GetObject.

Give the ARN as arn:aws:s3:::<bucket_name>/*.

Then add the statement and generate the policy; you will get a JSON document. Just copy it and paste it into the Bucket Policy.

For more details go here.
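For reference, the generated policy should look roughly like this (<bucket_name> is a placeholder, as above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<bucket_name>/*"
    }
  ]
}
```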

Pronoy999