98

I have a folder in a bucket with 10,000 files. There seems to be no way to upload them and make them public straight away. So I uploaded them all, they're private, and I need to make them all public.

I've tried the AWS console; it just gives an error (it works fine with folders with fewer files).

I've tried using S3 Organizer in Firefox, same thing.

Is there some software or some script I can run to make all these public?

BenMorel
PeterV
  • Every tool I tried crashed, so I ended up writing a PHP script that took a few hours and just looped through every object in the bucket and made it public. – PeterV Jul 01 '10 at 11:54

10 Answers

126

You can generate a bucket policy (see the example below) which gives read access to all the files in the bucket. The bucket policy can be added to the bucket through the AWS console.

{
    "Id": "...",
    "Statement": [ {
        "Sid": "...",
        "Action": [
            "s3:GetObject"
        ],
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::bucket/*",
        "Principal": {
            "AWS": [ "*" ]
        }
    } ]
}

Also have a look at the policy generator tool provided by Amazon:

http://awspolicygen.s3.amazonaws.com/policygen.html
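
If you'd rather not click through the console, the policy can also be attached from the command line. A minimal sketch, assuming the JSON above is saved as policy.json and your bucket is named my-bucket (both names are placeholders):

# Attach the bucket policy from a local JSON file (bucket and file names are placeholders)
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json

Once the policy is in place, the objects it covers are readable without touching each object's individual ACL.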

moka
Rajiv
  • This did not work for me. Some objects are still returning the 'access denied' response even with the bucket policy in place. It's copy-pasted from the above with only the bucket name changed. I guess it's time to write a script to loop through all 1.3 million objects... kinda irritating – Blake Miller Aug 01 '12 at 22:18
  • you need to change "bucket" to the name of your bucket – karnage Oct 30 '12 at 21:55
  • I resent having to do it this way. That's some ugly JSON. – superluminary Feb 07 '13 at 10:56
  • Just a note: It may seem obvious, but you can also choose to limit access to specific _folders_: `bucket/avatars/*`. (Don't forget the `*` at the end. I did and I ran around in circles for a while.) – bschaeffer Sep 12 '13 at 12:25
  • FYI, Here is my implementation (ruby) of limiting access to a folder https://gist.github.com/cutalion/6762806 – cutalion Sep 30 '13 at 12:11
  • Awesome, +1 although I somewhat agree with @superluminary, it's scary to have to use such an obscure (for AWS neophytes) JSON for such a basic requirement. – BenMorel Mar 21 '14 at 16:21
  • @Benjamin What is "basic" configuration for you is inappropriate for others, because everyone's security requirements are different. AWS provides a uniform way to customize these policies. Therefore, one must take the time to learn security policies properly and not shy away from a few simple lines of JSON. – afilina Dec 29 '17 at 16:36
  • I put this in and it works, AWS flashes a warning now on the policy tab: "You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket." I'd appreciate it if they even shared a link to explain how ELSE to enable downloading at URL level (i have to share a link to download the file) – Nikhil VJ Jul 27 '18 at 12:48
  • You have a number of options here. Make the object public, make the bucket public, share a pre-signed URL. – jarmod Jan 13 '20 at 00:40
  • @BlakeMiller: this seem to work, though it takes some time to fully propagate... – Ikar Pohorský May 22 '20 at 10:37
79

If you are uploading for the first time, you can set the files to be public on upload on the command line:

aws s3 sync . s3://my-bucket/path --acl public-read

As documented in Using High-Level s3 Commands with the AWS Command Line Interface

Unfortunately it only applies the ACL when the files are uploaded. It does not (in my testing) apply the ACL to already uploaded files.

If you do want to update existing objects, you used to be able to sync the bucket to itself, but this seems to have stopped working.

[Not working anymore] This can be done from the command line:

aws s3 sync s3://my-bucket/path s3://my-bucket/path --acl public-read

(So this no longer answers the question, but I'm leaving the answer here for reference, as it used to work.)
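
If you need to update existing objects today, one approach that still works is re-applying the ACL object by object with the low-level s3api commands (the same idea the s3api answers further down use). This is only a rough sketch: my-bucket and path/ are placeholders, and it makes one API call per object, so expect it to be slow on large buckets:

# List every key under the prefix and set its ACL to public-read, one call per object
aws s3api list-objects-v2 --bucket my-bucket --prefix path/ \
    --query 'Contents[].Key' --output text | tr '\t' '\n' |
while read -r key; do
    aws s3api put-object-acl --bucket my-bucket --key "$key" --acl public-read
done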

David Roussel
34

I had to change several hundred thousand objects. I fired up an EC2 instance to run this, which makes it all go faster. You'll want to install the aws-sdk gem first.

Here's the code:

require 'rubygems'
require 'aws-sdk'


# Change this stuff.
AWS.config({
    :access_key_id => 'YOURS_HERE',
    :secret_access_key => 'YOURS_HERE',
})
bucket_name = 'YOUR_BUCKET_NAME'


s3 = AWS::S3.new()
bucket = s3.buckets[bucket_name]
bucket.objects.each do |object|
    puts object.key
    object.acl = :public_read
end

Daniel Von Fange
  • The simple way is to upload them with the public_read flag set in the first place, but failing that, this is a good option. – superluminary Feb 07 '13 at 10:57
  • This code snipped is outdated, refer to my [answer](http://stackoverflow.com/questions/3142388/how-to-make-10-000-files-in-s3-public/31379500#31379500) – ksarunas Jul 13 '15 at 10:22
28

I had the same problem. The solution by @DanielVonFange is outdated, as a new version of the SDK is out.

Here is a code snippet that works for me right now with the AWS Ruby SDK:

require 'aws-sdk'

Aws.config.update({
  region: 'REGION_CODE_HERE',
  credentials: Aws::Credentials.new(
    'ACCESS_KEY_ID_HERE',
    'SECRET_ACCESS_KEY_HERE'
  )
})
bucket_name = 'BUCKET_NAME_HERE'

s3 = Aws::S3::Resource.new
s3.bucket(bucket_name).objects.each do |object|
  puts object.key
  object.acl.put({ acl: 'public-read' })
end

ksarunas
  • Fantastic answer - just the script I needed in a tight spot – Phantomwhale Jan 14 '16 at 06:39
  • @ksarunas In my case I needed to change the permissions from public to private, so I replaced public-read with private and the ACL was changed, but I'm still able to access the URL? – Rahul Apr 20 '20 at 19:41
22

Just wanted to add that with the new S3 Console you can select your folder(s) and select Make public to make all files inside the folders public. It works as a background task so it should handle any number of files.

[Screenshot: the "Make public" action in the S3 console]

Selcuk
  • Unfortunately it takes a long time and you can't close the browser while the command is running. Your browser is sending 2 requests for each file; in my case the two requests took 500 ms. If you have a lot of files it'll take a long time =( – Herlon Aguiar Mar 29 '18 at 20:08
  • And there's another problem: this will make the files fully public. If you only want public-read access, that's a problem. – Marcelo Agimóvel Nov 09 '18 at 00:43
  • BE VERY AWARE - I did this Make Public and the "progress bar" that pops up is so subtle, I thought it was done. I checked and probably spent an hour working on this before I realized that when you click Make Public a small, subtle progress bar shows up... grrr... since I closed the browser window about 10 times, I assume that killed it each time. I'm running it now - it is pretty quick - maybe 20 minutes for 120k images – Scott Mar 22 '20 at 20:56
20

Using the CLI:

aws s3 ls s3://bucket-name --recursive > all_files.txt && grep .jpg all_files.txt > files.txt && cat files.txt | awk '{cmd="aws s3api put-object-acl --acl public-read --bucket bucket-name --key "$4;system(cmd)}'

Alexander Vitanov
  • Couldn't you just use a pipe to grep instead of writing to disk with all_files.txt? This can be `aws s3 ls s3://bucket-name --recursive | grep .jpg | awk '{cmd="aws s3api put-object-acl --acl public-read --bucket bucket-name --key "$4;system(cmd)}'` – sakurashinken Oct 08 '19 at 20:55
  • @sakurashinken's answer works perfectly. If you see this, it's the one to use. – ajpieri Dec 01 '20 at 21:37
3

I had this need myself, but the number of files made it WAY too slow to do serially. So I wrote a script that does it on iron.io's IronWorker service. Their 500 free compute hours per month are enough to handle even large buckets (and if you do exceed that, the pricing is reasonable). Since it is done in parallel, it completes in less than a minute for the 32,000 objects I had. Also, I believe their servers run on EC2, so the communication between the job and S3 is quick.

Anybody is welcome to use my script for their own needs.
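
If you'd rather not depend on an external service, the same parallel idea can be sketched with just the aws CLI and xargs. This is only a sketch: my-bucket is a placeholder and -P 16 is an arbitrary degree of parallelism:

# Fan the per-object ACL calls out across 16 parallel aws CLI processes
aws s3api list-objects-v2 --bucket my-bucket --query 'Contents[].Key' --output text \
    | tr '\t' '\n' \
    | xargs -P 16 -I {} aws s3api put-object-acl --bucket my-bucket --key {} --acl public-read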

Eric Anderson
2

Have a look at BucketExplorer; it manages bulk operations very well and is a solid S3 client.

willbt
  • It's also now possible to bulk change permissions in Cyberduck (free) via the Info palette. – Taylor D. Edmiston Jul 24 '14 at 00:52
  • BucketExplorer is only useful if you have permission to list all buckets. Much better to use the CLI or an SDK for this operation and leave your users with restricted permissions. – Loren_ Jun 30 '16 at 17:07
0

You would think they would make public read the default behavior, wouldn't you? : ) I shared your frustration while building a custom API to interface with S3 from a C# solution. Here is the snippet that accomplishes uploading an S3 object and setting it to public-read access by default:

public void Put(string bucketName, string id, byte[] bytes, string contentType, S3ACLType acl) {
    string uri = String.Format("https://{0}/{1}", BASE_SERVICE_URL, bucketName.ToLower());
    DreamMessage msg = DreamMessage.Ok(MimeType.BINARY, bytes);
    msg.Headers[DreamHeaders.CONTENT_TYPE] = contentType;
    msg.Headers[DreamHeaders.EXPECT] = "100-continue";
    msg.Headers[AWS_ACL_HEADER] = ToACLString(acl);
    try {
        Plug s3Client = Plug.New(uri).WithPreHandler(S3AuthenticationHeader);
        s3Client.At(id).Put(msg);
    } catch (Exception ex) {
        throw new ApplicationException(String.Format("S3 upload error: {0}", ex.Message));
    }
}

The ToACLString(acl) function returns public-read, BASE_SERVICE_URL is s3.amazonaws.com and the AWS_ACL_HEADER constant is x-amz-acl. The Plug and DreamMessage stuff will likely look strange to you, as we're using the Dream framework to streamline our HTTP communications. Essentially we're doing an HTTP PUT with the specified headers and a special header signature per the AWS specifications (see this page in the AWS docs for examples of how to construct the authorization header).

To change the ACLs on 1,000 existing objects you could write a script, but it's probably easier to use a GUI tool to fix the immediate issue. The best I've used so far is from a company called CloudBerry for S3; it looks like they have a free 15-day trial for at least one of their products. I've just verified that it will allow you to select multiple objects at once and set their ACL to public through the context menu. Enjoy the cloud!

Tahbaza
0

If your filenames have spaces, you can take Alexander Vitanov's answer above and run it through jq:

#!/bin/bash
# Make every file in a bucket public (handles keys with spaces)
bucket=www.example.com
IFS=$'\n'
for tricky_file in $(aws s3api list-objects --bucket "${bucket}" | jq -r '.Contents[].Key')
do
  echo "$tricky_file"
  aws s3api put-object-acl --acl public-read --bucket "${bucket}" --key "$tricky_file"
done

mike