I have a bucket for my organization in Amazon S3 whose name looks like mydev.orgname

  • I have a Java application that connects to Amazon S3 with these credentials and can create and read files in the bucket

  • I now need a Python application to read data from the same bucket, so I am using boto for this

I do the following in order to get the bucket:

>>> import boto
>>> from boto.s3.connection import S3Connection
>>> from boto.s3.key import Key
>>> 
>>> conn = S3Connection('xxxxxxxxxxx', 'yyyyyyyyyyyyyyyyyyyyyy')
>>> conn
S3Connection:s3.amazonaws.com

Now when I try to get the bucket, I see this error:

>>> b = conn.get_bucket('mydev.myorg')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.7/site-packages/boto/s3/connection.py", line 389, in get_bucket
    bucket.get_all_keys(headers, maxkeys=0)
  File "/Library/Python/2.7/site-packages/boto/s3/bucket.py", line 367, in get_all_keys
    '', headers, **params)
  File "/Library/Python/2.7/site-packages/boto/s3/bucket.py", line 334, in _get_all
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>EEC05E43AF3E00F3</RequestId><HostId>v7HHmhJaLLQJZYkZ7sL4nqvJDS9yfrhfKQCgh4i8Tx+QsxKaub50OPiYrh3JjQbJ</HostId></Error>

But from the Java application everything seems to work.

Am I doing anything wrong here?

4 Answers

Giving the user a "stronger role" is not the correct solution. This is simply a problem with boto library usage. Clearly, you don't need extra permissions when using the Java S3 library.

The correct way to use boto in this case is:

b = conn.get_bucket('my-bucket', validate=False)
k = b.get_key('my/cool/object.txt') # will send HEAD request to S3
...

Basically, boto by default (which is a mistake on their part, IMHO) assumes you want to interact with the S3 bucket itself. Granted, sometimes you do want that, but then you should use credentials that have permission for bucket-level operations. The more common use case, though, is to interact with S3 objects, and for that you don't need any special bucket-level permissions, hence the validate=False kwarg.
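
A minimal end-to-end sketch of that object-level flow (the credentials and key path below are placeholders, and this assumes the credentials can at least read the objects themselves):

import boto
from boto.s3.connection import S3Connection

conn = S3Connection('ACCESS_KEY', 'SECRET_KEY')

# validate=False skips the bucket listing, so no bucket-level permission is needed
b = conn.get_bucket('mydev.orgname', validate=False)

# object-level calls only need read access to the key itself
k = b.get_key('path/to/file.txt')         # HEAD request to S3
if k is not None:
    data = k.get_contents_as_string()     # GET request to S3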

Pavel Repin

This answer worked for me :)

I did the following:

  • checked the S3 bucket policy settings
  • corrected the system time (a skewed clock can also cause 403 responses)
  • used bucket = conn.get_bucket(BUCKET_NAME, validate=False)
Wonjun Hwang

After giving my "User" a much stronger role, this error was gone. That means the user had not been granted the permission that get_bucket needs.
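
For reference, the bucket-level permission that a validated get_bucket() call needs is s3:ListBucket (its validation performs a zero-key listing of the bucket), while reading objects only needs s3:GetObject. A hypothetical minimal policy, written as a Python dict purely for illustration (the bucket name is a placeholder):

# hypothetical minimal policy: s3:ListBucket on the bucket for get_bucket(),
# s3:GetObject on its contents for reading files
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::mydev.orgname"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::mydev.orgname/*"],
        },
    ],
}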

daydreamer

  • This is not an answer. It is a comment. It should have been an *edit to* the question or a *comment on* the question. – Bruno Bronosky Mar 16 '15 at 22:39
  • @LegoStormtroopr, you really should be more careful before leaving these kinds of comments on an answer... daydreamer *is* the question author. – Sheridan Mar 17 '15 at 09:07
  • @BrunoBronosky, you should also be careful leaving these comments. Question authors are entirely allowed to provide answers to their own questions. – Sheridan Mar 17 '15 at 09:08
  • @daydreamer, in future, please leave more useful answers to your questions, or instead simply add a summary edit to your question to highlight your solution... this would stop over-keen users from mistakenly leaving these kinds of comments on your answers. – Sheridan Mar 17 '15 at 09:11
  • 3 years and this answer still has 0 votes. I'm going to stand by this being a comment, not an answer. More specifically, **an answer** would say what permissions were changed rather than "much stronger role". But then it would still be the **wrong solution** (for cases where you can control the code). The [highest voted answer](https://stackoverflow.com/a/14490668/117471) is the right solution (given the same caveats) because it works around the unexpected behavior in boto. – Bruno Bronosky Apr 30 '18 at 19:45

Read files from an Amazon S3 bucket using Python (boto3):

import boto3
import csv

# create a boto3 session with explicit credentials
session = boto3.Session(
                aws_access_key_id='XXXXXXXXXXXXXXXXXXXXXXX',
                aws_secret_access_key='XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
                region_name='XXXXXXXXXX')

s3 = session.resource('s3')

# get a handle on the bucket that holds your file
bucket = s3.Bucket('bucket name') # example: energy_market_processing

# get a handle on the object you want (i.e. your file)
obj = bucket.Object(key='file to read') # example: market/zone1/data.csv

# get the object
response = obj.get()

# read the contents of the file
lines = response['Body'].read()

# saving the file data in a new file test.csv
with open('test.csv', 'wb') as file:
    file.write(lines)
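
If the file is large, reading the whole body into memory is unnecessary; continuing from the bucket handle above, boto3 can stream the object straight to disk instead (same placeholder key name):

# stream the object directly to a local file without buffering it in memory
bucket.download_file('file to read', 'test.csv')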
Ajeet Verma