My goal is to access a CSV file that's located in an S3 bucket. The file has the following columns: event_id, ds, yhat, yhat_lower, yhat_upper. I found the following example here:
>>> import csv
>>> with open('eggs.csv', newline='') as csvfile:
...     spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
...     for row in spamreader:
...         print(', '.join(row))
However, what this example doesn't cover is how to apply it directly to a file in an S3 bucket. This is how I currently try to access the file:
import boto3

BUCKET_NAME = 'fbprophet'
FORECAST_DATA_OBJECT = 'forecast.csv'

# create an S3 client with the project's AWS credentials
s3 = boto3.client(
    's3',
    aws_access_key_id=settings.ML_AWS_ACCESS_KEY_ID,
    aws_secret_access_key=settings.ML_AWS_SECRET_ACCESS_KEY,
)

csv_obj = s3.get_object(Bucket=BUCKET_NAME, Key=FORECAST_DATA_OBJECT)
Update:
obj = s3.get_object(Bucket=BUCKET_NAME, Key=FORECAST_DATA_OBJECT)
data = obj['Body'].read().decode('utf-8')  # Body is a byte stream, so decode it to text first
spamreader = csv.reader(data.splitlines(), delimiter=' ', quotechar='|')
for row in spamreader:
    print(', '.join(row))
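Putting the pieces together, below is a minimal sketch of what I think the full read should look like. It assumes the file is a plain comma-separated, UTF-8 encoded CSV with a header row (event_id, ds, yhat, yhat_lower, yhat_upper), so the default csv dialect applies and DictReader can key each row by column name; io.StringIO turns the decoded body into the file-like object csv expects:

import csv
import io

# reuse the s3 client, BUCKET_NAME and FORECAST_DATA_OBJECT defined above
obj = s3.get_object(Bucket=BUCKET_NAME, Key=FORECAST_DATA_OBJECT)
body = obj['Body'].read().decode('utf-8')  # StreamingBody -> bytes -> str

# DictReader reads the header row, so each row becomes a dict keyed by column name
reader = csv.DictReader(io.StringIO(body))
for row in reader:
    print(row['event_id'], row['ds'], row['yhat'])

Is this the right way to do it, or is there a more direct way to hand the S3 object to the csv module?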