
I'm trying to upload images to an S3 bucket using the serverless framework. When I call the endpoint after deploy the code fails with an Access Denied error. What am I doing wrong?

The error using 'serverless logs -f fileDownload':

ERROR   Unhandled Promise Rejection     {"errorType":"Runtime.UnhandledPromiseRejection","errorMessage":"AccessDenied: Access Denied","reason":{"errorType":"AccessDenied","errorMessage":"Access Denied","code":"AccessDenied","message":"Access Denied","region":null,"time":"2020-05-08T14:06:11.767Z","requestId":"874D7C86A4C6BE45","extendedRequestId":"r8xyvcrK9su5c+slhX5L/uh4/Y/sdFnUgPcebHpSTNpbnf39EnAZJET750P8t0iXy8UR81SiYZc=","statusCode":403,"retryable":false,"retryDelay":17.606445772028543,"stack":
["AccessDenied: Access Denied"
,"    at Request.extractError (/var/task/node_modules/aws-sdk/lib/services/s3.js:835:35)"
,"    at Request.callListeners (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:106:20)"
,"    at Request.emit (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:78:10)"
,"    at Request.emit (/var/task/node_modules/aws-sdk/lib/request.js:683:14)"
,"    at Request.transition (/var/task/node_modules/aws-sdk/lib/request.js:22:10)"
,"    at AcceptorStateMachine.runTo (/var/task/node_modules/aws-sdk/lib/state_machine.js:14:12)"
,"    at /var/task/node_modules/aws-sdk/lib/state_machine.js:26:10"
,"    at Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:38:9)"
,"    at Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:685:12)"
,"    at Request.callListeners (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:116:18)"
]}
,"promise":{},"stack":
["Runtime.UnhandledPromiseRejection: AccessDenied: Access Denied"
,"    at process.<anonymous> (/var/runtime/index.js:35:15)"
,"    at process.emit (events.js:310:20)"
,"    at process.EventEmitter.emit (domain.js:482:12)"
,"    at processPromiseRejections (internal/process/promises.js:209:33)"
,"    at processTicksAndRejections (internal/process/task_queues.js:98:32)"]}

The serverless.yml:

service: serverless-resize-image-s3

custom:
  # This line should create the bucket. Strange, though, that I don't see the bucket when
  # I log in to the AWS console. Even stranger: when I tried to create the bucket
  # using the console I got an error saying the bucket exists, even though it's invisible.
  bucket: files
  region: us-east-1
  default_stage: prod
  apigwBinary: 
    types:
      - '*/*'

plugins:
  - serverless-apigw-binary
  - serverless-apigwy-binary
  # Offline is needed to run the thing in a docker container and test using minio
  # This is the only part of the code that actually works at the moment.
  - serverless-offline

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, self:custom.default_stage}
  # I've seen a number of variations on this theme, so far no configuration I've tried
  # has resulted in the AccessDenied error disappearing
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        # I don't explicitly list anything but I read somewhere that a 404 can turn into a
        # 403 if this right doesn't exist
        - 's3:ListBucket'
      # Found somebody saying that the arn should not have the '/*' for ListBucket, I guess that
      # does make sense
      Resource: "arn:aws:s3:::*"
    - Effect: 'Allow'
      Action:
        - 's3:PutObject'
        - 's3:GetObject'
      # Found somebody saying that a reference to somewhere else in the yml didn't work for him
      # And somebody else suggested just replacing the whole thing with a *
      Resource: "arn:aws:s3:::*/*"
      #Resource: "arn:aws:s3:::files/*"
      #Resource: "arn:aws:s3:::${self:custom.bucket}/*"

package:
  excludeDevDependencies: true
  exclude:
# I thought that excluding aws-sdk would be necessary in order to use the global
# one instead. But even with this line here I still get my AccessDenied errors.
    - node_modules/aws-sdk

# If I ignore everything in node_modules I get 'Cannot find module' errors
# But allowing each individual module can take a while. A dir list shows 268
# entries.
#    - node_modules/**
#    - '!node_modules/serverless-http/**'
#    - '!node_modules/express/**'
#    - '!node_modules/depd/**'
#    - '!node_modules/merge-descriptors/**'

functions:
  fileUpload:
    handler: upload.app
    events:
      - http: put /v1/upload
  fileDownload:
    handler: download.app
    events:
      # This is the endpoint I'm testing the s3 query with; it's simpler than v2.
      - http: get /v1/download
      - http:
          method: get
          path: /v2/download
          # This is the part I actually want to test, found a post somewhere that said the
          # serverless-apigwy-binary plugin will use this to turn my base64 data into binary.
          # Hopefully that will allow me to see my image in the browser
          contentHandling: CONVERT_TO_BINARY
  imageResize:
    handler: image.app
    events:
      - http: get /v1/image

The source of download.js:

'use strict';

const serverless = require('serverless-http');
const express = require('express');
const AWS = require('aws-sdk');

const app = express();
const s3 = new AWS.S3();

app.get('/v1/download', async (req, res) => {
    const fileKey = req.query['id'];

    // getObject resolves to a response envelope; the file bytes live in data.Body.
    const data = await s3.getObject({ Bucket: 'files', Key: fileKey }).promise();

    res.setHeader('Content-Type', 'image/png');
    res.end(data.Body.toString('base64'));
});

module.exports.app = serverless(app);

Any help would be appreciated. (It seems I'm not allowed to post the question until there is more text rather than just code.)

DV82XL

2 Answers


To upload to a bucket, I use just this:

iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:PutObject
    Resource: "arn:aws:s3:::my-bucket/*"

I see you are using two '- Effect: Allow' statements; maybe the problem is there. Try using just one. Or, just for testing, put the statement that grants the upload first:

iamRoleStatements:
  - Effect: 'Allow'
    Action:
      - 's3:PutObject'
      - 's3:GetObject'
    Resource: "arn:aws:s3:::*/*"
  - Effect: 'Allow'
    Action:
      - 's3:ListBucket'
    Resource: "arn:aws:s3:::*"

I am assuming that your user's permissions are enabled; if not, that is certainly the problem. Enable the permissions in AWS IAM.

  • I already found my problem. I was using the name 'files' for the bucket. Which I don't own. As soon as I picked a name that didn't exist yet everything was fine. What a **** issue to get stuck on :) – Jurgen Voorneveld May 12 '20 at 13:54
  • See my answer below, https://stackoverflow.com/a/63126994/2434307. Depending on what you're trying to do with the file during upload to S3, you might need permission to other S3 actions as well. This article also talks about similar problem, https://aws.amazon.com/premiumsupport/knowledge-center/s3-403-upload-bucket/ – Jose Quijada Jul 28 '20 at 04:26

I had a similar-sounding issue, and it turned out I was missing the s3:PutObjectTagging privilege on my Lambda role. My Node.js app adds tagging when it uploads; below are the parameters I pass to S3.put():

const uploadParams = {
  Bucket: chunk.bucketName,
  Key: chunk.filePath,
  Tagging: 'created_by=MY-TAG',
  Body: passThrough
};

The funny thing is that when I ran the code locally it uploaded fine; it only complained when the Node.js code ran in the cloud environment. For full details check out my reply here.
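
For the record, a sketch of what that extra privilege looks like in serverless-framework terms, assuming the iamRoleStatements style from the question (my-bucket is a placeholder):

```yaml
# Sketch: s3:PutObject alone is not enough when the upload also sets tags;
# s3:PutObjectTagging must be granted too. "my-bucket" is a placeholder.
iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:PutObject
      - s3:PutObjectTagging
    Resource: "arn:aws:s3:::my-bucket/*"
```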

Jose Quijada