
I am attempting to access a text file stored in an AWS S3 bucket. At present, it contains only the word "Test".

At first, I thought I was having problems with fs.readFile, but I've now discovered that the problem is more fundamental: I cannot access the file at all. Node.js on AWS does not appear to be able to see it.

I am using the following Lambda function:

const aws = require('aws-sdk');
const s3 = new aws.S3({ apiVersion: '2006-03-01' });

exports.handler = async (event, context, callback) => {
    // Get the object from the event and show its content type
    const bucket = event.Records[0].s3.bucket.name;
    const key = event.Records[0].s3.object.key; // decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
    console.log('Bucket = ' + bucket);
    console.log('key = ' + key);

    var params = {Bucket: bucket, Key: key};
    console.log('Checking file existence');
    console.log(params);
    console.log('Calling s3.getObject');
    s3.getObject(params, function(err, data) {
        console.log('S3.getObject called');
        console.log('err = ' + err);
        if (err) {
            console.log(err, err.stack); // an error occurred
            callback(err);
        } else {
            console.log(data);           // successful response
            callback(null, null);
        }
        console.log('Leaving s3.getObject');
    });
};

The test event contains the following:

{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "wgtiplists",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::wgtiplists"
        },
        "object": {
          "key": "tiplist.txt",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}

And the results look like this (I've trimmed off the timestamp and Request ID texts):

INFO    Bucket = s3://wgtiplists
INFO    key = tiplist.txt
INFO    Checking file existence
INFO    { Bucket: 's3://wgtiplists', Key: 'tiplist.txt' }
INFO    Calling s3.getObject

From this, I conclude that the S3 function is not being called, although I might be mistaken.

What am I doing wrong?

Spoc42
  • The incorrect constructor invocation aside, this code actually seems to execute getObject (when run outside of Lambda). Suspect that your Lambda function has exited before the getObject call returns. Change your code to some variant of `return s3.getObject(params).promise();` or await the result of that. – jarmod Mar 11 '21 at 14:41
  • Unfortunately, the change does not make any difference. I still get the same response. – Spoc42 Mar 11 '21 at 18:00
  • Can you update your post by adding the latest incarnation of the code, that fixes the aforementioned problems (incorrect constructor call, incorrect bucket name, Lambda handler not returning a promise) so we can see where you are currently. Also, include the CW Logs. – jarmod Mar 11 '21 at 18:13
  • Also, please try the code at https://stackoverflow.com/a/30655755/271415 – jarmod Mar 11 '21 at 20:00
  • I've done that. It still gives the same faulty result. – Spoc42 Mar 13 '21 at 03:47
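jarmod's diagnosis can be reproduced without AWS at all. The sketch below is plain Node.js: `getObjectMock` is a hypothetical stand-in for `s3.getObject`'s callback style, and the two handlers are illustrative names, not the asker's actual code. It shows why the log stops right after "Calling s3.getObject": an async handler's promise resolves as soon as the function returns, and the Lambda runtime then freezes or exits before the pending callback ever fires.

```javascript
// Hypothetical stand-in for s3.getObject's callback-style API:
// it invokes the callback asynchronously, like a network call would.
function getObjectMock(params, cb) {
    setTimeout(() => cb(null, { Body: 'Test' }), 10);
}

// Broken pattern: the async handler returns immediately, so the
// callback passed to getObjectMock is abandoned, just as in the
// question's handler.
async function brokenHandler() {
    let callbackRan = false;
    getObjectMock({}, () => { callbackRan = true; });
    return callbackRan; // still false when the handler returns
}

// Fixed pattern: wrap the callback API in a promise and await it,
// so the handler stays alive until the result arrives.
async function fixedHandler() {
    const data = await new Promise((resolve, reject) =>
        getObjectMock({}, (err, data) => (err ? reject(err) : resolve(data)))
    );
    return data.Body;
}
```

With the real SDK, the equivalent fix is the one jarmod suggests above: `return s3.getObject(params).promise();` (or `await` it) instead of passing a callback.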

2 Answers


I believe you need parentheses here:

const s3 = new aws.S3();

You should not be adding s3:// to the bucket name, just do this:

const bucket = event.Records[0].s3.bucket.name;

Mark B

Look at the CloudWatch logs; you might get some insight from there. Probably, as @Mark B mentioned, the bucket name you are passing is wrong.

Dilum Darshana