
I've mounted a public S3 bucket on an AWS EC2 instance using Goofys (similar to s3fs), which lets me access files in the S3 bucket on the EC2 instance as if they were local paths. I want to use these files in my AWS Lambda function (Python), passing the local paths in through the event parameter. Given that AWS Lambda only provides 512 MB of /tmp storage, is there a way I can give the Lambda function access to the files on my EC2 instance?

AWS Lambda works really well for my purpose (I'm calculating a statistical correlation between two files, which takes 1-1.5 seconds), so it'd be great if anyone knows a way to make this work.

Appreciate the help.

EDIT:

In my AWS Lambda function, I am using the Python library pyranges, which expects local paths to files.

These types of tools that "mount" S3 are really just making the same API calls that your application itself can make, for example listing a bucket and downloading a file. The only benefit of using a mounting tool is that it can work with software that expects a traditional local filesystem. If you are writing your own code (e.g. an AWS Lambda function), then it would be better to call the Amazon S3 APIs directly. – John Rotenstein Dec 31 '19 at 22:38

1 Answer


In my AWS Lambda function, I am using the Python library pyranges, which expects local paths to files.

You have a few options:

  • Have your Lambda function first download the files to the local /tmp folder using boto3, before invoking pyranges (a minimal sketch follows after this list).
  • Possibly use S3Fs to emulate file handles for S3 objects (see the second sketch below).
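
A minimal sketch of the first option, assuming the bucket name and object keys are passed in the event and that the files are BED files that pyranges.read_bed can parse (both of those details are assumptions, so adjust them to your actual event shape and file format):

    import boto3
    import pyranges as pr

    s3 = boto3.client("s3")

    def lambda_handler(event, context):
        # Hypothetical event shape: the bucket and the two object keys.
        bucket = event["bucket"]
        key_a = event["key_a"]
        key_b = event["key_b"]

        # /tmp is the only writable location in Lambda (512 MB by default),
        # so both files together must fit under that limit.
        local_a = "/tmp/file_a.bed"
        local_b = "/tmp/file_b.bed"
        s3.download_file(bucket, key_a, local_a)
        s3.download_file(bucket, key_b, local_b)

        # pyranges now sees ordinary local paths.
        gr_a = pr.read_bed(local_a)
        gr_b = pr.read_bed(local_b)

        # ... compute the correlation between gr_a and gr_b here ...
        return {"statusCode": 200}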
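
For the second option, a rough sketch with S3Fs. Note that pyranges' own readers expect filesystem paths, so reading through a file handle typically means going through pandas first and building the PyRanges object from the DataFrame; the bucket/key names below are made up:

    import pandas as pd
    import pyranges as pr
    import s3fs

    # anon=True works for a public bucket; drop it if credentials are needed.
    fs = s3fs.S3FileSystem(anon=True)

    # fs.open returns a file-like object that pandas readers accept.
    with fs.open("my-public-bucket/data/file_a.bed", "rb") as f:
        df = pd.read_csv(f, sep="\t", header=None,
                         names=["Chromosome", "Start", "End"])

    gr_a = pr.PyRanges(df)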
Mark B