
I'm trying to pull a large file from S3 and write it to RDS using pandas dataframes.

I've been googling this error and haven't seen it anywhere. Does anyone know what this extremely generic-sounding error could mean? I've encountered memory issues previously, but expanding the memory removed that error.

{
  "errorType": "Runtime.ExitError",
  "errorMessage": "RequestId: 99aa9711-ca93-4201-8b1e-73bf31b762a6 Error: Runtime exited with error: signal: killed"
}
Matt Takao
    Can you share your Lambda's logs? You're probably running out of memory again. If you're reading a large file from S3, be sure to read it in reasonably sized chunks and process each chunk before reading the next one. Don't read the entire file into memory at once. – Paul Nov 26 '19 at 19:44
  • thanks you're right, i just expanded my memory to the max and it worked. – Matt Takao Nov 26 '19 at 19:56
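To illustrate Paul's suggestion, here is a minimal sketch of chunked CSV processing with pandas. The in-memory buffer stands in for the S3 object stream, and the column names, chunk size, and function name are made up for the example:

```python
import io

import pandas as pd

def process_in_chunks(file_obj, chunksize=1000):
    """Read a CSV stream in fixed-size chunks so only one chunk
    is held in memory at a time."""
    rows_processed = 0
    for chunk in pd.read_csv(file_obj, chunksize=chunksize):
        # In a real Lambda, write each chunk to RDS here
        # (e.g. with chunk.to_sql(...)) before reading the next one.
        rows_processed += len(chunk)
    return rows_processed

# Simulate a file streamed from S3 with an in-memory buffer.
csv_data = io.StringIO("a,b\n1,2\n3,4\n5,6\n7,8\n")
print(process_in_chunks(csv_data, chunksize=2))  # 4
```

Because only one chunk lives in memory at a time, peak memory stays roughly constant regardless of the file size.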

7 Answers


I got the same error when executing a Lambda that processes an image; only a few results come up when searching the web for this error.

Increase the AWS Lambda memory by 1.5x or 2x to resolve it. For example, increase the memory from 128 MB to 512 MB.

This runtime error occurs because the Lambda function is killed before it can execute its remaining lines of code; moreover, it is not possible to catch the error and run the rest of the code.

[Screenshot: configuration for AWS Lambda memory]
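The same change can be made from the CLI instead of the console. A hedged sketch, where `my-function` and the memory size are placeholders:

```shell
# Raise the function's memory allocation (this also increases its CPU share).
aws lambda update-function-configuration \
  --function-name my-function \
  --memory-size 512
```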

Bira
    I have got the same error integrating sentry in an API server running express.js, and solved the same way. – Jacopo Jul 23 '20 at 17:11
    Yeah, you would think they could at least give a more meaningful log. Hit the memory limit as well. – Overcode Sep 24 '20 at 02:09
    Just to expand on this answer (and in reply to @Overcode) - the end line of the log actually specifies the max memory used during the invocation, which can help you diagnose whether the function running out of memory is actually the cause for error: `REPORT RequestId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Duration: 37091.65 ms Billed Duration: 37092 ms Memory Size: 1769 MB Max Memory Used: 1769 MB` – Marceli Wac Jul 10 '23 at 23:04
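Building on Marceli Wac's comment, the `REPORT` log line can be checked programmatically to see whether an invocation hit its memory ceiling. A small sketch; the regex and function name are my own invention, not part of any AWS API:

```python
import re

def hit_memory_limit(report_line):
    """Return True when Max Memory Used reached the configured Memory Size."""
    size = int(re.search(r"Memory Size: (\d+) MB", report_line).group(1))
    used = int(re.search(r"Max Memory Used: (\d+) MB", report_line).group(1))
    return used >= size

report = ("REPORT RequestId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx "
          "Duration: 37091.65 ms Billed Duration: 37092 ms "
          "Memory Size: 1769 MB Max Memory Used: 1769 MB")
print(hit_memory_limit(report))  # True
```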

You're reaching the memory limit because of boto3's parallel downloading of your file. You could increase the Lambda's memory, but it's cheating... you'll just pay more.

By default, the S3 transfer manager downloads files larger than multipart_threshold=8MB using max_concurrency=10 parallel threads. That means it will hold about 80 MB of your data in memory, plus threading overhead.

You could reduce max_concurrency to 2, for example; that would use about 16 MB, which should fit into your Lambda's memory.

Please note that this may slightly decrease your download performance.

import boto3
from boto3.s3.transfer import TransferConfig

# Limit the number of concurrent download threads to cap memory usage.
config = TransferConfig(max_concurrency=2)
s3 = boto3.client('s3')
s3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME', Config=config)

Reference: https://docs.aws.amazon.com/cli/latest/topic/s3-config.html

Vincent J

It is not timing out at 15 minutes, since that would log the error "Task timed out after 901.02 seconds", which the OP did not get. As others have said, the OP is running out of memory.

Tom Hubbard

First of all, AWS Lambda is not meant for long-running, heavy operations like pulling large files from S3 and writing them to RDS.

This process can take too much time depending on the file size and data. The maximum execution time of AWS Lambda is 15 minutes, so whatever task you are doing in your Lambda must complete within the time limit you configured (15 minutes at most).

With large and heavy processing in Lambda you can get out-of-memory errors, timeout errors, or sometimes the need to extend your processing power.

Another way of doing such large and heavy processing is AWS Glue Jobs, an AWS-managed ETL service.

M Hamza Razzaq
  • The solution is to increase the AWS Lambda memory by 1.5x or 2x,
  • because when this runtime error occurs, the Lambda function does not execute any other line of code; it is not possible to catch the error and run the rest of the code.
  • This error acts as a signal to the Lambda execution environment to terminate the current execution.
Ashish Sondagar

I had the same issue. Increasing the Lambda memory resolved it.


To add: if anyone is using AWS Amplify, as in the project I was working on, there are still Lambdas under the hood, and you can access and configure them directly from the AWS Lambda console.

Oded Ben Dov