I have one Lambda function that tests URLs using Puppeteer and Chrome. When I invoke 50 Lambdas at the same time, Chrome is not able to load all of the passed URLs. What could be the reason for this? I suspect the invocations share the CPU via time slicing.

Kaushal panchal
-
What is your lambda memory configuration? I've read that it needs at least 512MB. See here: https://oxylabs.io/blog/puppeteer-on-aws-lambda – brushtakopo Nov 16 '22 at 08:53
-
What do you mean by "not able to load all the passed URLs"? What happens and what errors are you receiving? How many URLs are you passing to each individual Lambda function? How are you invoking "50 lambdas at the same time"? – John Rotenstein Nov 16 '22 at 09:22
1 Answer
One of the best features of AWS Lambda is scalability: the service provisions the resources each invocation needs on its own. Concurrent invocations do not share CPU with each other; each one runs in its own isolated execution environment, and time slicing between them would defeat the whole concept of serverless. That said, these scenarios could be your problem:
- Invocations that reuse the same execution environment share the `/tmp` directory. Your code might be storing more than the allowed ephemeral storage there, which might be the cause of your problem. I suggest checking the invocation logs to see if you can find any errors regarding ephemeral storage.
- As you said, you are sending 50 requests at the same time. If the target is a single server, it might be flooded and its memory might fill up; in that case, the server can no longer respond to you.

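For the second point, one possible mitigation (a generic concurrency-limiter sketch, not something the answer prescribes) is to cap how many URL checks are in flight at once instead of firing all 50 simultaneously:

```javascript
// Hypothetical sketch: run at most `limit` async tasks concurrently,
// so a single target server is not hit by all 50 requests at once.
async function mapWithLimit(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  // Each worker repeatedly claims the next unprocessed index.
  async function worker() {
    while (next < items.length) {
      const i = next++; // safe: no await between the check and the increment
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    worker
  );
  await Promise.all(workers);
  return results;
}

module.exports = { mapWithLimit };
```

Here `fn` would be your Puppeteer check for a single URL, e.g. `mapWithLimit(urls, 5, checkUrl)` to keep at most five pages loading at a time.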
Leo