I've noticed a number of other questions about Lambda concurrency, but none seem to match my issue exactly, so here's a new one:
I have a Lambda function, written in Python using the requests library, that is triggered by SQS. I have a reserved concurrency limit set, but when I test by firing 1000 SQS messages, only 1 or 2 instances of the function run.
The SQS trigger settings are as follows:
Activate trigger: No
Batch size: 1000
Batch window: 100
Concurrent batches per shard: None
Last processing result: None
Maximum age of record: None
Maximum concurrency: 800
On-failure destination: None
Report batch item failures: Yes
Retry attempts: None
Split batch on error: None
Starting position: None
Tumbling window duration: None
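In case the console is showing me something stale, this is roughly how I read the event source mapping back with boto3 to double-check its state and concurrency settings (the function name is a placeholder):

```python
def summarize_mapping(mapping):
    """Pull out the fields relevant to concurrency from one event source mapping."""
    scaling = mapping.get("ScalingConfig", {})
    return {
        "State": mapping.get("State"),
        "BatchSize": mapping.get("BatchSize"),
        "MaximumConcurrency": scaling.get("MaximumConcurrency"),
    }

def print_mappings(function_name):
    # boto3 is imported locally so summarize_mapping can be used without AWS access
    import boto3
    client = boto3.client("lambda")
    resp = client.list_event_source_mappings(FunctionName=function_name)
    for mapping in resp["EventSourceMappings"]:
        print(summarize_mapping(mapping))
```

Calling `print_mappings("my-function")` (placeholder name) prints the state of each trigger attached to the function.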
The lambda concurrency settings are as follows:
Function concurrency: Use reserved concurrency
Reserved concurrency: 800
The general configuration of the lambda function is as follows:
Memory: 128 MB
Ephemeral storage: 512 MB
Timeout: 15 min 0 sec
SnapStart: None
The SQS queue is configured with a dead-letter queue, but nothing is reaching it. As far as I can tell, the messages get stuck in flight and then seem to disappear. Lambda has run a maximum of 6 concurrent executions while testing my script, but normally it's about 2, with 1 failed.
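This is roughly how I watch the in-flight count (the queue URL is a placeholder; `ApproximateNumberOfMessagesNotVisible` is the "in flight" number):

```python
def parse_counts(attrs):
    """Convert the string-valued SQS attributes into ints, defaulting to 0."""
    keys = (
        "ApproximateNumberOfMessages",            # visible / waiting
        "ApproximateNumberOfMessagesNotVisible",  # in flight
        "ApproximateNumberOfMessagesDelayed",     # delayed
    )
    return {k: int(attrs.get(k, "0")) for k in keys}

def queue_counts(queue_url):
    # boto3 is imported locally so parse_counts can be used without AWS access
    import boto3
    sqs = boto3.client("sqs")
    resp = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=[
            "ApproximateNumberOfMessages",
            "ApproximateNumberOfMessagesNotVisible",
            "ApproximateNumberOfMessagesDelayed",
        ],
    )
    return parse_counts(resp["Attributes"])
```

Polling `queue_counts(queue_url)` during a test run shows whether messages are sitting in flight rather than being consumed.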
I send the messages to SQS with a Python script using boto3. I know they reach the queue, because the received-message metrics show about 1000 after running the script.
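My sending script is roughly equivalent to this sketch (the queue URL and message bodies are placeholders; I batch the sends because `send_message_batch` accepts at most 10 entries per call):

```python
import json

def chunks(seq, size):
    """Split seq into lists of at most `size` items (the SQS batch limit is 10)."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def send_all(queue_url, bodies):
    # boto3 is imported locally so chunks can be tested without AWS access
    import boto3
    sqs = boto3.client("sqs")
    for batch in chunks(bodies, 10):
        entries = [{"Id": str(i), "MessageBody": body}
                   for i, body in enumerate(batch)]
        resp = sqs.send_message_batch(QueueUrl=queue_url, Entries=entries)
        if resp.get("Failed"):
            print("failed entries:", resp["Failed"])
```

For the 1000-message test I call something like `send_all(queue_url, [json.dumps({"n": i}) for i in range(1000)])`, and no failed entries are reported.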
If anyone has any ideas, please share them. I'm at a complete loss.