To me this seemed like a simple use case when I started, but it turned out to be a lot harder than I had anticipated.
Problem
I have an AWS SQS queue acting as a job queue that triggers a worker AWS Lambda. However, since the worker Lambdas share non-scalable resources, it is important to limit the number of concurrently running Lambdas to (for the sake of example) no more than 5 running simultaneously.
Simple enough, according to Managing Concurrency for a Lambda Function: "Reserved concurrency also limits the maximum concurrency for the function, and applies to the function as a whole."
However, setting the Reserved concurrency property to 5 seems to be completely ignored by SQS, with the queue's Messages in Flight value in my case showing closer to 20-30 concurrent executions, depending on the number of messages put into the queue.
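For reference, this is roughly how I am setting the limit (a minimal sketch using boto3; the function name worker-lambda is a placeholder for my actual function):

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve 5 concurrent executions for the worker function.
# "worker-lambda" is a placeholder name, not the real function.
lambda_client.put_function_concurrency(
    FunctionName="worker-lambda",
    ReservedConcurrentExecutions=5,
)
```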
Question
The closest I have come to a solution is to use an SQS FIFO queue and set the MessageGroupId to a value that is either randomly selected or alternated between 1 and 5 (sketched below). However, due to uneven workload this is not optimal, as it would be better to have the concurrency distributed by actual workload rather than by chance.
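Roughly what that workaround looks like on the producer side, as a sketch; it assumes a FIFO queue and uses a placeholder queue URL:

```python
import json
import random
import uuid

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/jobs.fifo"  # placeholder

def enqueue_job(job: dict) -> None:
    # Pick one of 5 message groups at random. Lambda will not process more
    # than one batch per group concurrently, which caps concurrency at ~5,
    # but the spread across groups is by chance rather than by workload.
    group_id = str(random.randint(1, 5))
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(job),
        MessageGroupId=group_id,
        MessageDeduplicationId=str(uuid.uuid4()),
    )
```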
I have also tried using AWS Step Functions, as the Map state has a MaxConcurrency parameter. This seemed to work well on small job queues, but because each state has an input/output limit of 32 kB it was not feasible in my use case.
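For completeness, the relevant part of that attempt looked roughly like this (a sketch of the state machine definition built in Python; the worker Lambda ARN is a placeholder):

```python
import json

# Sketch of an Amazon States Language definition with a Map state that
# caps parallelism at 5. The Lambda ARN is a placeholder.
definition = {
    "StartAt": "ProcessJobs",
    "States": {
        "ProcessJobs": {
            "Type": "Map",
            "ItemsPath": "$.jobs",
            "MaxConcurrency": 5,
            "Iterator": {
                "StartAt": "Worker",
                "States": {
                    "Worker": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:worker-lambda",
                        "End": True,
                    }
                },
            },
            "End": True,
        }
    },
}

print(json.dumps(definition, indent=2))
```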
Has anyone found a better or alternative solution? Are there any other ways Reserved concurrency is supposed to be used?
Similar
Here are some similar questions I have found, but I think my question is different because I am not interested in limiting the total number of invocations, and (although I have not tried it myself) I cannot see why triggers from S3 or a Kinesis Stream would behave differently from SQS.