
I would like to have a (high-capacity FIFO) queue that I can put items into, while also controlling the rate at which items leave the queue.

It seems like SQS focuses on processing items in the queue as fast as possible, with no direct control over the outflow. Even with an SQS FIFO queue I don't see a good way to control the throughput of items leaving the queue.

Even using the visibility timeout seems to allow only for a very inefficient back-pressure implementation.

Is there a better AWS service for this use case?

Or is there a good approach using SQS that I don't see yet?

tcurdt

1 Answer


The whole idea of a queue is to decouple the producer from the consumer, which means that the producer produces messages at a certain rate and the consumer consumes them at a different rate.

If you want to consume messages at a lower rate, you have to adjust your consumer to pull messages at that rate. You don't necessarily have to consume every message as soon as it arrives; you can leave messages on the queue and consume them when you can. For example, you can have a consumer that polls the queue periodically, along the lines of the sketch below.
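A minimal sketch of such a throttled poller using boto3 (the queue URL and the target rate are assumptions, and `process` is a placeholder for your own logic):

```python
import time

import boto3

# Hypothetical queue URL and target rate -- adjust to your setup.
QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/my-queue"
MESSAGES_PER_MINUTE = 30

sqs = boto3.client("sqs")


def process(body: str) -> None:
    # Placeholder for your actual processing logic.
    print("processing:", body)


while True:
    # Long-poll for up to 10 messages at a time.
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
    )
    for message in response.get("Messages", []):
        process(message["Body"])
        # Delete the message only after it has been handled.
        sqs.delete_message(
            QueueUrl=QUEUE_URL,
            ReceiptHandle=message["ReceiptHandle"],
        )
        # Sleep between messages to cap the outflow rate.
        time.sleep(60 / MESSAGES_PER_MINUTE)
```

The messages simply stay in the queue until the consumer gets to them; just make sure the queue's retention period is long enough for your backlog.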

If your consumer is a Lambda, you can set a reserved concurrency limit for it to cap the number of functions running in parallel. If you want a more drastic limit, you can have the Lambda triggered by a CloudWatch scheduled event instead of by the queue. I would not really recommend the latter approach, though, because it does not scale well.
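As a rough sketch, assuming a consumer function named my-consumer (a hypothetical name), the reserved concurrency could be set with boto3 like this:

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap the (hypothetical) consumer function at a single concurrent execution,
# so at most one invocation processes messages at any time.
lambda_client.put_function_concurrency(
    FunctionName="my-consumer",
    ReservedConcurrentExecutions=1,
)
```

The same limit can also be set in the console or in CloudFormation; note that it caps parallelism, not the rate within a single invocation.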

Ervin Szilagyi
  • Indeed, decoupling is what queues are about. Unfortunately, even a concurrency limit of 1 on the Lambda would not allow for enough control. Using scheduled events has multiple problems, but scaling is one of them. And that's how I ended up asking the question. – tcurdt Sep 20 '21 at 11:47