
I am looking for a message queue with a built-in throttling feature. The use case is that the recipient worker pool may accept a lot of messages, but a service that the workers depend on may not be able to handle the load. It's not possible to shrink the worker pool, since the worker instances handle different types of messages.

So the feature I am looking for is throttling per topic. For a topic T, I want the queue to accept as many messages as the producers send, but throttle delivery to the consumers of topic T to, say, only 5 messages per minute.

Juzer Ali
  • I don't know the answer to this question, but as a comment: this is not a good idea. I would approach this problem the other way around: your service is the one that should have the throttling mechanism if it can't handle the load, so it'll force your workers to wait, which in turn makes the message broker queue the messages. This way your whole architecture is more resilient. – rlanvin Jan 12 '18 at 12:32
  • Sometimes you don't have control over the services. I am curious whether such a thing is available in the present technological landscape. – Juzer Ali Jan 12 '18 at 20:01
  • Well, if you don't have control over the service, then it's not your problem. Just hammer it, and whoever is responsible for the service will be forced to build a throttling mechanism. :-) More seriously, in my opinion it's not the message broker's job to know how much a particular service (not even a worker) can handle. It could be your worker's responsibility maybe, but even then I'm not convinced. – rlanvin Jan 12 '18 at 20:15
  • @rlanvin As much as you would want to parallelize computation jobs, it's often the database that is the bottleneck. It would be nice to be able to tweak the rate of consumption based on database load. – Juzer Ali Jan 13 '18 at 07:06
  • What are you going to do when your producers outpace your consumers? Based on your description, this is a likely scenario. – Luke Bakken Jan 14 '18 at 15:33
  • @LukeBakken My question pertains to a specific use case. Sometimes the load on the database is very high and sometimes, usually during non-working hours, the database is idle. I know that the database usually performs reasonably under a load of 'x' processes per minute. So now I have an upper limit on the processes I can run in parallel. That's why I need the throttling feature. As to your question, when producers outpace consumers, I'll let messages sit in the queue until the producers' rate goes down. My producers cannot outpace consumers indefinitely; that's my use case. – Juzer Ali Jan 15 '18 at 09:16
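
The worker-side throttling suggested in these comments could look roughly like the sketch below. It is only an illustration under assumptions not in the thread: it uses Guava's RateLimiter, a 5-permits-per-minute budget, and a hypothetical callSlowService() standing in for the dependent service.

```java
import com.google.common.util.concurrent.RateLimiter;

public class ThrottledWorker {
    // 5 permits per minute, i.e. 5/60 permits per second (assumed budget).
    private static final RateLimiter LIMITER = RateLimiter.create(5.0 / 60.0);

    // Called by the worker for each message it pulls off the queue.
    static void handle(String message) {
        LIMITER.acquire();            // blocks until a permit is available
        callSlowService(message);     // hypothetical call to the overloaded service
    }

    private static void callSlowService(String message) {
        System.out.println("processed " + message);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            handle("message-" + i);
        }
    }
}
```

Because acquire() blocks the worker thread, backpressure propagates to the broker, which queues the excess messages, which is essentially the behaviour the first comment describes.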

2 Answers


For Java, the following approaches may work; they should also be available for Node:

  • You can throttle messages by controlling your consumer's poll() calls.
  • Try sleeping the thread between calls to poll().

  • Use a longer timeout in poll(), and set the MAX_POLL_RECORDS_CONFIG
    property to control how many messages you receive in a single poll
    (see the sketch below).
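
A minimal sketch of that approach with the Kafka Java consumer follows. It is not part of the original answer: the broker address, group id, deserializers, and the 5-messages-per-minute budget are all assumptions, and only the topic name T comes from the question.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ThrottledTopicConsumer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "throttled-workers");       // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Cap how many records a single poll() may return.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 5);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("T")); // topic name from the question

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                }
                // Sleep out the rest of the minute: at most 5 records per minute.
                // Keep this well below max.poll.interval.ms (default 5 minutes),
                // otherwise the consumer is considered dead and its partitions are rebalanced away.
                Thread.sleep(60_000);
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.topic() + ": " + record.value()); // hand off to the worker here
    }
}
```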

donm

If you are using IronMQ, you can throttle messages if you are using "pull" queues; the throttling itself has to be implemented manually in the user's code (see the sketch below). If you were to use "push" queues, you would not be able to throttle messages; your consumers would receive the messages at the highest rate. Here is a link describing push and pull queues: https://dev.iron.io/mq/3/reference/push_queues/index.html
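
A rough sketch of what doing it manually from the user's code could look like for a pull queue is below. The PullQueue interface and its reserve/delete methods are hypothetical stand-ins, not the actual IronMQ client API, and the 5-per-minute budget is assumed; the real client's reserve and delete calls would be substituted.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

// Hypothetical minimal view of a pull queue; the real IronMQ client's
// methods for reserving and deleting messages would replace these.
interface PullQueue {
    List<String> reserve(int maxMessages); // fetch up to maxMessages messages
    void delete(String message);           // acknowledge a processed message
}

public class ThrottledPull {
    // Reserve at most budgetPerMinute messages, then wait out the minute.
    static void pullThrottled(PullQueue queue, int budgetPerMinute) throws InterruptedException {
        while (true) {
            for (String msg : queue.reserve(budgetPerMinute)) {
                process(msg);
                queue.delete(msg);
            }
            TimeUnit.MINUTES.sleep(1);
        }
    }

    static void process(String msg) {
        System.out.println("processed " + msg);
    }

    public static void main(String[] args) throws InterruptedException {
        // In-memory stub so the sketch runs without a real queue.
        ArrayDeque<String> backlog = new ArrayDeque<>(List.of("m1", "m2", "m3"));
        PullQueue stub = new PullQueue() {
            public List<String> reserve(int max) {
                List<String> out = new ArrayList<>();
                while (!backlog.isEmpty() && out.size() < max) {
                    out.add(backlog.poll());
                }
                return out;
            }
            public void delete(String message) { /* no-op for the stub */ }
        };
        pullThrottled(stub, 5);
    }
}
```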

If you have additional questions, please reach out to Iron.io support via chat, email, or phone.

  • While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. – Cleptus Nov 11 '20 at 07:47
  • Ok. Thanks. I'm new to the platform. – Nick Campion Nov 11 '20 at 20:51