
I have a topic with multiple subscribed clients running with the default prefetch. If one of the clients is slow, it slows down the other subscribed clients. I'd like to lower the prefetch limit for slow consumers, but since clients slow down at random, this would need to be done dynamically.

I'd like to prototype the following solution: create a queue for each subscriber. A pool of threads will remove events from the topic and copy them into these queues. Since each subscriber now has its own queue, the clients are independent of each other. I will set a prefetch limit on each queue; once that limit is reached, I will drop events. Drawback: memory is now required for each queue.
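A minimal sketch of this per-subscriber-queue idea (class and method names are mine, and a `String` stands in for a real event type): each subscriber gets its own bounded queue, and a non-blocking `offer()` drops events for a full (slow) subscriber's queue without stalling anyone else.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;

// Fan-out dispatcher: one bounded queue per subscriber. A dispatch
// thread pool would call dispatch() for each event taken off the topic.
// When a subscriber's queue is full (its per-queue "prefetch limit"),
// the event is dropped for that subscriber only.
public class FanOutDispatcher {
    private final List<BlockingQueue<String>> subscriberQueues = new CopyOnWriteArrayList<>();
    private final int perSubscriberLimit;

    public FanOutDispatcher(int perSubscriberLimit) {
        this.perSubscriberLimit = perSubscriberLimit;
    }

    // Register a subscriber; returns its private queue to poll from.
    public BlockingQueue<String> subscribe() {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(perSubscriberLimit);
        subscriberQueues.add(q);
        return q;
    }

    // Copy the event to every subscriber. offer() is non-blocking:
    // it returns false (dropping the event) if the queue is at its limit.
    public void dispatch(String event) {
        for (BlockingQueue<String> q : subscriberQueues) {
            q.offer(event);
        }
    }

    public static void main(String[] args) {
        FanOutDispatcher d = new FanOutDispatcher(2); // tiny limit for demo
        BlockingQueue<String> fast = d.subscribe();
        BlockingQueue<String> slow = d.subscribe();
        d.dispatch("e1");
        d.dispatch("e2");
        fast.poll(); fast.poll();        // fast consumer drains its queue
        d.dispatch("e3");                // slow queue is full -> e3 dropped for it
        System.out.println(fast.size() + " " + slow.size()); // prints "1 2"
    }
}
```

The fast consumer keeps receiving events at full rate while the slow one simply loses the overflow, which matches the drop-on-limit behavior described above.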

I would like some views on the above solution, or any other approach you think might fit my case.

I have added more details below for my use case:

listener1 processing speed: 142 rps
listener2 processing speed: 10 rps

event producing speed: 100 rps

default prefetch limit: 32000

Case 1: the prefetch limit is equal for both listeners. Within ~761 seconds the topic gets full, at which point it starts dropping events.

Case 2: the prefetch limit of the slow consumer is less than the prefetch limit of the fast consumer (listener2 prefetch limit: 64K). The above solution works well.

But sometimes listener2's processing speed increases and listener1's decreases (the speeds do not exactly reverse, but I am using extreme values), and that is where case 2 does not work. Now listener1 runs at 10 rps and listener2 at 142 rps. It takes 1523 seconds for the topic to get full before it starts dropping events, and once it starts dropping events, listener1 also starts processing at the same speed as listener2.

I'm looking for suggestions for getting each listener to run independently, without blocking the others.

eebbesen

1 Answer


Have you looked at the documentation on the ActiveMQ page for dealing with slow consumers? Basically the strategy is to use a pending message limit strategy to have the broker start throwing out older messages for consumers that are moving slowly and causing a backup. Since slow consumers cause a build-up of messages on the broker, you start to reach the configured limits, which causes the broker to slow down producers. By enacting a pending message limit you prevent the build-up that slows things down.
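In broker terms, that strategy is set per destination in `activemq.xml`; a sketch along the lines of the ActiveMQ docs (the `limit` value here is illustrative, tune it for your rates):

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- Apply to all topics; keep at most 50 pending messages
             per slow consumer, discarding the oldest beyond that. -->
        <policyEntry topic=">">
          <pendingMessageLimitStrategy>
            <constantPendingMessageLimitStrategy limit="50"/>
          </pendingMessageLimitStrategy>
        </policyEntry>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
</broker>
```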

You can also turn off producer flow control to allow the producers to keep going at their normal rate and just let messages spool to disk until the disk reaches its limits as well; this is covered in the documentation too.
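Turning flow control off is also a per-destination policy attribute; a sketch (the memory limit shown is illustrative):

```xml
<!-- Let producers run at full rate; past the memory limit,
     messages spool to disk instead of throttling producers. -->
<policyEntry topic=">" producerFlowControl="false" memoryLimit="64mb"/>
```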

Tim Bish
  • I did go through the documentation, and I have producer flow control disabled. Now what happens is my throughput is dependent on the slow consumer, and my slow consumer is not fixed. Hence I cannot use a pending message limit for one particular subscriber, because that subscriber might be slow only some of the time. – Jaikit Savla Mar 16 '13 at 17:20
  • If you want a better answer, then it's good to document what you've tried already and what results you are seeing. – Tim Bish Mar 16 '13 at 18:16
  • Thanks Tim for answering. I have edited my question above with more details. – Jaikit Savla Mar 17 '13 at 00:33