I have a topic with multiple subscribed clients, all running with the default prefetch limit. If one of the clients is slow, it slows down the other subscribers. I'd like to lower the prefetch limit for slow consumers, but since clients slow down unpredictably, this has to happen dynamically.
I'd like to prototype the following solution: create a queue for each subscriber. A pool of threads will remove events from the topic and copy them into each subscriber's queue. Since each subscriber now has its own queue, the clients are independent of each other. I will set a prefetch-style size limit on each queue; once that limit is reached, I will drop events for that queue. Drawback: memory is now required for each queue.
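The per-subscriber-queue design above can be sketched like this in Java. This is only a sketch of the idea, not an implementation against any broker API; all class and method names (`FanOut`, `subscribe`, `publish`) are illustrative. The key point is the non-blocking `offer()`: a full (slow) queue drops the event instead of stalling the producer or the other subscribers.

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

/**
 * Sketch of the proposed fan-out: one bounded queue per subscriber.
 * publish() copies each event into every queue with a non-blocking
 * offer(); when a slow subscriber's queue is full, the event is
 * dropped for that subscriber only.
 */
class FanOut<E> {
    private final Map<String, BlockingQueue<E>> queues = new ConcurrentHashMap<>();
    private final ExecutorService workers = Executors.newCachedThreadPool();
    private final int limit; // per-subscriber "prefetch" limit

    FanOut(int limit) { this.limit = limit; }

    /** Register a subscriber; its backlog is capped at `limit` events. */
    void subscribe(String id) {
        queues.put(id, new ArrayBlockingQueue<>(limit));
    }

    /** Drain a subscriber's queue on a pool thread, at its own pace. */
    void start(String id, Consumer<E> handler) {
        BlockingQueue<E> q = queues.get(id);
        workers.submit(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    handler.accept(q.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }

    /** Fan out one event; returns how many subscribers dropped it. */
    int publish(E event) {
        int dropped = 0;
        for (BlockingQueue<E> q : queues.values()) {
            if (!q.offer(event)) dropped++; // full queue => drop, never block
        }
        return dropped;
    }

    void shutdown() { workers.shutdownNow(); }
}
```

Note the memory trade-off shows up directly here: worst-case buffered events are (number of subscribers × limit), so the per-queue limit can be much smaller than a broker-wide prefetch of 32K.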
I would like some views on the above solution, or any other solution you think might fit my case.
I have added more details about my use case below:
listener1 processing speed: 142 rps
listener2 processing speed: 10 rps
event producing speed: 100 rps
default prefetch limit: 32000
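A side note, assuming the broker is ActiveMQ (the ~32K default hints at it; ActiveMQ's default topic prefetch is 32766): the prefetch limit can be overridden per consumer through a destination option, without changing the broker-wide default. A non-runnable fragment, with an illustrative topic name:

```java
// Per-consumer prefetch via an ActiveMQ destination option (hypothetical topic name)
Destination dest = session.createTopic("MY.TOPIC?consumer.prefetchSize=100");
MessageConsumer consumer = session.createConsumer(dest);
```

This still requires picking the limit up front, so on its own it does not solve the dynamic part of the problem.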
Case 1: the prefetch limit is equal for both listeners. Within ~761 seconds the topic fills up, and then it starts dropping events.
Case 2: the slow consumer's prefetch limit is lower than the fast consumer's (listener2 prefetch limit: 64K). Here the above solution works well.
But sometimes listener2's processing speed increases while listener1's decreases (the speeds don't exactly swap, but I'm using extreme values), and that is where case 2 fails. Now listener1: 10 rps, listener2: 142 rps. It takes 1523 seconds for the topic to fill before it starts dropping events, and once it does, listener1 also ends up processing at the same speed as listener2.
I'm looking for suggestions for getting each listener to run independently, without blocking the others.