
I've seen similar issues on other threads but none with conclusive answers.

I spin up around four consumers (written in Ruby using the Bunny client gem) that subscribe to the same queue and process messages. Everything works fine until roughly 20,000-40,000 messages have been consumed, and then the consumers simply stop receiving messages. The connections and channels stay open and the server still recognizes the consumers, but they just don't receive messages anymore.

I don't think it's a prefetch issue, as has been suggested in similar threads. I've set the prefetch count at various levels and it doesn't solve the problem. The issue isn't that a single consumer fetches all the messages before the others can; rather, all of the consumers stop at once.
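
For reference, the prefetch is set at the channel level, roughly like this (a minimal sketch, not my exact code; the count of 10 is just a placeholder):

# Sketch: channel-level prefetch (basic.qos). Limits how many unacked
# messages each consumer can hold; set it before the subscribe call.
ch = Bunny.new(connection_parameters).start.create_channel
ch.prefetch(10)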

I'm using the hosted RabbitMQ service CloudAMQP, so I thought it could be a performance issue there, but publishing messages still works fine and I have the same problem regardless of the instance size I choose. Nothing looks strange in the logs.

I should add that I am explicitly acknowledging each message with ch.acknowledge(delivery_info.delivery_tag, false).

I'm a bit stumped here and would really appreciate any help. Please let me know if I left out any important details.

Some example code:

ch = Bunny.new(connection_parameters).start.create_channel

# Block on this thread and ack each message only after it has been processed.
ch.queue(queue).subscribe(consumer_tag: 'worker', block: true, manual_ack: true) do |delivery_info, _metadata, msg|
  process_message msg
  ch.acknowledge(delivery_info.delivery_tag, false)
end
Gideon A.
  • can you post some sample code in the question, that reproduces the problem? – Derick Bailey Jan 29 '16 at 14:35
  • thanks in advance Derick, I should add that the message processing isn't lightweight. It's handling various other http requests and whatnot. Is there a way to tell if it's a timeout/heartbeat issue? – Gideon A. Jan 30 '16 at 00:29
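
One way to start ruling a heartbeat timeout in or out is to set the heartbeat interval explicitly when opening the connection instead of relying on the negotiated default. A minimal sketch, assuming connection_parameters is an options hash (the 30-second value is just an example):

# Sketch: pass an explicit heartbeat interval (in seconds) to Bunny.
# If the stall appears much sooner with a very small value, or goes away
# with a larger one, that would point toward a heartbeat/timeout problem.
conn = Bunny.new(connection_parameters.merge(heartbeat: 30)).start
ch = conn.create_channel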
