
This seems like a pretty basic question, but I am losing messages when the consumer falls over before acknowledging them. I have set up the broker with an exchange audit:exchange and a queue audit:queue bound to it. Both are durable, and as expected, if I send messages while no consumer is active they sit on the queue and get processed by the consumer when it starts up. However, if I put a breakpoint in the consumer and kill the process halfway through, the message is not requeued - it just seems to get lost. The consumer is set up using the annotation:

@RabbitListener(queues = "audit:queue")
public void process(Message message) {
    routeMessage(message);  // stop here and kill the process - the message is removed from the queue
}
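
For reference, the declarations look something like this (simplified - the direct exchange type and the routing key below are just placeholders, not the actual values):

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AuditAmqpConfig {

    @Bean
    public Queue auditQueue() {
        // durable queue - survives a broker restart
        return new Queue("audit:queue", true);
    }

    @Bean
    public DirectExchange auditExchange() {
        // durable, not auto-delete; the exchange type here is a placeholder
        return new DirectExchange("audit:exchange", true, false);
    }

    @Bean
    public Binding auditBinding() {
        // routing key "audit" is a placeholder; a RabbitAdmin bean (not shown) declares all of these on the broker at startup
        return BindingBuilder.bind(auditQueue()).to(auditExchange()).with("audit");
    }
}
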
Alasdair54

2 Answers


I can't reproduce your issue.

With the breakpoint triggered, I see the message still in the queue (unacked=1) on the rabbit console.

When the process is killed, the message goes back to ready.

Have you configured the listener container factory to use AcknowledgeMode.NONE?

That will exhibit the behavior you describe.

The default is AUTO, which means the message will only be acknowledged when the listener returns successfully.
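
For reference, the acknowledge mode is set on the listener container factory, something like this (the surrounding configuration class and bean wiring are just for illustration):

import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.rabbit.annotation.EnableRabbit;
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableRabbit
public class ListenerConfig {

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // AUTO (the default) acks only after the listener method returns successfully;
        // NONE means the broker considers the message acked as soon as it is delivered.
        factory.setAcknowledgeMode(AcknowledgeMode.AUTO);
        return factory;
    }
}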

If you still think there's an issue, please supply the complete test case.

Gary Russell
  • Thanks for the response. I should have mentioned that I am using spring-rabbit 1.4.5, if this is relevant. Also, I did check the RabbitMQ doc, which says in the Reliable Delivery section: "If a message is delivered to a consumer and then requeued (**because it was not acknowledged before the consumer connection dropped, for example**) ... " which is the behaviour I was expecting. – Alasdair54 Nov 27 '15 at 15:58
  • I will try to put together a simple example ... difficult as our work machines are not connected to the internet and I am not allowed to mail code to myself, but I will try to do this over the weekend. – Alasdair54 Nov 27 '15 at 16:00
  • Damn, I keep hitting enter and the comment gets posted. I am definitely using acks - when a message is being processed I see the unacked count go up to 1 (and the ready count go to 0) in the admin console. Then, a couple of seconds after killing the consumer, the unacked count goes down to 0 and the ready count is still 0. If my consumer throws an exception the expected behaviour is seen - the message goes back on the queue and gets redelivered. – Alasdair54 Nov 27 '15 at 16:03
  • As I said, I have a simple test case and I simply don't see the behavior you are seeing; when I kill the Java task, the message goes back to ready. Are you sure you don't have some other consumer running someplace that's getting the message? (Trust me, I have encountered such a situation before and it drove me nuts.) You can see a list of consumers on the queue's page on the management console. – Gary Russell Nov 27 '15 at 16:20

Sorry, this was my bad (I just wasted a few hours ... sigh). I was killing the app from within my IDE, which probably detaches and then kills the process, allowing it to proceed just far enough that it actually does send the ack. When I killed the process from a terminal instead, it worked exactly as expected. Particular apologies to you, Gary, for wasting your time as well.
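
For anyone else hitting this: to reproduce the expected requeue, kill the JVM abruptly from a terminal rather than stopping it from the IDE, e.g. on Linux/macOS something like:

jps            # find the pid of the consumer JVM
kill -9 <pid>  # SIGKILL - the consumer never gets to ack, so the message goes back to ready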

Alasdair54