
I have a Windows service that is attempting to consume messages from some ActiveMQ queues. However, it is only getting some of the messages; the others get stuck as 'messages pending' in the queue. ActiveMQ tells me it has enqueued, let's say, 500 messages to the consumer but only 300 were dequeued. More than one listener is set up in the service. Here's the important part of the code:

private void setupListener(string queue, string brokerUri)
{
    try
    {
        ISession session = connectionConsumers[brokerUri].CreateSession();
        session.CreateConsumer(session.GetQueue(queue))
               .Listener += new MessageListener(consumer_Listener);
    }
    catch (Exception ex)
    {
        Log.Error("An exception has occurred setting up listener for " + queue + " on " + brokerUri + ": {0}, {1}", ex, ex.Message);
    }
}

void consumer_Listener(IMessage message)
{
    try
    {
        processLog((message as ITextMessage).Text);
        message.Acknowledge();
    }
    catch (NMSException ex)
    {
        Log.Error("ActiveMQ Connection Failure: {0}, {1}", ex, ex.Message);
    }
    catch (Exception ex)
    {
        Log.Error("An exception has occurred trying to process a message: {0}, {1}", ex, ex.Message);
    }
}

Is there something wrong with the way I'm acknowledging messages that would cause certain ones not to be acknowledged? Is it a concurrency issue? I'm not sure whether all of the messages are still making it through the processLog function (which writes them to my database).

EDIT: I think it has more to do with acknowledgements not happening properly (for some reason). I am not getting any exceptions in my logs. However, ActiveMQ shows the following: Dispatch Queue is being filled

From what I've read, the dispatch queue is being filled with messages that were sent to the consumer but not acknowledged. Why could this be?
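For reference, one thing worth ruling out is the acknowledgement mode: `CreateSession()` with no arguments defaults to AutoAcknowledge in Apache.NMS, and with ClientAcknowledge a call to `Acknowledge()` acks everything delivered on the session up to that message. Here is a sketch of the listener setup with the mode made explicit (assuming Apache.NMS; `connectionConsumers` is from the code above, while the `consumers` list is a hypothetical field added so the consumer reference is not discarded):

```csharp
// Sketch, not a confirmed fix. IndividualAcknowledge acks exactly the
// message Acknowledge() is called on, so one message whose handler
// throws cannot interact with the acks of its neighbours.
ISession session = connectionConsumers[brokerUri]
    .CreateSession(AcknowledgementMode.IndividualAcknowledge);

// Hold on to the consumer instead of discarding the reference, so each
// queue's consumer can later be inspected or closed cleanly.
IMessageConsumer consumer = session.CreateConsumer(session.GetQueue(queue));
consumer.Listener += consumer_Listener;
consumers.Add(consumer);  // hypothetical List<IMessageConsumer> field
```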

  • What is the pre-fetch size for your consumers? Are the consumers pooled? – Ralf Jun 18 '14 at 13:02
  • The pre-fetch size would be the default, so 1000. The consumers are not pooled. – Andrew G Jun 18 '14 at 14:08
  • If you set the pre-fetch to 0 (polling), does the problem go away? It should not be a problem as long as the consumers are not pooled, but it is the only thing I can think of right now. – Ralf Jun 18 '14 at 15:03
  • I tried changing it by changing my connection uri to 'tcp://server:61616?consumer.prefetchSize=0' Nothing changed. However in my ActiveMQ manager it states the prefetch is still set to 1000, so maybe I set it wrong. – Andrew G Jun 18 '14 at 15:41
  • Try `tcp://server:61616?jms.prefetchPolicy.all=0` to globally disable pre-fetch for all consumers. See [here](http://activemq.apache.org/what-is-the-prefetch-limit-for.html) for details. – Ralf Jun 18 '14 at 16:00
  • I had tried that previously. Still no dice – Andrew G Jun 18 '14 at 18:55
  • Typically that happens with pre-fetch > 0 and pooled consumers. But since you say you are not pooling the consumers and you (tried to) disable the pre-fetch, I don't know what causes this in your case. – Ralf Jun 19 '14 at 06:37
  • Thanks for trying @Ralf, I think it has something to do with 'acknowledge' not being called each time. ActiveMQ tells me I have a Dispatched Queue with the unconsumed messages in it. I believe this means the messages were sent by the queues but not acknowledged by the consumer. – Andrew G Jun 19 '14 at 13:33
  • As I said, the problem of un-acked messages can arise if the messages are pushed into the pre-fetch buffer of a consumer, but the consumer is neither used nor closed because it is idle in a consumer pool. – Ralf Jun 19 '14 at 13:50

1 Answer


The problem had to do with our queues being virtual destinations.
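For readers hitting the same wall: with ActiveMQ's default virtual-topic convention, producers publish to a topic named `VirtualTopic.<name>` while each consuming application reads from its own queue, `Consumer.<app>.VirtualTopic.<name>`. The broker copies each message into every matching consumer queue, so enqueue/dequeue counts can look mismatched if a consumer subscribes to the wrong destination. A minimal consumer-side sketch (`MyService` and `Orders` are illustrative names, not from the question):

```csharp
// Subscribe to this application's own consumer queue for the virtual
// topic, per ActiveMQ's default Consumer.<app>.VirtualTopic.<name> naming.
IDestination dest = session.GetQueue("Consumer.MyService.VirtualTopic.Orders");
IMessageConsumer consumer = session.CreateConsumer(dest);
consumer.Listener += consumer_Listener;
```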

  • I know it's been many years since this was 'answered' but it would have been much better if the accepted answer included more information about the actual problem and resolution. – mihalios Feb 06 '22 at 11:20
  • I was young and lazy - I wish I remembered how this was resolved so that I could add the details! – Andrew G Mar 17 '22 at 03:16