
I'm working with Azure Service Bus Queues in a request/response pattern using two queues and in general it is working well. I'm using pretty simple code from some good examples I've found. My queues are between web and worker roles, using MVC4, Visual Studio 2012 and .NET 4.5.

During some stress testing, I end up overloading my system and some responses are not delivered before the client gives up (which I will fix, not the point of this question).

When this happens, I end up with many messages left in my response queue, all well beyond their ExpiresAtUtc time. My message TimeToLive is set for 5 minutes.

When I look at the properties for a message still in the queue, it is clearly set to expire in the past, with a TimeToLive of 5 minutes.

I create the queues if they don't exist with the following code:

if (!namespaceManager.QueueExists(RequestQueueName))
{
    namespaceManager.CreateQueue(new QueueDescription(RequestQueueName)
    {
        RequiresSession = true,
        DefaultMessageTimeToLive = TimeSpan.FromMinutes(5) // messages expire if not handled within 5 minutes
    });
}

What would cause a message to remain in a queue long after it is set to expire?

Keith Murray
1 Answer


As I understand it, there is no background process cleaning these up. Only the act of moving the queue cursor forward with a call to Receive causes the server to skip past and dispose of expired messages, returning the first message that is not expired (or none if all are expired).
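For what it's worth, here is a minimal sketch of that behavior using the .NET Microsoft.ServiceBus.Messaging client (the connection string and queue name variables are placeholders, and this uses a plain non-session receive for simplicity):

using Microsoft.ServiceBus.Messaging;

// Sketch only: each Receive call advances the queue cursor. The broker
// discards any expired messages it skips over and returns the first
// live message, or null if nothing unexpired is available.
QueueClient responseClient = QueueClient.CreateFromConnectionString(connectionString, ResponseQueueName);

BrokeredMessage message = responseClient.Receive(TimeSpan.FromSeconds(5));
if (message != null)
{
    // Handle the live message, then remove it from the queue.
    message.Complete();
}
// Expired messages skipped along the way are disposed of server-side;
// they are never handed back to the receiver.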

Drew Marsh
  • Well that's interesting and not quite what I expected. Though I am constantly receiving new responses on the queue, which should clear out the expired messages if it is working as you suggest. I receive messages with sessions using `AcceptMessageSessionAsync`. Could this be ignoring / skipping the expired messages? Also I am using the Service Bus Explorer app [link](http://code.msdn.microsoft.com/windowsazure/Service-Bus-Explorer-f2abca5a) to monitor the queues, and selecting the Receive All Messages menu item only pulls one message from the queue. – Keith Murray Jul 16 '13 at 21:51
  • Great question and I cannot say for sure. The behavior is kind of undocumented as far as I can tell. It's not hard to imagine that sessions do affect exactly which messages are processed by a receive. Ultimately though, if you're not actually receiving the messages, do you really care that they're there? One other suggestion might be to set `EnableDeadLetteringOnMessageExpiration = true` (sketched after these comments) and see if that changes the behavior, but then you have a dead letter queue you'll need to tend to yourself. – Drew Marsh Jul 16 '13 at 22:30
  • Thanks for the additional comments. In my case, I do care that there are messages clogging the queue as many of them have a payload. Eventually my queue will fill up. I'll experiment with dead lettering and see what happens. – Keith Murray Jul 16 '13 at 23:11
  • Are they really clogging you in the sense that you're unable to read other messages, or do you just mean they are using up overall space in the queue? My suspicion is that you're still seeing them because the specific session is not being actively received from. – Drew Marsh Jul 16 '13 at 23:36
  • Yes, the messages remaining in the queue are `BrokeredMessage`s sent to a specific `SessionId` that contain the response that the initial requester is waiting for. I think in these cases, the requester WebRole has timed out and therefore not reading any more messages with that particular `SessionId`. My assumption was that these 'orphaned' messages would time out and disappear from the queue based on the `TimeToLive` parameter. – Keith Murray Jul 17 '13 at 00:16
  • And on a side note, I've been running some stress testing this afternoon and my queues are working smooth as butter with nothing being left behind, even when my roles have appeared to be overwhelmed at times. I didn't change any code, but did delete the response queue before I started this last round of testing. Perhaps there was an Azure infrastructure problem that was affecting my previous queue? – Keith Murray Jul 17 '13 at 00:19
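To make the dead-lettering suggestion from the comments concrete, here is a hedged sketch (the queue and variable names are placeholders): with EnableDeadLetteringOnMessageExpiration set to true, expired responses are moved to the queue's dead-letter sub-queue instead of lingering behind unread sessions, and that sub-queue can be drained with an ordinary session-less receiver.

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Sketch only: create the response queue so expired messages are
// dead-lettered rather than left sitting behind abandoned sessions.
if (!namespaceManager.QueueExists(ResponseQueueName))
{
    namespaceManager.CreateQueue(new QueueDescription(ResponseQueueName)
    {
        RequiresSession = true,
        DefaultMessageTimeToLive = TimeSpan.FromMinutes(5),
        EnableDeadLetteringOnMessageExpiration = true
    });
}

// The dead-letter sub-queue is a normal queue without sessions.
string deadLetterPath = QueueClient.FormatDeadLetterPath(ResponseQueueName);
QueueClient deadLetterClient = QueueClient.CreateFromConnectionString(connectionString, deadLetterPath);

BrokeredMessage expired;
while ((expired = deadLetterClient.Receive(TimeSpan.FromSeconds(2))) != null)
{
    // Inspect or log the orphaned response, then delete it for good.
    expired.Complete();
}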