
I'm testing NServiceBus with Azure Queue backend. I configured NServiceBus using all the default settings and have this code to send a message:

string data;
while ((data = Console.ReadLine()) != null)
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    Bus.Send("testqueue", new Message() { Data = data });
    sw.Stop();
    Console.WriteLine("Send time: " + sw.ElapsedMilliseconds);
}

When running on my dev machine, it takes ~700ms to send a message to the queue. The queue is geographically far away; a direct write using the Azure Storage client takes ~350ms.

Now I have two questions:

  1. I don't want the thread to block on the Bus.Send call. One option is to use the async/await pattern. Another option is to have an in-memory queue for delivering messages, similar to 0MQ. The last option doesn't guarantee delivery, of course, but assuming there are some monitoring capabilities, I can live with that.
  2. Why does sending a message take twice the time of a simple write to the queue? Can this be optimized?
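For reference, the in-memory hand-off described in option 1 can be sketched with a BlockingCollection and a background sender task. This is only a sketch under the question's assumptions (the `Bus` and `Message` types come from the NServiceBus setup above; the capacity value is illustrative), and as noted it gives no delivery guarantee if the process dies before the queue drains:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Bounded in-memory outbox: callers enqueue instantly, a single
// background task drains it and performs the blocking Bus.Send.
var outbox = new BlockingCollection<Message>(boundedCapacity: 10000);

Task.Run(() =>
{
    foreach (var msg in outbox.GetConsumingEnumerable())
    {
        Bus.Send("testqueue", msg); // still blocks, but off the caller's thread
    }
});

// Caller side: returns as soon as the item is queued in memory.
outbox.Add(new Message { Data = data });
```

Messages sitting in `outbox` are lost on a crash, which is the trade-off the question already acknowledges.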
Alon Catz

1 Answer


What is the size of the data property?

I just ran this test myself (using the string "Whatever" as data) and I see an average latency of ~50ms for every remote send, with a throttle every 15 seconds making the calls take around ~300ms at that point (this is expected).

Do note that Azure Storage is a remote HTTP-based service and is therefore subject to latency due to distance; as far as I know, it has no published latency targets either. Furthermore, it has active throttling in place to push back when data is being moved around internally, which happens roughly every 15 seconds (see my storage internals talk to understand what is going on behind the scenes: http://www.slideshare.net/YvesGoeleven/azure-storage-deep-dive).

On the topic of async/await: if your purpose is to unblock the UI thread, then go ahead and do it this way:

await Task.Factory.StartNew(() => _bus.Send(new Message
{
    Whatever = data
})).ConfigureAwait(false);

If your purpose is to achieve higher throughput, you should use more sending threads instead, as a thread needs to wait for the HTTP response anyway; that is either the sending thread itself or the background thread spawned by async/await. Do note, however, that every queue is also throttled individually (at several hundred msgs/sec), no matter how many sending threads you use.
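A minimal sketch of the multiple-sending-threads approach, assuming the same `_bus` and `Message` as the snippet above (the `messages` list and degree of parallelism are illustrative):

```csharp
using System.Threading.Tasks;

// Fan out sends across several threads so multiple HTTP requests
// are in flight at once; each individual send still blocks its thread.
Parallel.For(0, messages.Count,
    new ParallelOptions { MaxDegreeOfParallelism = 32 },
    i =>
    {
        _bus.Send(new Message { Whatever = messages[i] });
    });
```

Tune MaxDegreeOfParallelism together with ServicePointManager.DefaultConnectionLimit (below), since the connection limit caps how many requests can actually be in flight.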

PS: It's also advised to change the following settings on the .NET ServicePointManager, to optimize it for lots of small HTTP requests:

ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.Expect100Continue = false;
ServicePointManager.DefaultConnectionLimit = 48;

Hope this helps...

Yves Goeleven
  • Thanks for your response. The problem is not with the long latencies. 300ms from my dev machine is expected. The problem is that there is no async/await option and a thread must be blocked while a message is being sent. We expect to serve 1000 requests per second on one machine. If we go this way, we will have at least 1000 threads just sitting and waiting for responses from NServiceBus. With async/await, the thread is released for the duration of the wait. Otherwise, the thread just sits there in a wait state and blocks. – Alon Catz Jul 10 '14 at 05:35
  • It's on the team's radar to provide that, but for now I suggest you use multiple send threads, by wrapping your sends in a parallel for, for example. – Yves Goeleven Jul 10 '14 at 07:46
  • And use multiple queues as well, btw; you won't be able to push thousands of messages through a single queue consistently, no matter how many threads you use. – Yves Goeleven Jul 10 '14 at 07:47
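The multiple-queues suggestion in the last comment can be sketched as a simple round-robin dispatcher. This assumes the `Bus` and `Message` types from the question; the queue names are purely illustrative:

```csharp
using System.Threading;

// Spread sends across several queues, since each queue has its own
// individual throughput ceiling. Queue names are hypothetical.
static readonly string[] Queues =
    { "testqueue-0", "testqueue-1", "testqueue-2", "testqueue-3" };
static int _counter;

static void SendRoundRobin(Message msg)
{
    // Unsigned cast keeps the index non-negative after counter overflow.
    var index = (int)((uint)Interlocked.Increment(ref _counter) % (uint)Queues.Length);
    Bus.Send(Queues[index], msg);
}
```

The receiving side would then need an endpoint subscribed to each of these queues.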