
We have been experimenting with RabbitMQ. During those experiments we have seen some strange behavior with respect to a worker queue.

One of the observations is that the read performance of a queue does not change when we add or remove processes that read from that queue. For example, a single process handles messages at a rate of 800 msg/sec. Adding a second (similar) process results in both processes handling messages at 400 msg/sec each, for the same 800 msg/sec total. When we shut down one of the processes, the message rate of the other increases to (you guessed it) 800 msg/sec.
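A back-of-the-envelope sketch of this behavior: RabbitMQ dispatches a queue's messages round-robin among its consumers, so if the broker side (not the consumers) is the bottleneck at some fixed total delivery rate, adding consumers only divides that rate. The numbers below are the hypothetical figures from the observation above, not measured values:

```python
# Sketch: round-robin dispatch from a rate-limited broker.
# Assumption: the broker can deliver at most BROKER_RATE msg/sec in
# total, regardless of how many consumers are attached to the queue.
BROKER_RATE = 800  # total deliveries per second (hypothetical)

def per_consumer_rate(n_consumers):
    """Messages/sec each consumer sees under round-robin dispatch."""
    return BROKER_RATE / n_consumers

for n in (1, 2):
    rate = per_consumer_rate(n)
    print(f"{n} consumer(s): {rate:.0f} msg/sec each, {rate * n:.0f} msg/sec total")
```

Under this assumption, the total never changes; only the share per consumer does, which matches the observed 800 → 400+400 → 800 pattern.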

This is not what we expected. Why does the throughput not double when we add a second reader to a queue?

We are using the RabbitMQ .NET client (in combination with the EasyNetQ Advanced API). We have publisher confirms switched on, use a prefetch_count (QoS) of 50, ack messages after they have been processed, and use durable exchanges, queues, and messages.
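To make the prefetch setting concrete, here is a stdlib-only Python sketch of what `basic.qos` with prefetch_count = 50 means: the broker delivers messages only while a consumer's window of unacknowledged messages has room, and each ack frees one slot of credit. This models the AMQP semantics, not any particular client library:

```python
from collections import deque

PREFETCH = 50  # basic.qos prefetch_count, as in the setup above

def dispatch(queue, unacked, prefetch=PREFETCH):
    """Deliver messages while the consumer's unacked window has room.

    Models AMQP basic.qos: the broker stops delivering once the
    consumer holds `prefetch` unacknowledged messages.
    """
    delivered = []
    while queue and len(unacked) < prefetch:
        msg = queue.popleft()
        unacked.append(msg)
        delivered.append(msg)
    return delivered

queue = deque(range(200))   # 200 messages waiting on the queue
unacked = deque()           # the consumer's unacked window

first_batch = dispatch(queue, unacked)
print(len(first_batch))     # 50: the window fills up and delivery stops

# The consumer processes and acks 10 messages...
for _ in range(10):
    unacked.popleft()

# ...and the broker refills only the freed credit:
print(len(dispatch(queue, unacked)))  # 10
```

With a slow consumer, a prefetch of 50 keeps a pipeline of in-flight work; with per-message acks, throughput is then paced by how fast acks come back rather than by how fast the queue could drain.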

We must be doing something wrong, any pointer in the right direction is very welcome...

Joost Reuzel
  • Are there unconsumed messages on the queue? 800/second (which is very low for RabbitMQ) could very well be the rate at which you publish. – C4stor Jun 06 '13 at 14:19
  • The queue is quite full; I pump 100,000 messages into it and then start the consumers. Note that the VM we are experimenting with is quite small. But still: here http://rabbitmq.1065348.n5.nabble.com/Durability-and-consumer-acknowledgement-extremely-slow-td26194.html#a26226 someone encountered the same issue, and it appears to be normal behavior. We are going to experiment with auto-ack to see the difference, but that will take some recoding of EasyNetQ... Will post the results. – Joost Reuzel Jun 09 '13 at 19:59
  • Do you need persistent messages? Writing every message to disk can be pretty costly. Other than that, if you're in trouble with EasyNetQ (which I never tried), to my taste the standard RabbitMQ.Client NuGet package works well :) – C4stor Jun 10 '13 at 08:25
  • Sorry for the late response. Yes, we do need persistent messages so they survive (RabbitMQ) server restarts. That said, we found a few bottlenecks in our code and upped the performance to what can be expected of a single small VM instance (4000 msg/sec). The described behavior did not change, though: when reading at full speed, the load is spread among the queue consumers. We will start trying other balancing techniques (using a different exchange), and experiment with synchronized queues over multiple machines... – Joost Reuzel Jun 24 '13 at 15:25
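The persistence cost raised in the comments can also be sketched: if every persistent message requires its own disk flush, total throughput is bounded by the flush rate no matter how many consumers are attached; batching flushes raises that bound. The flush cost below is a hypothetical number chosen only to echo the rates reported in this thread:

```python
# Sketch: why per-message disk persistence caps *total* throughput.
# Assumption (hypothetical figure): each persistent message costs one
# disk flush of ~1.25 ms unless the broker batches flushes.
FSYNC_SECONDS = 1 / 800  # ~1.25 ms per flush (hypothetical)

def max_total_rate(batch_size=1):
    """Upper bound on msg/sec when `batch_size` messages share one flush."""
    return batch_size / FSYNC_SECONDS

print(f"{max_total_rate():.0f} msg/sec with one flush per message")
print(f"{max_total_rate(5):.0f} msg/sec with 5 messages per flush")
```

This bound is a property of the broker's disk, not of the consumer count, which would explain why adding consumers redistributes rather than multiplies the rate.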

0 Answers