
MongoDB operations are getting starved in a RabbitMQ consumer.

rabbitConn.createChannel(function(err, channel) {
    channel.consume(q.queue, async function(msg) {
        // The consumer listens for messages on queue A, say, based on a binding key.

        await Conversations.findOneAndUpdate(
            {'_id': 'someID'},
            {'$push': {'messages': {'body': 'message body'}}},
            function(error, count) {
                // Passing a callback so that the query is executed immediately, as mentioned in the
                // Mongoose docs: http://mongoosejs.com/docs/api.html#model_Model.findOneAndUpdate
            });
    });
});

The problem is that if a large number of messages are being read, the MongoDB operations are starved and only execute once the queue has no more messages. So if there are 1000 messages in the queue, all 1000 messages are read first, and only then are the MongoDB operations called.

  1. Would running the workers in a different Node.js process work?

Ans: Tried decoupling the workers from the main thread; it does not help.

  2. I have also written a load balancer with 10 workers, but that does not seem to help either. Is the event loop not prioritizing the MongoDB operations?

Ans: Does not help either; the 10 workers read from the queue and only execute the findOneAndUpdate once there is nothing more to read from the queue.

Any help would be appreciated.

Thank you

  • There's really not enough to go on here, but try setting the pre-fetch count to something like 1 or 2 (you'll also have to acknowledge the messages when done) and see if that helps. – theMayer Jul 12 '18 at 12:59
  • Also, is the mongo operation being called once per message, or once overall? – theMayer Jul 12 '18 at 13:00
  • The Mongo operation is called once per message. I will try setting the prefetch count. I think I have found a solution: I tried using a bulk write with batches of 50 messages and it worked like a charm (sketched below the comments). Will test some more. Thanks for your prompt reply. – Nissim Kurle Jul 12 '18 at 13:19
  • Essentially, with no prefetch and auto-ack set on, you have no message queuing. Messages will go straight from the publisher to the subscriber. – theMayer Jul 12 '18 at 14:23
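For reference, a minimal sketch of the batching approach mentioned in the comment above, assuming the same channel and Conversations model from the question; the batch size of 50 and the flush() helper are illustrative, and a real consumer would also flush on a timer so a partially filled batch is not left waiting.

// Rough sketch of batching: buffer messages and flush them to MongoDB in bulk.
// Assumes the `channel` and `Conversations` model from the question; BATCH_SIZE
// and the flush() helper are illustrative, not part of the original post.
const BATCH_SIZE = 50;
let buffer = [];

async function flush() {
    if (buffer.length === 0) return;
    const batch = buffer;
    buffer = [];
    // One round trip for up to BATCH_SIZE updates instead of one per message.
    await Conversations.bulkWrite(batch.map(function(msg) {
        return {
            updateOne: {
                filter: {'_id': 'someID'},
                update: {'$push': {'messages': {'body': msg.content.toString()}}}
            }
        };
    }));
    // Acknowledge the batch only after it has been persisted.
    batch.forEach(function(msg) { channel.ack(msg); });
}

channel.consume(q.queue, async function(msg) {
    buffer.push(msg);
    if (buffer.length >= BATCH_SIZE) {
        await flush();
    }
}, {noAck: false});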

1 Answer


Based on the description of the problem, I think you have a case of no message queuing happening. This can happen when you have a bunch of messages sitting in the queue, then subscribe a consumer with auto-ack set to true and no prefetch count. This answer describes in a bit more detail what happens in this case.

If I had to guess, I'd say the JavaScript code is spending all of its allocated cycles downloading messages from the broker rather than processing them into Mongo. Adding a prefetch count, while simultaneously disabling auto-ack, may solve your issue.
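If it helps, here is a rough sketch of what that might look like with amqplib, reusing the names from your snippet; the prefetch value of 10 is an arbitrary starting point, and error handling is omitted.

rabbitConn.createChannel(function(err, channel) {
    // Limit the number of unacknowledged messages delivered to this consumer,
    // so the broker stops pushing new messages while MongoDB work is in flight.
    channel.prefetch(10);

    channel.consume(q.queue, function(msg) {
        Conversations.findOneAndUpdate(
            {'_id': 'someID'},
            {'$push': {'messages': {'body': msg.content.toString()}}},
            function(error, doc) {
                // Acknowledge only after the write completes; the broker then
                // delivers the next message, up to the prefetch limit.
                channel.ack(msg);
            });
    }, {noAck: false});
});

With manual acks and a prefetch limit, the broker only ever hands the consumer a bounded number of unacknowledged messages at a time, so the Mongo writes get a chance to run between deliveries instead of piling up behind the message downloads.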

theMayer