
Because NServiceBus doesn't seem to support adding a priority mechanism to the message queue, I want to implement this myself.

  • Command handler (Producer):
public void Handle(DoAnAction message)
{
  // Hand the message off to the custom in-memory manager instead of processing it inline.
  _messageManager.Insert(message);
}
  • Single Consumer (different thread):
public void Run()
{
  DoAnAction message;
  while (true)
  {
    if (_messageManager.TryDequeue(out message))
    {
      doALongCall(message);
    }
    else
    {
      // Nothing queued: back off briefly before polling again.
      Thread.Sleep(200);
    }
  }
}

Is this even a good idea to begin with? I don't like the idea that I can lose messages this way.

Update: The use case: we have many clients who can send a DoAnAction message. Handling this action takes a while. The problem is that when one client sends 200 DoAnActions, all other clients have to wait 2-3 hours until all of those messages are processed (FIFO). Instead, I would like to process the messages in an order based on the client.

So even if client A still has 200 messages to process, when client B sends a message it will get queued next. If client B would send 3 messages, the queue would look like this: B-A, B-A, B-A, A, A, A, ...
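The interleaving described above can be sketched as a round-robin over per-client FIFO queues: one queue per client, rotating between clients on each dequeue, so a burst from one client cannot starve the others. This is only an illustrative in-memory sketch (written in Java here, since no NServiceBus-specific API is involved); the `FairQueue` name and `String` client IDs are my own assumptions.

```java
import java.util.ArrayDeque;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;

// Sketch of a "fair" queue: messages are grouped per client, and clients
// take turns, so the overall order interleaves like B-A, B-A, B-A, A, A, A.
class FairQueue<T> {
    // LinkedHashMap preserves the rotation order of the clients.
    private final Map<String, Queue<T>> perClient = new LinkedHashMap<>();

    public synchronized void insert(String clientId, T message) {
        perClient.computeIfAbsent(clientId, k -> new ArrayDeque<>()).add(message);
    }

    // Dequeues one message from the client at the front of the rotation,
    // then moves that client to the back. Returns null when empty.
    public synchronized T tryDequeue() {
        var it = perClient.entrySet().iterator();
        if (!it.hasNext()) return null;
        var entry = it.next();
        String clientId = entry.getKey();
        Queue<T> queue = entry.getValue();
        it.remove();                        // rotate: drop the client from the front...
        T message = queue.poll();
        if (!queue.isEmpty()) {
            perClient.put(clientId, queue); // ...and re-append it if it still has messages
        }
        return message;
    }
}
```

With 200 messages from A already queued, a single message from B would be dequeued second rather than 201st.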

andy
  • You can use [handler ordering](https://docs.particular.net/nservicebus/handlers/handler-ordering). – Dmytro Mukalov Apr 09 '19 at 12:00
  • @DmytroMukalov It's not the handler order that is important here, but the order in which (same-type) messages get processed. I've edited the question. – andy Apr 09 '19 at 12:30
  • In that case you definitely need some intermediate storage (with random access) for the messages in order to define your custom fairness strategy. – Dmytro Mukalov Apr 10 '19 at 09:40
  • @DmytroMukalov that's how I'm implementing it right now. There's no persistent storage for that intermediate store (a List and an active Queue), however that might still be added at a later stage. – andy Apr 11 '19 at 06:31

2 Answers


Oftentimes, I have found that the real reason to do this comes from the business. In that case, it makes sense to model it in the code so that it reflects the business rules and regulations.

Imagine you have a SendEmail message but you want to 'prioritize' some of the messages based on the customer they are intended for. You can design your system so that you have two message types: the regular SendEmail and a SendPriorityEmail that goes to a different endpoint/queue. Your code then needs to determine which message to send.
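The send-time decision this describes can be sketched roughly as follows (in Java for illustration; the `Bus` interface, the message records, and the `priorityCustomers` lookup are hypothetical stand-ins, not NServiceBus API):

```java
// Sketch: route to a different message type (and hence a different
// endpoint/queue) based on which customer the email is for.
interface Bus { void send(Object message); }

record SendEmail(String customerId, String body) {}
record SendPriorityEmail(String customerId, String body) {}

class EmailDispatcher {
    private final Bus bus;
    private final java.util.Set<String> priorityCustomers; // e.g. loaded from config

    EmailDispatcher(Bus bus, java.util.Set<String> priorityCustomers) {
        this.bus = bus;
        this.priorityCustomers = priorityCustomers;
    }

    // The producer decides at send time which message type to emit.
    void dispatch(String customerId, String body) {
        if (priorityCustomers.contains(customerId)) {
            bus.send(new SendPriorityEmail(customerId, body));
        } else {
            bus.send(new SendEmail(customerId, body));
        }
    }
}
```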

Separating the messages at the root gives you more flexibility (which also comes from the business) and comes in useful for monitoring, SLAs, and quality-of-service concerns for the more important customers (in this case).

Hadi Eskandari
  • I agree that the model should reflect all business rules, but I think having two queues will not reflect this. There is no type of message that gets priority; the priority is determined by the current backlog of messages, if you understand. Every client is treated equally, very 2019 ;) – andy Apr 10 '19 at 07:45
  • Not sure what you mean by the priority getting determined by the backlog of messages, could you elaborate? – Hadi Eskandari Apr 11 '19 at 09:18
  • I made an update to my question. Basically I want to solve the situation where 200 messages from client A are waiting to be handled on the bus, and client B/C/D also start adding messages. I don't want to process all messages from client A first, I want to process: "ABCD ABCD ... " – andy Apr 11 '19 at 12:19
  • A priority queue is not supported by most of the underlying technology, and due to the nature of the 'queue' in most transports, it is not possible. You might want to read up on this issue, which is about the same problem (although around the RabbitMQ transport, which has some level of support) – Hadi Eskandari Apr 13 '19 at 01:30

You can use Sagas to do what you are looking for, essentially having one saga instance per client ID. The saga serves as a "choke point" and can make sure only N messages at a time are being processed for each client.

This might reduce the throughput below maximum capacity under lower levels of load, but could result in the more "fair" distribution you're trying to achieve.

Does that make sense?
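The saga's "choke point" behaviour, stripped down to its core, is a per-client in-flight limit. A minimal sketch of just that bookkeeping (plain Java for illustration, not the NServiceBus saga API; `PerClientThrottle` and its method names are my own):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: allow at most maxInFlight messages per client to be processed
// at once; anything over the limit would stay parked until a slot frees up.
class PerClientThrottle {
    private final int maxInFlight;
    private final Map<String, AtomicInteger> inFlight = new ConcurrentHashMap<>();

    PerClientThrottle(int maxInFlight) { this.maxInFlight = maxInFlight; }

    // Returns true if this client may start processing another message now.
    boolean tryAcquire(String clientId) {
        AtomicInteger count = inFlight.computeIfAbsent(clientId, k -> new AtomicInteger());
        if (count.incrementAndGet() <= maxInFlight) return true;
        count.decrementAndGet(); // over the limit: park the message instead
        return false;
    }

    // Called when a message for this client finishes processing.
    void release(String clientId) {
        inFlight.getOrDefault(clientId, new AtomicInteger()).decrementAndGet();
    }
}
```

With `maxInFlight` of 1, client A's burst occupies only one processing slot, leaving capacity for B, C, and D, which is the fairness described in the question.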

Udi Dahan
Sean Farmar
  • I understand Sagas. However, I don't see how this would be a good fit; could you elaborate? I've updated the question with some more information. – andy Apr 10 '19 at 07:35
  • I'd be happy to get on a call and discuss, email me at sean.farmar@particular.net – Sean Farmar Apr 10 '19 at 13:25
  • Added more detail about using sagas - hope this helps. – Udi Dahan Apr 13 '19 at 17:37
  • Thanks @UdiDahan, it's a quick win indeed, but the mentioned disadvantage under low levels of load was not acceptable in this case. – andy Apr 16 '19 at 14:39
  • @andy Because it's a "quick win", I suggest you do a quick spike of this approach before you dismiss it on grounds of performance. I don't think you'd have any significant user impact at low load. – Udi Dahan Apr 17 '19 at 09:19