
Using RabbitMQ 3.7.16, with spring-amqp 2.2.3.RELEASE.

Multiple clients publish messages to the DataExchange topic exchange in our RabbitMQ server, each using a unique routing key. In the absence of any bindings, the exchange routes all the messages to data.queue.generic through the AE (alternate exchange).

When a certain client (client ID 1 and 2 in the diagram) publishes lots of messages, we start dedicated consumers so that the consumption of that client's messages can scale independently from the others. To achieve this, each client-consumer declares a new queue and binds it to the topic exchange with the routing key events.<clientID>.
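The per-client declaration described above could be sketched with spring-amqp as follows; the exchange and queue names (`DataExchange`, `data.queue.<clientID>`, `events.<clientID>`) are taken from the question, the class and method names are hypothetical:

```java
import org.springframework.amqp.core.AmqpAdmin;
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.amqp.core.TopicExchange;

public class ClientQueueDeclarer {

    private final AmqpAdmin amqpAdmin;

    public ClientQueueDeclarer(AmqpAdmin amqpAdmin) {
        this.amqpAdmin = amqpAdmin;
    }

    /** Declares data.queue.<clientId> and binds it to the topic exchange. */
    public void declareClientQueue(String clientId) {
        TopicExchange exchange = new TopicExchange("DataExchange");
        Queue queue = QueueBuilder.durable("data.queue." + clientId).build();
        Binding binding = BindingBuilder.bind(queue)
                .to(exchange)
                .with("events." + clientId);
        amqpAdmin.declareQueue(queue);
        amqpAdmin.declareBinding(binding);
    }
}
```

While the queue is bound, messages with routing key `events.<clientID>` are delivered to it instead of falling through the alternate exchange.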

So scaling up is covered and works well.

Now when the messages rate for this client goes down, we would like to also scale down its consumers, up to the point of removing all of them. The intention is to then have all those messages being routed to the GenericExchange, where there's a pool of generic consumers taking care of them.

The problem is that if I delete data.queue.2 (in order to remove its binding which will lead to new messages being routed to the GenericExchange) all its pending messages will be lost.

Here's a simplified architecture view:

[architecture diagram: clients publish to the DataExchange topic exchange; per-client queues data.queue.<clientID> are bound with events.<clientID>; unrouted messages flow through the AE to data.queue.generic]

It would be an acceptable solution to let the messages expire with a TTL in the client queue, and then dead letter them to the generic exchange, but then I also need to stop the topic exchange from routing new messages to this "dying" queue.
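The TTL + dead-letter setup considered here can be expressed as queue arguments at declaration time; this is a sketch with an assumed 30-second TTL, using the names from the question (`data.queue.2`, and the AE as dead-letter target):

```java
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;

public class ExpiringClientQueue {

    /** Builds a client queue whose expired messages are dead-lettered to the AE. */
    public static Queue build(String clientId, String alternateExchange) {
        return QueueBuilder.durable("data.queue." + clientId)
                .ttl(30_000)                           // x-message-ttl: messages expire after 30s
                .deadLetterExchange(alternateExchange) // x-dead-letter-exchange: reroute expired messages
                .build();
    }
}
```

Note that queue arguments are fixed at declaration time, so the TTL must be set when the queue is first created, not when it is being drained.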

So what options do I have to stop the topic exchange from routing messages to the client queue where now there's no consumer connected to it?

Or to explore another path - how to dead letter messages in a deleted/expired queue?

Bogdan Minciu
    I have a very similar question: https://stackoverflow.com/questions/59259227/rabbitmq-how-to-dead-letter-process-messages-in-expired-queues – GreenSaguaro Feb 11 '20 at 23:53

1 Answer


If the client queue is the only one with a matching binding as your explanation seems to suggest, you can just remove the binding between the exchange and the queue.
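Removing the binding can be done from the application via `AmqpAdmin`; a minimal sketch, assuming the exchange, queue, and routing-key names from the question:

```java
import org.springframework.amqp.core.AmqpAdmin;
import org.springframework.amqp.core.Binding;

public class BindingRemover {

    /** Unbinds the per-client queue so new messages fall through to the AE. */
    public static void unbindClient(AmqpAdmin amqpAdmin, String clientId) {
        Binding binding = new Binding(
                "data.queue." + clientId,          // destination queue
                Binding.DestinationType.QUEUE,
                "DataExchange",                    // source exchange
                "events." + clientId,              // routing key
                null);                             // no binding arguments
        amqpAdmin.removeBinding(binding);
    }
}
```

The queue itself survives the unbind, so its pending messages are untouched; only the flow of new messages changes.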

From then on, all new messages for the client will go through the alternate exchange, your "generic exchange", to be processed by your generic consumers.

As for the messages left over in the client queue, you could use a shovel to send them back to the topic exchange, for them to be routed to the generic exchange.
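A dynamic shovel for this drain could be configured on the broker, for example (a config sketch; the shovel name is hypothetical, the URIs and names would need adjusting to your setup):

```shell
rabbitmqctl set_parameter shovel drain-client-2 \
  '{"src-uri": "amqp://",  "src-queue": "data.queue.2",
    "dest-uri": "amqp://", "dest-exchange": "DataExchange",
    "src-delete-after": "queue-length"}'
```

With `src-delete-after` set to `queue-length`, the shovel stops itself once it has transferred the messages that were in the queue when it started.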

This is based on the assumption that the alternate exchange is internal. If it's not internal, you can target it directly with the shovel.

As discussed with Bogdan, another option to resolve this while ensuring no messages are lost is to perform multiple steps:

  • remove the binding between the specific queue and the exchange
  • have some logic to have the remaining messages be either consumed or rerouted to the generic queue
    • if the binding removal occurs prior to the consumer(s) disconnect, have the last consumer disconnect only once the queue is empty
    • if the binding removal occurs after the last consumer disconnect, then have a TTL on messages with alternate exchange as the generic exchange
  • depending on the options selected before, have some cleanup mechanism to remove the lingering empty queues
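The "drain before disconnect" variant of the steps above could be sketched like this: after the binding is removed, poll the queue's message count and stop the last listener container only once the backlog is empty (the method name and polling interval are assumptions):

```java
import java.util.Properties;

import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

public class ClientQueueDrainer {

    /** Waits until the queue is empty, then stops the consumer and removes the queue. */
    public static void drainAndStop(RabbitAdmin admin,
                                    SimpleMessageListenerContainer container,
                                    String queueName) throws InterruptedException {
        while (true) {
            Properties props = admin.getQueueProperties(queueName);
            Integer count = props == null
                    ? Integer.valueOf(0)
                    : (Integer) props.get(RabbitAdmin.QUEUE_MESSAGE_COUNT);
            if (count == null || count == 0) {
                break;                // backlog consumed, safe to disconnect
            }
            Thread.sleep(1_000);      // poll until the queue reports empty
        }
        container.stop();
        admin.deleteQueue(queueName); // cleanup of the lingering empty queue
    }
}
```

This only works if the binding has already been removed; otherwise new messages keep arriving and the queue may never report empty.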
Olivier
  • Thanks for your answer. Yes, the client queue is the only one with a matching binding. However, the binding is created dynamically when the first consumer connects. So it should also be removed dynamically, from the code, a few seconds after the last consumer disconnects. Is there an event I could listen to, in order to trigger the mechanism you suggest? – Bogdan Minciu Feb 12 '20 at 10:34
  • Could you clarify a bit more your architecture, specifically how many consumers you could end up with on a single queue? You seemed to indicate single consumer, which could allow for a different approach. – Olivier Feb 12 '20 at 14:06
  • I've added an architecture sketch. We're expecting all the queues to have multiple consumers at some point. The more load we notice on a certain `clientID`, the more consumers will be started to listen on that queue. The consumers are orchestrated by Kubernetes, so there's no control on their number, at least not from the broker's perspective. – Bogdan Minciu Feb 12 '20 at 20:44
  • Thanks for all the clarification, but so what you'd need is for the binding to be deleted *before* the last consumer is removed, no? With it deleted before, you'd have the last consumer consume any of the leftover messages, and could even consider setting up the auto-delete function on the queue? Or is there no control on the logic of consumer deletion that could ensure the queue is empty prior to disconnect? – Olivier Feb 14 '20 at 12:39
  • Happy you say that - it's exactly how I've solved this in the end. When the number of consumers are scaled down to zero, we're doing an API call to the broker to remove the binding. The messages in the queue have a TTL, and their DLX is the AE mentioned before - so there's no need to shovel them, they will simply die and be rerouted to the generic queue. – Bogdan Minciu Feb 14 '20 at 21:01
  • If you slightly update your answer to reflect the recent details, I'll accept it. This is just to avoid confusing future readers, your suggestions were nevertheless helpful! – Bogdan Minciu Feb 14 '20 at 21:03
  • Hi @BogdanMinciu, updated the answer. Another option would be for you to provide your own answer, as you took the time to sketch your architecture and discussion was only the basis for the solution you selected. Cheers. – Olivier Feb 17 '20 at 08:11