I got asked this question today and the answer seems to divide opinion among my team.
Scenario

You have multiple publishers sending events (messages) to RabbitMQ (via EasyNetQ) about certain topics. They say FIFO delivery is guaranteed. They want to architect a system that guarantees "processing" of messages for a "topic" in order.
My solution

Have a cache that holds a "version number" per topic, and hold processing of a message if its sequence doesn't match. You can retry processing the message (with a time-delayed retry) once the preceding event's processing completes and updates the cache to the new target version. This means one consumer is effectively waiting on another consumer to finish. Since it's essentially a lock, it works when processing takes on the order of milliseconds, not seconds.
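To make that concrete, here's a minimal sketch of what I have in mind (Python, with invented names like `try_handle`; a real version would use a shared cache such as Redis rather than an in-process dict, since consumers run in separate processes):

```python
import threading

# Hypothetical in-memory stand-in for the shared cache.
topic_versions = {}   # topic -> next sequence number expected
lock = threading.Lock()

def try_handle(topic, seq, payload, process):
    """Process the message only if it is the next one for its topic.

    Returns True if processed, False if the caller should schedule
    a time-delayed retry and try again later.
    """
    with lock:
        expected = topic_versions.get(topic, 0)
        if seq != expected:
            return False          # out of order: retry later
    process(payload)              # do the actual work outside the lock
    with lock:
        topic_versions[topic] = seq + 1   # unblock the next message
    return True
```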
Alternatively, I said we could implement a holding table for out-of-order events, like this: http://blog.jonathanoliver.com/cqrs-out-of-sequence-messages-and-read-models/
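The holding-table flow would look roughly like this (again a sketch with invented names; Oliver's post uses an actual database table, and I'm using a dict just to show the park-and-drain logic):

```python
held = {}        # (topic, seq) -> payload parked until its turn
next_seq = {}    # topic -> next sequence number expected

def on_message(topic, seq, payload, process):
    expected = next_seq.get(topic, 0)
    if seq != expected:
        held[(topic, seq)] = payload   # arrived early: park it
        return
    process(payload)
    next_seq[topic] = seq + 1
    # Drain any successors that arrived early and were parked.
    while (topic, next_seq[topic]) in held:
        process(held.pop((topic, next_seq[topic])))
        next_seq[topic] += 1
```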
The person who raised the question said both those answers were incorrect.
The solution they proposed was to use a routing key and a direct exchange so that a given topic always goes to the same consumer: a kind of sticky load-balancing system. I pointed out that this limits the on-demand scalability of the system, as exchanges/bindings would need to be updated depending on the number of consumers up at any point in time.
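For reference, the topology they were describing looks something like this (a pika sketch; the exchange, queue, and topic names are all invented). The scalability issue I raised is visible in the bindings: adding a consumer means redistributing topic-to-queue bindings by hand:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# One direct exchange; each topic is a routing key pinned to exactly
# one consumer's queue, so all messages for a topic stay in FIFO order.
channel.exchange_declare(exchange="events", exchange_type="direct")

for queue, topics in {
    "consumer-1": ["orders", "payments"],
    "consumer-2": ["shipping"],
}.items():
    channel.queue_declare(queue=queue, durable=True)
    for topic in topics:
        # Bringing up consumer-3 later means re-partitioning these
        # bindings, which is the on-demand-scalability limitation.
        channel.queue_bind(queue=queue, exchange="events", routing_key=topic)

connection.close()
```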
I would really like to get the opinion of someone who's implemented this pattern before. Is there a right and a wrong solution here, or is it a case of choosing the right strategy based on processing delays, scalability, etc.?
EDIT: To clarify: I'm expecting to find the pros and cons of each approach to determine which solution fits better in my context. Are there any pitfalls, etc.?