Event replay is easy to handle within the aggregate pattern because applying events does not start new transactions; it only rehydrates state.
It's important that the aggregate constructor contains only event appliers when the aggregate is instantiated from an ordered list of events.
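A minimal sketch of that idea (the `BankAccount` name and tuple-shaped events are hypothetical): the constructor only applies events, so replaying the stream rehydrates state without triggering any side effects.

```python
class BankAccount:
    """Aggregate rehydrated from an ordered list of events."""

    def __init__(self, events):
        self.balance = 0
        for event in events:       # constructor contains only event appliers
            self._apply(event)     # applying never starts a new transaction

    def _apply(self, event):
        kind, amount = event
        if kind == "Deposited":
            self.balance += amount
        elif kind == "Withdrawn":
            self.balance -= amount

    def deposit(self, amount):
        """Command path: produce a new event and apply it."""
        event = ("Deposited", amount)
        self._apply(event)
        return event
```

Replaying `[("Deposited", 100), ("Withdrawn", 30)]` yields a balance of 70, no matter how many times the stream is re-read.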
That's pretty much event sourcing. But there are potential problems when you expand this into event-driven architecture (EDA), where an entity/aggregate/microservice/module reacts to an event by initiating another transaction.
In your example, entity A produces event A. Entity B reacts to event A by sending a new command, i.e. starting a new transaction that ends up producing event B.
So right now the event store has event A and event B.
How do you ensure that a replay, or a new read of that stream (or of all streams), doesn't cause write amplification? As soon as the handler of event A reads the event, it can't tell whether this is the first time it has handled it (in which case it must initiate the next transaction, command B --> event B), or whether it's a replay and it should do nothing, because the transaction already happened and event B is already in the stream.
I'm assuming this is your concern, and it's a big one if reacting to the event implies making a payment, for example. We wouldn't want to make a new payment each time event A is handled.
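To make the hazard concrete, here is a hypothetical sketch of a naive handler with a side effect: replaying the stream triggers the payment a second time.

```python
payments = []  # stand-in for an external payment service

def naive_handler(event):
    # Side effect on every delivery -- not replay-safe.
    if event["type"] == "OrderApproved":
        payments.append(event["order_id"])

stream = [{"type": "OrderApproved", "order_id": "42"}]

for e in stream:        # first delivery: pays once, as intended
    naive_handler(e)
for e in stream:        # replay of the same stream: pays again
    naive_handler(e)

# payments now contains "42" twice -- the duplicate payment described above
```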
There are a few options:
- Never replay events in systems that react to events by creating new transactions. Only replay events for aggregate instantiation (event sourcing), which uses events just to rehydrate state, or for projections/read models that are idempotent, or when the projections are being recreated (because the DB was dropped, for example).
- Alternatively, react to event A by appending a command B to a "command stream" (i.e., a queue) and have the command handler receive it asynchronously and create the transaction that produces event B. This way you can rely on the event store's duplicate check and prevent the append of a command if it already exists. The scenario would look like this:
A. Transaction A produces event A which is appended to an event store stream
B. Event Handler A reacts to event A and adds a command B to a command stream
C. Command handler B receives the command B and executes transaction that produces an event B appended to the stream.
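Steps A-C can be sketched in a few lines. Assumptions (not from the original): commands carry a deterministic ID derived from the event they react to, and the command stream rejects duplicate IDs, mimicking an event store's duplicate check.

```python
event_stream = []
command_stream = []
seen_command_ids = set()   # stand-in for the store's duplicate detection

def append_command(command):
    """Idempotent append: a replayed event yields the same command ID."""
    if command["id"] in seen_command_ids:
        return False                       # duplicate -> dropped
    seen_command_ids.add(command["id"])
    command_stream.append(command)
    return True

def on_event_a(event):                     # step B: event handler A
    append_command({"id": f"cmd-B-{event['id']}", "type": "CommandB"})

def command_handler_b():                   # step C: command handler B
    while command_stream:
        cmd = command_stream.pop(0)
        event_stream.append({"id": f"evt-B-{cmd['id']}", "type": "EventB"})

event_a = {"id": "a1", "type": "EventA"}
event_stream.append(event_a)               # step A
on_event_a(event_a)                        # first delivery
on_event_a(event_a)                        # replay: duplicate dropped
command_handler_b()
# event_stream holds exactly one EventB despite the replayed EventA
```

The key design choice is that the command ID is derived from the triggering event, so a replay can never mint a "fresh" duplicate.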
The first time around, this works as expected.
If the projections that use event A and event B to write a read model in the DB replay events, all is good: they read event A and then event B.
If the "reactive" event handlers receive event A again, they attempt to append command B to the command stream. The event/command store detects that command B is a duplicate (optimistic concurrency control using some versioning) and doesn't add it, so command handler B never gets the old command again.
It's important to note that processed commands should advance a checkpoint that is never deleted, so that commands are never, ever replayed. That's the key.
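A sketch of that checkpoint, under the assumption that the checkpoint is a durable stream position persisted after each command: anything at or below it is skipped, so re-reading the command stream from the start executes nothing twice.

```python
checkpoint = -1   # stand-in for a persisted position (never deleted)
executed = []

def process_commands(commands):
    """Execute only commands past the durable checkpoint."""
    global checkpoint
    for position, cmd in enumerate(commands):
        if position <= checkpoint:
            continue                 # already processed: skip on re-read
        executed.append(cmd)         # the actual transaction would go here
        checkpoint = position        # persist the new position

commands = ["cmd-1", "cmd-2"]
process_commands(commands)           # first pass executes both
process_commands(commands)           # re-read executes nothing new
```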
There are probably other mechanisms out there as well.