
NEventStore: 5.1
Simple setup: a WebApp (ASP.NET 4.5) as the command side

I'm searching for the "right" way not to lose commands, with an eye on sagas/process managers, which might otherwise wait endlessly for an event produced by a command that was actually never handled.

Old: Dispatchers

I initially used sync commands, but with an eye on sagas/process managers I thought it would be safer to first store them and then fetch them through a SyncDispatcher (or AsyncDispatcher). Otherwise, and that's my concern, if a saga tried to send a command and the command didn't finish due to an app crash/power loss/..., it would be lost and no one would know.

So I created a command stream and appended each command to it. The IsDispatched flag showed whether a command had already been handled.
That worked.
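Roughly, appending a command to that stream can look like the following (a minimal sketch against the NEventStore 5.x stream API; CommandStore and the "commands" stream id are placeholder names of mine):

```csharp
using System;
using NEventStore;

// Sketch: append each incoming command to one dedicated "commands" stream.
// CommandStore and the stream id are placeholder names.
public class CommandStore
{
    private readonly IStoreEvents _store;

    public CommandStore(IStoreEvents store)
    {
        _store = store;
    }

    public void Append(object command, Guid commandId)
    {
        using (var stream = _store.OpenStream("commands", 0, int.MaxValue))
        {
            stream.Add(new EventMessage { Body = command });
            stream.CommitChanges(commandId); // the commit id doubles as the command id
        }
    }
}
```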

PollingClient and Command-Stream

Now that the dispatchers are obsolete, I switched to PollingClient. What I lost is the Dispatched information.

A startup issue arose:
I naively started polling forward from the current latest checkpoint, but when the application restarted there was a chance that commands had been stored but not executed before the crash and were therefore lost (that actually happened).
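For context, the polling setup looks roughly like the sketch below (NEventStore 5.x client API as far as I understand it; Dispatch, GetLastCheckpointToken and SaveCheckpointToken are placeholders for my own plumbing):

```csharp
using System;
using NEventStore;
using NEventStore.Client;

// Sketch: poll commits from a stored checkpoint and push the contained
// commands into the handler pipeline. Dispatch, GetLastCheckpointToken and
// SaveCheckpointToken are placeholders for my own plumbing.
public static class CommandPoller
{
    public static IObserveCommits Start(IStoreEvents store)
    {
        var client = new PollingClient(store.Advanced, 1000);

        // Starting naively from the *latest* checkpoint is exactly what loses
        // commands that were stored but not yet executed before a crash.
        string startingCheckpoint = GetLastCheckpointToken();

        IObserveCommits observer = client.ObserveFrom(startingCheckpoint);
        observer.Subscribe(commit =>
        {
            foreach (var message in commit.Events)
            {
                Dispatch(message.Body);                  // hand the command to its handler
            }
            SaveCheckpointToken(commit.CheckpointToken); // remember how far we got
        });
        observer.Start();
        return observer;
    }

    private static string GetLastCheckpointToken() { return null; /* placeholder */ }
    private static void SaveCheckpointToken(string token) { /* placeholder */ }
    private static void Dispatch(object command) { /* placeholder */ }
}
```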

I just came across the idea:
store the basic outcome of commands as (non-domain) events in another stream.
This stream would contain CommandSucceeded and CommandFailed events.
Whenever the application starts, the latest command-id or command-checkpoint-number is extracted and used to load the commands right after that one...
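A rough sketch of that idea (CommandSucceeded/CommandFailed are plain classes; the "command-results" stream id and the helper names are placeholders):

```csharp
using System;
using System.Linq;
using NEventStore;

// Non-domain outcome events, kept in their own "command-results" stream.
// All names here are placeholders for the idea, not existing types.
public class CommandSucceeded { public Guid CommandId { get; set; } }
public class CommandFailed    { public Guid CommandId { get; set; } public string Reason { get; set; } }

public class CommandOutcomeStore
{
    private readonly IStoreEvents _store;

    public CommandOutcomeStore(IStoreEvents store)
    {
        _store = store;
    }

    public void Record(object outcome)
    {
        using (var stream = _store.OpenStream("command-results", 0, int.MaxValue))
        {
            stream.Add(new EventMessage { Body = outcome });
            stream.CommitChanges(Guid.NewGuid());
        }
    }

    // On startup: find the command that was handled last, so polling can
    // resume with the first command *after* it instead of the latest checkpoint.
    public Guid? GetLastHandledCommandId()
    {
        using (var stream = _store.OpenStream("command-results", 0, int.MaxValue))
        {
            var last = stream.CommittedEvents.LastOrDefault();
            if (last == null)
                return null;

            var succeeded = last.Body as CommandSucceeded;
            if (succeeded != null)
                return succeeded.CommandId;

            return ((CommandFailed)last.Body).CommandId;
        }
    }
}
```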

Questions

  • Is my concern wrong that synchronous command handling carries the danger of losing a saga-generated command? If so, why?
  • Is this generally a good idea: one big command stream?
  • Is this generally a good idea: store generic command-outcome-events in a stream?
David Rettenbacher

2 Answers


You can:

  1. Store your command in a command queue | persistent log
  2. Use command id (guid) as Commit Id on NEventStore
  3. Mark your command as executed in your Command Handler | Pipeline Hook | Polling Client

NEventStore gives you idempotency on the same AggregateId (stream id) + CommitId, so if your app crashes before the command is marked as processed and you replay your command, the resulting commits are automatically discarded by NES.
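Roughly, in code (a sketch of the idea only; ShipOrder/OrderShipped are placeholder types, the important part is passing the command id into CommitChanges):

```csharp
using System;
using NEventStore;

// Sketch: the command id becomes the CommitId. Replaying the same command
// against the same stream then produces a duplicate commit, which NES
// detects and discards (as described above). ShipOrder/OrderShipped are
// placeholder types.
public class ShipOrderHandler
{
    private readonly IStoreEvents _store;

    public ShipOrderHandler(IStoreEvents store)
    {
        _store = store;
    }

    public void Handle(ShipOrder command)
    {
        using (var stream = _store.OpenStream(command.OrderId, 0, int.MaxValue))
        {
            stream.Add(new EventMessage { Body = new OrderShipped { OrderId = command.OrderId } });
            stream.CommitChanges(command.CommandId); // command id == commit id
        }

        MarkAsExecuted(command.CommandId); // placeholder: command queue / log bookkeeping
    }

    private static void MarkAsExecuted(Guid commandId) { /* placeholder */ }
}

public class ShipOrder    { public Guid CommandId { get; set; } public Guid OrderId { get; set; } }
public class OrderShipped { public Guid OrderId { get; set; } }
```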

Andrea Balducci
  • Thanks, that would work if I had followed the "one command - one aggregate being changed" principle. Unfortunately I have to confess: I didn't. So multiple commits would share the same commit-id (= command-id) - and therefore only the first commit would win... – David Rettenbacher Feb 16 '15 at 15:40
  • mmm.. please try. Idempotency is handled at the stream level, so even if you have multiple aggregates handled per single commit-id it should work. It's important to have all the aggregate ids on the command side. – Andrea Balducci Feb 16 '15 at 15:46
  • Oh "per *stream* -level"... that is good... Special to my situation: I still have a problem as I store multiple aggregate-variants ("views of the same thing", i.e. "OrderableResource" and "StoreableResource" in 1 stream) in one aggregate-stream but I'm not committing everything in one commit but I *do* pass on the command-id to know what command initially caused the event... But this could be "easily" fixed by using this command-persistence consequently. – David Rettenbacher Feb 16 '15 at 15:57
  • @ initial questions: Any opinion on question 2 and 3? (I basically use NEventStore also as a message-bus) – David Rettenbacher Feb 16 '15 at 16:09
  • @Warappa you shouldn't use NEventStore as a message bus. Use a real bus with durability support – MikeSW Feb 16 '15 at 17:32
  • @Warappa you could, but in the end you'll miss some features like deferred messages, routing, automatic retries. A command queue is good for debugging purposes, but you need to have only one worker, otherwise the sequence is not guaranteed in replay (unless your worker can rebuild the execution order from the checkpoint + commandid). – Andrea Balducci Feb 17 '15 at 10:47

Afaik NEventStore is meant to be the storage for event sourcing, i.e. storing domain objects as a stream of events. Commands and sagas have nothing to do with it. It's your service bus which should take care of durability and saga management.

Personally, I treat the event store simply as a repository detail. The application service (command handler) will dispatch the generated events, after they've been persisted.

If the app crashes and the service bus is durable (not a memory one) then the event/command will be handled again automatically, because the service bus should detect if a message wasn't successfully handled. Of course, your message handlers should be idempotent for that reason.
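For example, a handler can be made idempotent by remembering which message ids it has already processed (a sketch only; the in-memory set stands in for whatever duplicate-tracking store you actually use):

```csharp
using System;
using System.Collections.Concurrent;

// Sketch of an idempotent handler: if the durable bus redelivers a message
// after a crash, a message id that was already processed is simply skipped.
// The in-memory dictionary is a placeholder; a real implementation would persist it.
public class IdempotentHandler<TMessage>
{
    private readonly ConcurrentDictionary<Guid, bool> _processed = new ConcurrentDictionary<Guid, bool>();
    private readonly Action<TMessage> _handle;

    public IdempotentHandler(Action<TMessage> handle)
    {
        _handle = handle;
    }

    public void Handle(Guid messageId, TMessage message)
    {
        if (!_processed.TryAdd(messageId, true))
        {
            return; // already handled once; the redelivery is ignored
        }
        _handle(message);
    }
}
```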

MikeSW
  • I currently use NEventStore as a normal event store *and* for storing command messages. I heard about the idea somewhere and it has seemed good to me so far. By also storing commands I have the option to actually replay them on my dev system. With the mentioned idea of having a command-result stream I can also get the error reasons of failed commands (exception messages, ...). So if I ignore commands at replay and have a way to know about the last executed command, I have a message queue where I can catch up just after the last executed command (which I know of thanks to the command-result stream) – David Rettenbacher Feb 16 '15 at 16:34
  • When the saga is implemented in the same way as the aggregate, ie it saves its state to an event stream, wouldn't it be practical to use the same persistence framework? I thought the `bucketId` was meant for that? – Thomas Eyde Aug 14 '15 at 11:18