Let's say I'm implementing Domain-Driven Design using C# and Entity Framework.

My code is structured such that each aggregate has its own DbContext in EF, to respect the principle of transactional boundaries around my aggregates.

Aggregate 1, InventoryAggregate, and Aggregate 2, OrderAggregate, are being updated by some business process, AddItemToOrder.

After OrderAggregate adds the item, it fires a domain event, ItemAddedToOrder, which is handled by InventoryAggregate, which then performs some business process, SubtractQuantityFromInventory.

InventoryAggregate fails to subtract the inventory and it fires a domain event, NotEnoughInventory, listened to by OrderAggregate.

OrderAggregate then attempts to remove the item from the order but fails.

Now there is an item in the order that should not be there, because we don't actually have enough inventory to sell it.

How should this be handled?

drizzie

1 Answer


What you are describing is a Process Manager. You probably need some sort of OrderProcess or QuoteProcess AR that handles the state for your process. If you need to perform some business validation, such as checking inventory, first, then you need a process manager so that you only create the actual Order once you have established that it can indeed be submitted.
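A minimal sketch of what such a process manager might look like. All names here (OrderProcess, OrderProcessState, and the event-handler methods) are hypothetical, not part of the question's code; the point is that items and inventory reservations are tracked on the process, and the real Order is only created once the process reaches a submittable state:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public enum OrderProcessState
{
    Started,
    InventoryReserved,
    InventoryRejected
}

public class OrderProcess
{
    public Guid Id { get; }
    public OrderProcessState State { get; private set; }

    // Items accumulate on the process, not on an Order aggregate.
    private readonly List<(Guid ProductId, int Quantity)> _items = new();

    public OrderProcess(Guid id)
    {
        Id = id;
        State = OrderProcessState.Started;
    }

    public void AddItem(Guid productId, int quantity) =>
        _items.Add((productId, quantity));

    // Invoked when the inventory aggregate confirms the reservation.
    public void OnInventoryReserved() =>
        State = OrderProcessState.InventoryReserved;

    // Invoked when a NotEnoughInventory event arrives. There is nothing
    // to compensate, because no Order aggregate exists yet; the process
    // simply drops the item (or could offer a back order instead).
    public void OnInventoryRejected(Guid productId)
    {
        _items.RemoveAll(i => i.ProductId == productId);
        State = OrderProcessState.InventoryRejected;
    }

    // Only once every reservation succeeded is the real Order created.
    public bool CanSubmit =>
        State == OrderProcessState.InventoryReserved && _items.Any();
}
```

With this shape, the failure scenario in the question cannot arise: the Order is never instantiated with an item that inventory has already rejected.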

The rules around what to do with certain items may not be as simple as removing the item, even though, in your case, that may be what you need to do. You may need to present the data to the user with one or more options: one may be to remove the item before submitting the order, while another may be to place it on back order.

Eben Roux
  • It could be that I'm trying to solve an unrealistic problem, but I'm thinking this exchange is happening after validation has passed. Perhaps OrderAggregate fails because of a network failure, or a disk write issue. Under transactional consistency, the entire transaction would roll back, but with eventual consistency between aggregates, we have to assume that the next aggregate will become consistent. – drizzie Mar 19 '16 at 18:32
  • 1
    Your problem probably is not all that unrealistic. One could approach it in different ways, though. For instance, if you are sure that a particular message "sent" from one AR to another is 100% correct and the only reason it may fail is technical *and* you have a guaranteed message delivery mechanism then you can safely **assume** that your other aggregate will eventually become consistent. However, it the message processing can fail due to a business condition and you need to compensate somehow then you are either going to need a process or contaminate you other AR with process semantics. – Eben Roux Mar 20 '16 at 05:08
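The distinction drawn in the last comment, between technical failures (retry until the aggregate eventually becomes consistent) and business failures (compensate via a process), can be illustrated for the technical case with a simple retry helper. This is a generic sketch, not code from the question; the helper name and retry policy are made up:

```csharp
using System;
using System.Threading;

public static class Retry
{
    // Re-invokes the action until it succeeds or maxAttempts is reached,
    // backing off a little longer after each failure. This is only
    // appropriate for *technical* failures (network, disk); a business
    // failure such as NotEnoughInventory will never succeed on retry and
    // must be handled by the process manager instead.
    public static void UntilSuccess(Action action, int maxAttempts = 5)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                action();
                return;
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                Thread.Sleep(TimeSpan.FromMilliseconds(100 * attempt));
            }
        }
    }
}
```

Combined with durable, at-least-once message delivery, a retry policy like this is what lets you "safely assume" the downstream aggregate becomes consistent, as the comment suggests.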