
I have a WCF service which includes an operation marked with [OperationBehavior(TransactionScopeRequired = true)] and [TransactionFlow(TransactionFlowOption.Allowed)].

The client of this service is also part of the transaction (it has a database of its own; this is a simplified example), so this involves distributed transactions and two-phase commit.

In order for my database operations to support two-phase commit, I've implemented the IEnlistmentNotification interface.
In the Prepare phase I write the data to the DB with a transaction tag on it, and in the Commit phase I remove the transaction tag from the data. Note that the Commit phase itself involves database access, so it may be a bit slow.
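
For reference, here is a minimal sketch of what such an enlistment might look like; the Db helper and its methods are hypothetical stand-ins for the actual tagged database operations:

```csharp
using System;
using System.Transactions;

// Hypothetical data-access helper; replace with the real tagged DB calls.
static class Db
{
    public static void WriteWithTag(Guid tag) { /* INSERT ... tagged */ }
    public static void RemoveTag(Guid tag)    { /* clear the tag     */ }
    public static void DeleteByTag(Guid tag)  { /* undo the write    */ }
}

// Minimal sketch of a volatile enlistment implementing the tagging scheme
// described above.
public class TaggedDbEnlistment : IEnlistmentNotification
{
    private readonly Guid _txTag = Guid.NewGuid();

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        try
        {
            // Phase 1: write the data marked with the transaction tag.
            Db.WriteWithTag(_txTag);
            preparingEnlistment.Prepared();
        }
        catch
        {
            preparingEnlistment.ForceRollback();
        }
    }

    public void Commit(Enlistment enlistment)
    {
        // Phase 2: remove the tag so the data becomes visible.
        // The transaction manager raises this notification asynchronously,
        // which is the root of the race described below.
        Db.RemoveTag(_txTag);
        enlistment.Done();
    }

    public void Rollback(Enlistment enlistment)
    {
        Db.DeleteByTag(_txTag);
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}
```

Enlisting it would then look something like Transaction.Current.EnlistVolatile(new TaggedDbEnlistment(), EnlistmentOptions.None).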

The problem is that, from what I've observed and read, the Commit phase runs asynchronously, so, for example, the following sequential scenario may fail:

1) Transaction 1: Client inserts A
2) Transaction 2: Client inserts B which relies on A (server looks up A, extracts information from it and uses it to insert B)

Since the Commit phase of transaction 1 may not yet have finished on the server side, transaction 2 may not find A (since it's still marked with the 'transaction 1' tag).
These two transactions may occur in quick succession, so it's a race condition. A client-side sketch of the timing is shown below.
I noticed this when I enabled logging in my DB driver: the commit became slightly slower and the second transaction failed. With logging disabled it succeeded, but it's still a race condition either way, and I wouldn't rely on that in a production environment.
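
To make the timing concrete, here is a hedged sketch of the client side; IMyService and its Insert* methods are hypothetical stand-ins for the actual WCF proxy calls:

```csharp
using System.Transactions;

// Hypothetical service proxy; InsertA/InsertB stand in for the real
// WCF operations.
interface IMyService { void InsertA(); void InsertB(); }

static class RaceDemo
{
    static void RunScenario(IMyService client)
    {
        using (var scope = new TransactionScope())
        {
            client.InsertA();   // transaction 1
            scope.Complete();
        } // Dispose() drives 2PC, but the server-side Commit notification
          // (which removes A's transaction tag) may still be in flight here.

        using (var scope = new TransactionScope())
        {
            client.InsertB();   // transaction 2: may not see A yet
            scope.Complete();
        }
    }
}
```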

What would be the best way to tackle this issue?

Metheny
  • Do you have to use 2 phase commit? Maybe an approach using SAGA is better? – jason.kaisersmith Jun 21 '19 at 05:58
  • Yes, please assume 2-phase commit is required. As mentioned, this is a simplified description of the system; it's an existing system which already works with 2PC. In fact there are multiple servers which are triggered in parallel. Until now each server had a SQL Server instance (which supports 2PC out of the box), and now we are replacing some of the servers' databases with a different DB which needs this manual integration of 2PC. Those with SQL Server are in production and cannot be changed, so changing the transaction mechanism at this point is out of the question from a product point of view. – Metheny Jun 21 '19 at 06:19

1 Answer


I have faced that problem before, and we managed it with an orchestrated saga. All you need to implement it is a message queue (Apache Kafka, RabbitMQ, ...). Each client then sends a simple notification of the event it has executed, like {"event": "projectAdded", ...}.
You'll also need a coordinator which subscribes to that event and sends a new event to the next client, like {"event": "sendNotification", ...}, and that client listens for the event to start its own work.
For consistency, you can also send events like {"error": "projectAdditionFailed", ...} so that the already-executed steps can be rolled back and compensated.
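
A minimal sketch of such a coordinator, assuming a hypothetical IMessageBus abstraction over whichever broker you choose (the event names mirror the examples above):

```csharp
using System;

// IMessageBus is a hypothetical abstraction over the chosen broker
// (Kafka, RabbitMQ, ...).
public interface IMessageBus
{
    void Subscribe(string eventName, Action<string> handler);
    void Publish(string eventName, string payload);
}

public class SagaCoordinator
{
    public SagaCoordinator(IMessageBus bus)
    {
        // Step 1 succeeded -> trigger the next participant.
        bus.Subscribe("projectAdded",
            payload => bus.Publish("sendNotification", payload));

        // A step failed -> publish a compensating event so earlier
        // participants can undo their local changes.
        bus.Subscribe("projectAdditionFailed",
            payload => bus.Publish("compensateProjectAddition", payload));
    }
}
```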

mohamed-mhiri