I think I might have my Unit of Work set up wrong in my architecture. Here is what I currently have (indented to show order):
HttpRequest.Begin()
    UnitOfWork.Begin()
        Session.BeginTransaction(System.Data.IsolationLevel.ReadCommitted);
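For context, this is roughly how my UnitOfWork opens the session and the outer transaction per request (a simplified sketch; the names are illustrative, not my exact classes):

using NHibernate;

public class UnitOfWork
{
    private readonly ISessionFactory sessionFactory;

    public ISession Session { get; private set; }
    public ITransaction Transaction { get; private set; }

    public UnitOfWork(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    public void Begin()
    {
        // One session per HTTP request; the outer transaction is started here.
        Session = sessionFactory.OpenSession();
        Transaction = Session.BeginTransaction(System.Data.IsolationLevel.ReadCommitted);
    }
}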
Within the request, I call various services to perform CRUD using NHibernate. When I want to make a change to the database (update/save), I call this code:
using (var transaction = unitOfWork.Session.BeginTransaction())
{
    try
    {
        // Key and entity are generic type parameters
        ret = (Key)unitOfWork.Session.Save(entity);
        transaction.Commit();
        unitOfWork.Session.Clear();
    }
    catch
    {
        transaction.Rollback();
        unitOfWork.Session.Clear();
        unitOfWork.DiscardSession();
        throw;
    }
}
When the HttpRequest is over, I perform these steps:
UnitOfWork.Commit()
    Transaction.Commit() // This is my session's transaction from the Begin() above
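UnitOfWork.Commit() itself is roughly this (again a simplified sketch of the idea; the cleanup details may differ from my real code):

public void Commit()
{
    // Commit the outer transaction started in Begin(), then release the session.
    if (Transaction != null && Transaction.IsActive)
    {
        Transaction.Commit();
    }
    Session.Dispose();
}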
I am running into issues rolling back large batch processes. Because each CRUD call commits its own transaction, as seen above, by the time I try to roll back in my UnitOfWork the transaction is no longer active, so the rollback does nothing because the data has already been committed. The reason I commit in my CRUD layer is to persist data as quickly as possible without holding database locks for too long.
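To make the problem concrete, a large batch job effectively ends up doing this (orderService and ordersToImport are just illustrative names):

// Each Save() above commits its own inner transaction immediately...
foreach (var order in ordersToImport)
{
    orderService.Save(order);
}

// ...so when something fails partway through and I try this in my UnitOfWork,
// there is nothing left to roll back: every earlier row is already committed.
unitOfWork.Transaction.Rollback();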
What is the best course of action in a situation like this? Do I just make special CRUD operations that don't commit for batch jobs (roughly what I sketch below) and handle the commit at the end of the job, or is my Unit of Work / session-per-request logic just flawed? Any suggestions?
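For clarity, this is roughly what I mean by a batch-only save that skips the commit (just a sketch of the idea, not code I actually have):

public Key SaveDeferred<TEntity, Key>(TEntity entity)
{
    // No inner transaction and no commit here; the job-level (or request-level)
    // transaction decides when everything is committed or rolled back.
    var id = (Key)unitOfWork.Session.Save(entity);

    // Flush periodically so the session doesn't grow unbounded during a big batch.
    unitOfWork.Session.Flush();

    return id;
}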