Follow-up to this other question.
I'm trying to implement pessimistic locking to fix the concurrency issue I described in that question (please feel free to add to it), but it's not working for me.
I do a very simple test: I have two separate sites running that both increase a counter 500 times, and I run them simultaneously. In the end, I expect a certain column in my table to have, you guessed it, a value of 1000.
Here is the code. It's not production code, of course, but test code or not, it should still work, right?
for (int i = 0; i < 500; i++)
{
    // Read, increment and save within a single transaction, expecting
    // the pessimistic lock to block the other site until commit.
    var tx = this.userRepo.Session.BeginTransaction();
    var user = this.userRepo.GetById(42);
    user.Counter++;
    userRepo.Save(user);
    tx.Commit();
}
The GetById method uses LockMode.Upgrade:
public T GetById(int id)
{
    // Get<T> with LockMode.Upgrade should issue SELECT ... FOR UPDATE.
    T obj = Session.Get<T>(id, LockMode.Upgrade);
    return obj;
}
Now, using NHProfiler I see the following SQL statement:
SELECT Id FROM 'User' WHERE Id = 42 for update
but the result is a value of around 530, so about half of the updates are lost to concurrency. What am I doing wrong? I disabled the second-level cache for this test. Am I using the wrong lock mode? Should I specify an isolation level? Anything else? Thanks in advance.
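For what it's worth, if an explicit isolation level is the answer, I assume it would look something like the sketch below, using NHibernate's BeginTransaction overload that takes a System.Data.IsolationLevel. Serializable is just a placeholder; I haven't verified that this is the right level, or that it fixes anything:

using System.Data;

for (int i = 0; i < 500; i++)
{
    // Same loop as above, but with an explicit isolation level
    // passed to the transaction (assumption: Serializable).
    using (var tx = this.userRepo.Session.BeginTransaction(IsolationLevel.Serializable))
    {
        var user = this.userRepo.GetById(42);
        user.Counter++;
        userRepo.Save(user);
        tx.Commit();
    }
}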
EDIT: FluentNHibernate config:
Fluently.Configure()
    .Database(MySQLConfiguration.Standard.ConnectionString(connectionstring))
    .Mappings(m => assemblyTypes.Select(t => t.Assembly).ToList().ForEach(a => m.FluentMappings.AddFromAssembly(a)))
    .ExposeConfiguration(c => c.Properties.Add("hbm2ddl.keywords", "none"));
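And in case the isolation level belongs in the configuration instead, this is where I'd try to set it. The IsolationLevel call is my guess at the FluentNHibernate API (I believe it maps to NHibernate's connection.isolation property), so treat it as an untested assumption:

using System.Data;

Fluently.Configure()
    .Database(MySQLConfiguration.Standard
        .ConnectionString(connectionstring)
        // Assumption: sets connection.isolation for every session.
        .IsolationLevel(IsolationLevel.Serializable))
    .Mappings(m => assemblyTypes.Select(t => t.Assembly).ToList().ForEach(a => m.FluentMappings.AddFromAssembly(a)))
    .ExposeConfiguration(c => c.Properties.Add("hbm2ddl.keywords", "none"));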