
I'm experiencing a deadlock in an old version of one of our applications, on a single statement from the same stored procedure that was run at pretty much the same time by two different users.

Request 1:

UPDATE [UserExport].[Stat] WITH (ROWLOCK)
    SET BOKey = CustomerID + '|' + ISNULL(Period, '') + '|' + ISNULL(FECode, '')
    WHERE CompanyCode = '001' AND BOKey IS NULL AND UserCode = '53'

Request 2:

UPDATE [UserExport].[Stat] WITH (ROWLOCK)
    SET BOKey = CustomerID + '|' + ISNULL(Period, '') + '|' + ISNULL(FECode, '')
    WHERE CompanyCode = '001' AND BOKey IS NULL AND UserCode = '87'

SQL Profiler output (ridlock part only)

<ridlock fileid="3" pageid="55475" dbid="7" objectname="PRTE.UserExport.Stat" id="lock1b722b980" mode="X" associatedObjectId="72057594355384320">
  <owner-list>
     <owner id="process4a9708" mode="X"/>
  </owner-list>
  <waiter-list>
     <waiter id="process55d4c8" mode="U" requestType="wait"/>
  </waiter-list>
</ridlock>
<ridlock fileid="3" pageid="155604" dbid="7" objectname="PRTE.UserExport.Stat" id="lock1b7999580" mode="X" associatedObjectId="72057594355384320">
  <owner-list>
     <owner id="process55d4c8" mode="X"/>
  </owner-list>
  <waiter-list>
     <waiter id="process4a9708" mode="U" requestType="wait"/>
  </waiter-list>
</ridlock>

We are running this on SQL Server 2008 R2. The UserExport schema is used as a mirror of the dbo schema to store data to be exported, and none of its tables have indexes or triggers. I know the design is bad; it was actually rewritten properly in later versions. However, I still don't understand how the deadlock can occur on a RID, since there is no way both requests were updating the same row.

I have read here that an UPDATE statement with a ROWLOCK hint may lock more rows than it actually updates when running on a table with no index, because it also locks the other rows it reads, but I can't find any further explanation or any way to deal with it by changing only the query.

Does this mean that an UPDATE on a table with no index is always this risky?

Or am I misunderstanding something, and there is a way to guarantee that only the updated rows are locked?

Val
  • I'm skeptical of the assessment that the two provided UPDATE statements caused each other's deadlock. There might be much more going on in your S/P. – YoYo May 12 '18 at 16:09
  • I'm not sure what you mean. The deadlock is not a guess; it's a fact, and the Profiler explicitly tells me that it's between these two requests. – Val May 14 '18 at 15:10

1 Answer


Your notion about locking more than just the affected rows seems to be correct. Another post describes the same behaviour: https://stackoverflow.com/a/2335599/744133.

In SQL Server, the lock hints are applied to the objects scanned, not matched.

That's an important statement. With an index in place you would avoid scanning the entire table, but depending on what your index covers, you might still end up locking multiple rows. For example, if the same UserCode can appear under different CompanyCode values, you might end up locking all of those rows if you index only on UserCode.
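
For illustration, a minimal sketch of an index covering the columns from the WHERE clause in the question (the index name and column order are assumptions, not part of the original schema):

-- Hypothetical index so the UPDATE can seek instead of scanning the whole table.
CREATE NONCLUSTERED INDEX IX_Stat_CompanyCode_UserCode_BOKey
    ON [UserExport].[Stat] (CompanyCode, UserCode, BOKey);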

Rather than having to depend on indexes (they might not make sense on every combination of fields you filter on), I would suggest alternative approaches. You can try a two-step approach:

  1. Make sure your table has an indexed primary key. Ideally that would simply be a unique sequential id assigned to each record.
  2. For your update, first run a SELECT with your criteria to find the set of rows you want to affect. That will give you a set of unique keys.
  3. Run the UPDATE WITH (ROWLOCK) using the primary key. The simplest option is to write a programmatic loop; a set-based sketch is shown below.
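
A minimal, set-based sketch of steps 2 and 3, assuming a hypothetical identity primary key column named StatID (it does not exist in the original table):

-- Step 2: collect the keys of the rows to change.
DECLARE @Keys TABLE (StatID INT PRIMARY KEY);

INSERT INTO @Keys (StatID)
SELECT StatID
FROM [UserExport].[Stat]
WHERE CompanyCode = '001' AND BOKey IS NULL AND UserCode = '53';

-- Step 3: update strictly by primary key, so only the matched rows are touched.
UPDATE s
SET BOKey = s.CustomerID + '|' + ISNULL(s.Period, '') + '|' + ISNULL(s.FECode, '')
FROM [UserExport].[Stat] AS s WITH (ROWLOCK)
INNER JOIN @Keys AS k ON k.StatID = s.StatID;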

Oracle has the notion of SELECT FOR UPDATE, which does roughly the same thing as what I outlined above for T-SQL.
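
SQL Server has no SELECT FOR UPDATE syntax, but a somewhat similar effect can be achieved with the UPDLOCK hint inside a transaction; a minimal sketch, using the same hypothetical StatID key as above:

BEGIN TRANSACTION;

-- Take update locks on the rows we intend to modify, roughly analogous
-- to Oracle's SELECT FOR UPDATE.
SELECT StatID
FROM [UserExport].[Stat] WITH (UPDLOCK, ROWLOCK)
WHERE CompanyCode = '001' AND BOKey IS NULL AND UserCode = '53';

-- ... run the UPDATE by key here, then ...

COMMIT TRANSACTION;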

Another approach is to not depend on row locks at all: basically, implement an optimistic concurrent update.

This idea is outlined here:

https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/optimistic-concurrency

Locks are usually expensive operations and hard to handle; consequently they do not scale as well in a concurrent environment. Optimistic concurrent updates are much more lightweight and scalable; however, they require additional coding to implement.

One way to implement such a model is by introducing a version column. You update by also incrementing the version, while making sure to put the old version in the WHERE condition. If the update affects no rows, it means there was a concurrent update. You can also use a timestamp, or simply check the old value of the column you are trying to update.
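
A minimal sketch of the version-column idea; the VersionNo column, the StatID key, and the two variables are assumptions and do not exist in the original table:

DECLARE @StatID INT = 42;       -- key of the row being edited (example value)
DECLARE @OldVersion INT = 7;    -- version the caller last read (example value)

UPDATE [UserExport].[Stat]
SET BOKey = CustomerID + '|' + ISNULL(Period, '') + '|' + ISNULL(FECode, ''),
    VersionNo = VersionNo + 1
WHERE StatID = @StatID
  AND VersionNo = @OldVersion;

-- Zero rows affected means another session changed the row first,
-- i.e. a concurrent update was detected.
IF @@ROWCOUNT = 0
    RAISERROR('Failed: concurrent edit by another user.', 16, 1);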

YoYo
  • You may want to explore Kalen Delaney's book "SQL Server Concurrency: Locking, Blocking, and Row Versioning", which is a clear and authoritative explanation of locking in SQL Server. YoYo is encouraging you to look at optimistic updates, but that (unless you code very carefully) results in lost updates. I would stick with MS's default isolation level of Read Committed until you are very comfortable with optimistic locking. – benjamin moskovits May 14 '18 at 15:54
  • Yes, that is also an important note to make. This does result in lost updates, but they do not go unnoticed (if coded properly). Also, in a GUI, if both updates ended up succeeding (using locking), one of the users might not notice that their changes never actually took effect and were overwritten by another user. A failed update makes it plain and visible with a message like "Failed: Concurrent edit by another user". The expectation is that we update the state we are seeing in our GUI. – YoYo May 14 '18 at 16:04
  • I eventually managed to handle it with the try/catch/retry pattern (a minimal sketch follows below), since it's a really old application and I can't spend much time redesigning it. Thanks guys, this helped me better understand what's going on and what the possible approaches are. – Val May 15 '18 at 22:06
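
A minimal sketch of the try/catch/retry pattern Val mentions above; the retry count and delay are arbitrary choices, and error 1205 is SQL Server's deadlock-victim error:

DECLARE @Retry INT = 3;

WHILE @Retry > 0
BEGIN
    BEGIN TRY
        UPDATE [UserExport].[Stat] WITH (ROWLOCK)
            SET BOKey = CustomerID + '|' + ISNULL(Period, '') + '|' + ISNULL(FECode, '')
            WHERE CompanyCode = '001' AND BOKey IS NULL AND UserCode = '53';

        BREAK;   -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205 AND @Retry > 1
        BEGIN
            -- Chosen as deadlock victim: back off briefly and try again.
            SET @Retry = @Retry - 1;
            WAITFOR DELAY '00:00:01';
        END
        ELSE
        BEGIN
            -- Not a deadlock, or out of retries: re-raise and stop.
            DECLARE @Msg NVARCHAR(2048) = ERROR_MESSAGE();
            RAISERROR(@Msg, 16, 1);
            BREAK;
        END
    END CATCH
END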