
I'm trying to keep data consistency in my Azure SQL Server database and have implemented two approaches:

  1. check constraint
  2. insert/update trigger

Neither of them works, and I'm still able to reproduce a situation where my check is bypassed. The rule is simple: there can't be more than one active assignment per user.

Tasks:
 - Id
 - UserId
 - Status ('Active', 'Done')

User:
 - Id
 - ...

Approach #1 - Check Constraints

I've implemented a function to ensure data consistency and applied it as a check constraint:

create function [dbo].[fnCheckUserConcurrentAssignment]
(
    @id nvarchar(128),
    @status nvarchar(50) -- not used, but required so the constraint is re-checked when Status changes
)
returns bit
as
begin

    declare @result bit
    select @result = cast(case when (select count(t.Id) 
                                     from dbo.Tasks t
                                     where t.[UserId] = @id
                                         and t.[Status] != 'Done') > 1
                               then 1 
                               else 0 
                          end as bit)

    return @result
end

alter table dbo.Tasks
    with check add constraint [CK_CheckUserConcurrentAssignment]
    check (dbo.fnCheckUserConcurrentAssignment([UserId], [Status]) = 0)

Approach #2 - Trigger

alter trigger dbo.[TR_CheckUserConcurrentAssignment]
on dbo.Tasks
for insert, update
as begin

    if(exists(select conflictId from
                      (select (select top 1 t.Id from dbo.Tasks t 
                               where t.UserId = i.UserId 
                                and t.[Status] != N'Done' 
                                and t.Id != i.Id) as conflictId 
                       from inserted i
                       where i.UserId is not null) conflicts
              where conflictId is not null))
    begin  
        raiserror ('Concurrent user assignment detected.', 16, 1);  
        rollback;
    end

end

If I create lots of assignments in parallel (regularly > 10), some of them are rejected by the constraint/trigger, but others manage to concurrently save the same UserId in the database. As a result, my database data becomes inconsistent.

I've verified both approaches in Management Studio, and there they prevent me from corrupting my data: I'm unable to assign multiple 'Active' tasks to a given user.

What is important to say: I'm using Entity Framework 6.x on my backend to save the data (SaveChangesAsync), and every save action is executed in a separate transaction with the default transaction isolation level, Read Committed.
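
For reference, the database-level setting that actually applies can be checked with a simple diagnostic query (not part of my application code):

select name, is_read_committed_snapshot_on
from sys.databases
where name = db_name();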

What could be wrong in my approaches and how to keep my data consistent?


1 Answer


It is a classic race condition.

select @result = cast(case when (select count(t.Id) 
                                     from dbo.Tasks t
                                     where t.[UserId] = @id
                                         and t.[Status] != 'Done') > 1
                               then 1 
                               else 0 
                          end as bit)

If two threads run the fnCheckUserConcurrentAssignment function at the same time, they will get the same count from Tasks. Then each thread will proceed with inserting the row and the final state of the database would violate your constraint.
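
To see the race concretely, here is an illustrative two-session timeline (table values are hypothetical; under READ COMMITTED SNAPSHOT, which is the default on Azure SQL Database, readers do not block on writers):

-- Session 1
BEGIN TRAN;
INSERT INTO dbo.Tasks (Id, UserId, Status)
VALUES (N't1', N'u1', N'Active');
-- the CHECK function counts 'Active' tasks for 'u1' and sees only this row

-- Session 2, before Session 1 commits
BEGIN TRAN;
INSERT INTO dbo.Tasks (Id, UserId, Status)
VALUES (N't2', N'u1', N'Active');
-- the snapshot read does not see Session 1's uncommitted row,
-- so the check passes here as well

-- Session 1: COMMIT;
-- Session 2: COMMIT;
-- Result: 'u1' now has two 'Active' tasks and no error was raised.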

If you want to keep your approach with the function in the CHECK constraint, or the trigger, you should make sure that your transaction isolation level is set to SERIALIZABLE. Or use query hints to lock the table. Or use sp_getapplock to serialise calls to your function/trigger.
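
For illustration, here is a minimal sketch of the sp_getapplock variant applied to your trigger (the resource name is arbitrary; this coarse version serialises all writers to Tasks, and a per-user resource name would reduce contention):

alter trigger dbo.[TR_CheckUserConcurrentAssignment]
on dbo.Tasks
for insert, update
as begin

    declare @lockResult int;

    -- Acquire an exclusive application lock; with @LockOwner = 'Transaction'
    -- it is held until the surrounding transaction commits or rolls back.
    exec @lockResult = sp_getapplock
        @Resource = N'UserAssignmentCheck',
        @LockMode = N'Exclusive',
        @LockOwner = N'Transaction',
        @LockTimeout = 10000; -- milliseconds

    if (@lockResult < 0)
    begin
        raiserror ('Could not acquire assignment lock.', 16, 1);
        rollback;
        return;
    end

    -- ... the existing concurrent-assignment check from the question goes here ...

end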


In your case, the check is pretty simple, so it can be implemented without a trigger or function. I'd use a filtered unique index:

CREATE UNIQUE NONCLUSTERED INDEX [IX_UserID] ON [dbo].[Tasks]
(
    [UserID] ASC
) 
WHERE (Status = 'Active')

This unique index guarantees that there cannot be two rows with Status = 'Active' that share the same UserID.
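
To illustrate the behaviour (hypothetical rows; the error text is abridged):

INSERT INTO dbo.Tasks (Id, UserId, Status) VALUES (N't1', N'u1', N'Active'); -- succeeds
INSERT INTO dbo.Tasks (Id, UserId, Status) VALUES (N't2', N'u1', N'Active');
-- fails with error 2601: Cannot insert duplicate key row in object
-- 'dbo.Tasks' with unique index 'IX_UserID'.

UPDATE dbo.Tasks SET Status = N'Done' WHERE Id = N't1';
INSERT INTO dbo.Tasks (Id, UserId, Status) VALUES (N't2', N'u1', N'Active'); -- succeeds now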


There is a similar question on dba.se, How are my SQL Server constraints being bypassed?, with more detailed explanations. It mentions another possible solution, indexed views, which again boils down to a unique index.
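
For completeness, a minimal sketch of the indexed-view variant (the view and index names are mine):

CREATE VIEW dbo.vActiveAssignments
WITH SCHEMABINDING
AS
SELECT UserId
FROM dbo.Tasks
WHERE Status = N'Active';
GO

-- The unique clustered index on the view enforces at most one
-- 'Active' row per user, just like the filtered index above.
CREATE UNIQUE CLUSTERED INDEX [IX_vActiveAssignments_UserId]
ON dbo.vActiveAssignments (UserId);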

  • that works, and it's a surprise for me that check constraints are also subject to a race condition. For me, a check constraint was kind of like a unique index. In my case, I'm not obligated to use a function or trigger, and my condition is simple enough, so I went with the unique filtered index and it works. I still get exceptions in my code, but I can handle them gracefully as long as the data stays consistent all the time. Thanks! – Mando Dec 27 '16 at 08:40
  • Effectively, your check constraint with a function becomes a non-atomic operation that consists of two (or more) steps. Your function runs under the same transaction isolation level as the statement (`INSERT` or `UPDATE`) that triggered it. There is no "magic" in the engine around it that could prevent the race condition. When it comes to unique (filtered) indexes, the engine does have "magic" that makes sure the index remains unique even under high concurrent load. – Vladimir Baranov Dec 27 '16 at 10:45
  • If you want to research this topic further, I'd recommend searching for `check constraint udf`. For example: http://dba.stackexchange.com/questions/12779/how-are-my-sql-server-constraints-being-bypassed http://sqlblog.com/blogs/alexander_kuznetsov/archive/2009/07/01/when-check-constraints-using-udfs-fail-for-multirow-updates.aspx http://sqlblog.com/blogs/alexander_kuznetsov/archive/2009/06/25/scalar-udfs-wrapped-in-check-constraints-are-very-slow-and-may-fail-for-multirow-updates.aspx https://www.brentozar.com/archive/2016/04/another-hidden-parallelism-killer-scalar-udfs-check-constraints/ – Vladimir Baranov Dec 27 '16 at 10:52
  • it seems that Azure SQL Database changes my `Read Committed` isolation level (which I explicitly chose in my code) to `Read Committed Snapshot` http://borishristov.com/blog/surprises-and-a-bug-with-sql-azure-db/ – Mando Dec 27 '16 at 13:29
  • I came here to say "filtered unique constraint" myself! – Ben Thul Dec 27 '16 at 19:13