
I'm starting to work with SQL Server databases and I'm having a hard time understanding Transaction Isolation Levels and how they lock data.

I'm trying to accomplish the following simple task:

  • Accept a pair of integers [ID, counter] in a SQL stored procedure
  • Determine whether ID exists in a certain table: SELECT COUNT(*) FROM MyTable WHERE Id = {idParam}
  • If the previous COUNT statement returns 0, insert this ID and counter: INSERT INTO MyTable(Id, Counter) VALUES({idParam}, {counterParam})
  • If the COUNT statement returns 1, update the existing record: UPDATE MyTable SET Counter = Counter + {counterParam} WHERE Id = {idParam}
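For concreteness, here is a minimal T-SQL sketch of those steps as a stored procedure (the procedure name AddOrUpdateCounter is made up, the table dbo.MyTable(Id, Counter) comes from the statements above, and there is no transaction or locking yet):

    CREATE PROCEDURE dbo.AddOrUpdateCounter
        @Id INT,
        @Counter INT
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Step 2: does a row with this Id exist?
        IF (SELECT COUNT(*) FROM dbo.MyTable WHERE Id = @Id) = 0
            -- Step 3: no row yet, so insert it
            INSERT INTO dbo.MyTable (Id, Counter) VALUES (@Id, @Counter);
        ELSE
            -- Step 4: the row exists, so add to its counter
            UPDATE dbo.MyTable SET Counter = Counter + @Counter WHERE Id = @Id;
    END;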

Now, I understand I have to wrap this whole stored procedure in a transaction, and according to this MS article the appropriate isolation level would be SERIALIZABLE (it says: "No other transactions can modify data that has been read by the current transaction until the current transaction completes"). Please correct me if I'm wrong here.
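So my understanding is that the procedure body above would end up wrapped in something like this (just a sketch of what I think the article implies - correcting this is part of my question):

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;
        -- ... the COUNT / INSERT / UPDATE logic from the sketch above ...
    COMMIT TRANSACTION;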

Suppose I called the procedure with ID=1, so the first query would be SELECT COUNT(*) FROM MyTable WHERE Id = 1 (1st transaction began). Then, immediately after this query was executed, the procedure is called with ID=2 (2nd transaction began).

What I fail to understand is how much data would be locked during the execution of my stored procedure in this case:

  • If the 1st query of the 1st transaction returns 0 records, does this mean that 1st transaction locks nothing and other transactions are able to INSERT ID=1 before 1st transaction tries it?
  • Or does the 1st transaction lock the whole table making the 2nd transaction wait even though those 2 transactions can never try to read/update the same row?
  • Or does 1st transaction somehow forbid anyone else to read/write only records with ID=1 until it is completed?
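One way I thought of to check this myself is to hold the first transaction open and look at sys.dm_tran_locks (a sketch, using the same assumed dbo.MyTable with a key/index on Id):

    -- Session 1: run the existence check and keep the transaction open
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;
        SELECT COUNT(*) FROM dbo.MyTable WHERE Id = 1;

        -- While the transaction is still open, list the locks this session holds.
        -- Under SERIALIZABLE this should show KEY locks with range modes (e.g. RangeS-S)
        -- on the index covering Id, or coarser locks if no such index exists.
        SELECT resource_type, request_mode, request_status
        FROM sys.dm_tran_locks
        WHERE request_session_id = @@SPID;

        -- (a second session could now try the ID=2 call)
    COMMIT TRANSACTION;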
geppertuo

1 Answer


If your filter is on an index, that's what gets locked - under SERIALIZABLE the engine takes key-range locks on that index, so the range covering your Id is locked for the duration of the transaction regardless of whether the row already exists. Take care, though - it's very easy to turn a row or range lock into something nastier, up to a full table lock. And of course, it's easy to introduce deadlocks this way :)
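As an aside, if you do stick with the check-then-write pattern, a common way to avoid those deadlocks is to take an update-mode lock on the read itself, so two callers for the same Id queue up at the SELECT instead of both holding shared range locks and then fighting over the write. A sketch of the procedure body, reusing the assumed dbo.MyTable and the @Id/@Counter parameters from the question:

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;
        -- UPDLOCK + HOLDLOCK: take and hold an update-mode (range) lock during the check
        IF EXISTS (SELECT 1 FROM dbo.MyTable WITH (UPDLOCK, HOLDLOCK) WHERE Id = @Id)
            UPDATE dbo.MyTable SET Counter = Counter + @Counter WHERE Id = @Id;
        ELSE
            INSERT INTO dbo.MyTable (Id, Counter) VALUES (@Id, @Counter);
    COMMIT TRANSACTION;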

However, I'd suggest a different approach. First, try to do an insert. If it works, you're done - if it doesn't, you know you can safely do an atomic update. Very fast, very cheap, very reliable :)
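A sketch of that approach (procedure and table names are assumed from the question; 2627 and 2601 are SQL Server's duplicate-key error numbers):

    CREATE PROCEDURE dbo.UpsertCounter
        @Id INT,
        @Counter INT
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRY
            -- Optimistic path: just insert; a primary key or unique index on Id
            -- makes a duplicate fail immediately.
            INSERT INTO dbo.MyTable (Id, Counter) VALUES (@Id, @Counter);
        END TRY
        BEGIN CATCH
            -- 2627 = PK/unique constraint violation, 2601 = duplicate key in a unique index
            IF ERROR_NUMBER() NOT IN (2627, 2601)
                THROW;   -- not a duplicate-key error: rethrow it

            -- The row already exists, so a single atomic UPDATE is enough.
            UPDATE dbo.MyTable
            SET Counter = Counter + @Counter
            WHERE Id = @Id;
        END CATCH;
    END;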

Luaan
  • Hmm, if I always call the proc with the same ID, so I try to insert a duplicate key and get an insert error every time - is this really going to be fast? – holdenmcgrohen May 22 '16 at 19:04
  • @holdenmcgrohen Yup, very fast. Unique index checking *must* be fast. But profiling is always your friend :) In any case, I expect the OP's case is going to be entirely dominated by the client->server I/O anyway. – Luaan May 22 '16 at 21:06