
I have a SQL Server 2012 table that will contain 2.5 million rows at any one time. Items are always being written into the table, but the oldest rows in the table get truncated at the end of each day during a maintenance window.

I have .NET-based reporting dashboards that usually report against summary tables, though on the odd occasion they do need to fetch a few rows from this table, making use of the indexes that are in place.

When it does report against this table, it can prevent new rows from being written to this table for up to a minute, which is very bad for the product.

As it is a reporting platform and the rows in this table never get updated (only inserted - think Twitter streaming but for a different kind of data), it isn't always necessary to wait for a gap in the transactions that insert rows into this table.

When it comes to selecting data for reporting, would it be wise to use the SNAPSHOT isolation level within a transaction to select the data, or NOLOCK/READ UNCOMMITTED? Would creating a SqlTransaction around the SELECT statement still cause the inserts to block? At the moment I am not wrapping my SqlCommand instance in a transaction, though I realise this will still cause locking regardless.
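
For concreteness, here is roughly what I'm weighing up (the table and column names are made up for illustration; the real schema isn't shown here):

-- Option A: read under SNAPSHOT isolation (requires ALLOW_SNAPSHOT_ISOLATION ON for the database)
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT TOP (100) ItemId, CreatedAt, Payload
  FROM dbo.StreamItems
 WHERE CreatedAt >= DATEADD(HOUR, -1, SYSUTCDATETIME())
 ORDER BY CreatedAt DESC;
COMMIT TRANSACTION;

-- Option B: the NOLOCK table hint (READ UNCOMMITTED for this table only)
SELECT TOP (100) ItemId, CreatedAt, Payload
  FROM dbo.StreamItems WITH (NOLOCK)
 WHERE CreatedAt >= DATEADD(HOUR, -1, SYSUTCDATETIME())
 ORDER BY CreatedAt DESC;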

Ideally I'd like an outcome where the writes are never blocked, and the dashboards are as responsive as possible. What is my best play?

tommed
  • NOLOCK is not just about updating rows. You said this is for reporting; do these reports need to be accurate? If missing and/or duplicated information is acceptable you might be ok. You will never be in a situation where writes are never blocked. There may or may not be blocks at any level when you try to write data; your write process will have to wait until the lock is released. – Sean Lange Jul 10 '14 at 19:16
  • When you say accurate - does it matter that a few rows are missing? No - but does it matter that the numbers are garbage? Absolutely! Would running a SELECT statement within a SNAPSHOT transaction prevent dodgy results (even if they aren't always up-to-date)? If it would, and it would increase performance, then it sounds ideal...? – tommed Jul 10 '14 at 19:24
  • Isolation levels carry with them some of their own overhead. Snapshot isolation is not going to make your queries faster per se. When using NOLOCK the values won't be garbage, but you can and will occasionally have some missing rows. You will also sometimes get duplicate rows. Many times in reporting this is not a big deal because you are likely getting aggregate data and a small percentage off is not a big deal. There are lots of articles out there that explain this behavior. This one has a working example of nolock in progress. http://www.jasonstrate.com/2012/06/the-side-effect-of-nolock/ – Sean Lange Jul 10 '14 at 19:36
  • snapshot isolation sounds like it might be a great fit based on what you are explaining. Do some digging on it and make the choice that best fits your needs, but I would start there (a sketch of enabling it follows these comments). – Sean Lange Jul 10 '14 at 19:37
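
Since the comments point towards snapshot isolation, a minimal sketch of turning it on at the database level (the database name is hypothetical):

-- Run once per database before SNAPSHOT transactions can be used.
ALTER DATABASE ReportingDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optional alternative: make ordinary READ COMMITTED reads use row versioning instead of shared locks.
-- WITH ROLLBACK IMMEDIATE kicks out other connections, which this option needs in order to switch.
ALTER DATABASE ReportingDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;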

1 Answer


Post your query

In theory a SELECT should not block inserts.

By default a SELECT only takes shared locks.
Shared locks are acquired automatically during read operations and prevent other sessions from modifying the data while it is being read.

This should not block inserts to otherTable or joinTable:

select otherTable.*, joinTable.*
  from otherTable
  join joinTable
    on otherTable.joinID = joinTable.ID

But it does have the overhead of acquiring shared locks (the engine does not know you never update).
But if it is only fetching a few rows from joinTable then it should only be taking a few shared locks.
Post your query, query plan, and table definitions.
I suspect you have some weird stuff going on where it is taking a lot more locks than it needs.
It may be taking a lock on each row, or it may be escalating to a page or table lock.
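
If you want to see what the select is actually holding, a rough sketch is to run the report query in one window and, from another connection, look at sys.dm_tran_locks for that session (replace 53 with the reporting session's spid):

-- Summarise the locks held by the reporting session while its query runs.
SELECT resource_type,          -- OBJECT, PAGE, KEY, ...
       request_mode,           -- S, IS, X, ...
       request_status,
       COUNT(*) AS lock_count
  FROM sys.dm_tran_locks
 WHERE request_session_id = 53
 GROUP BY resource_type, request_mode, request_status;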

And look at the inserts. Are they taking locks they do not need?
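
When an insert does get stuck, something along these lines (a sketch, not specific to your schema) shows who is blocking it and on what kind of resource:

-- List requests that are currently blocked, the session blocking them,
-- and the statement being run (wait_resource shows key/page/object level).
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_resource,
       t.text AS running_sql
  FROM sys.dm_exec_requests AS r
 OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
 WHERE r.blocking_session_id <> 0;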

paparazzo