
I have a big table in SQL Server 2012; it contains about 34 million records. I have a batch program that inserts/updates the table every 5 minutes, and a web application that reads from the table at any time.

Every time the batch program writes to the table, the reads become really slow. I think it has something to do with the isolation level, but I'm not very familiar with SQL Server, so I don't know how to fix it.

The best scenario for me would be for the inserts not to block the reads. Is it possible to specify that in the query, or do I have to change the read queries to pick up "dirty rows"?

The application is not critically dependent on "fresh" data, so if I can force the DB to allow dirty reads that would be fine. But I don't know how to achieve this, or whether there is a better solution.

marc_s
Simon Edström
  • Inserts will always take exclusive locks on the newly inserted rows, and SQL Server has a lock-escalation threshold of roughly 5,000 locks; if you exceed it, the locks may be *escalated* to the table level. So the question is: do you insert your data in batches (transactions) of fewer than 5,000 rows at a time? If not, try to break up your INSERTs into smaller transactions (a sketch of this batching pattern follows these comments). – marc_s Nov 24 '12 at 17:22
  • There is no good reason why this should be happening, so there is more to this problem than what you have told us so far. For us to provide any useful help, you should post the table definition, including any key and index definitions. You should also show us what kind of SQL insert/update statements the batch program executes and what kind of SQL read statements the web application uses. Finally, give us some profile of the batch program: how many rows does it insert and update at a time, and how long does it take to run? – RBarryYoung Nov 24 '12 at 20:44
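
A minimal sketch of the batching pattern suggested in marc_s's comment, assuming a hypothetical staging table dbo.Staging and target table dbo.BigTable that both have Id and Payload columns; the point is to keep each transaction well under the ~5,000-lock threshold so locks are not escalated to a table lock:

    DECLARE @BatchSize int = 1000;
    DECLARE @Rows int = 1;

    WHILE @Rows > 0
    BEGIN
        BEGIN TRANSACTION;

        -- Copy one small batch of rows that are not in the target yet.
        INSERT INTO dbo.BigTable (Id, Payload)
        SELECT TOP (@BatchSize) s.Id, s.Payload
        FROM dbo.Staging AS s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.BigTable AS t WHERE t.Id = s.Id)
        ORDER BY s.Id;

        -- Capture the row count before COMMIT resets @@ROWCOUNT.
        SET @Rows = @@ROWCOUNT;

        COMMIT TRANSACTION;
    END;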

2 Answers


You certainly can't stop inserts/updates taking locks; things would go very wrong very quickly if you could. What you can do is set the isolation level on your read operations to, say, READ UNCOMMITTED, or specify that as a hint in a SELECT. They are only hints, though: if there is a risk of losing data integrity, as opposed to simply returning dirty data (and there are many pitfalls to that), the DBMS will quite rightly ignore them. There are other things you can do, but if you are serious about it, get a SQL Server DBA in; this sort of thing is well past gifted-amateur level, never mind novice.
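
For concreteness, the two forms mentioned above might look like this (the table and column names are placeholders):

    -- Option 1: change the isolation level for the reading session.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT Id, Payload
    FROM dbo.BigTable
    WHERE Id = 42;

    -- Option 2: apply the hint to a single table in a single query.
    SELECT Id, Payload
    FROM dbo.BigTable WITH (NOLOCK)
    WHERE Id = 42;

Both forms accept dirty reads: rows may be uncommitted, duplicated, or missing, as the comments below point out.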

Tony Hopkinson
  • "read uncommitted" is exactly equal to NOLOCK with exactly the same problems like queries aborting randomly. And the locking hints are not hints, but exact. – usr Nov 24 '12 at 19:27
  • "Read Uncommitted" is exactly what you do NOT want to do. Read Committed should just skip the uncommitted rows, unless the locks are getting escalated (the likely problem here), which needs to be addressed in its own right. – RBarryYoung Nov 24 '12 at 20:47
  • Did I say he should do it? No, I said that's what you have to do if you want to dodge as many locks as possible. Obviously, the more you dodge, the more likely the data you get back is going to be iffy. Read Committed is more likely to be escalated, and there are scenarios where hints will be ignored; they are not always exact. That, plus the other caveats above, is why I said to get a DBA in. I'm well aware that I don't know enough about the subject, and far less about the OP's needs, to go making pronouncements about exactly what he should do; perhaps you should take that on board yourselves... – Tony Hopkinson Nov 25 '12 at 18:18

There is an easy answer and a hard answer:

It is easy to solve if you can turn on and use snapshot isolation. That solves blocking and consistency problems for readers.

If you can't do that, the problem might be hard to solve, but there is not enough information here to answer.

So try to use snapshot isolation for readers and be sure to read up on it before using it.
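
A minimal sketch of what enabling it can look like, assuming a database named MyDatabase and a table dbo.BigTable (placeholder names); note that these options add row-versioning overhead in tempdb, and switching on READ_COMMITTED_SNAPSHOT needs a moment with no other active connections:

    -- One-time database setup (requires ALTER DATABASE permission).
    ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Optionally make plain READ COMMITTED reads use row versioning too,
    -- so existing read queries stop blocking without any query changes:
    -- ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

    -- A reader that opts in to snapshot isolation explicitly:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

    BEGIN TRANSACTION;

    SELECT Id, Payload
    FROM dbo.BigTable
    WHERE Id BETWEEN 1 AND 1000;

    COMMIT TRANSACTION;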

usr