
We have two tables that receive 1 million+ insertions per minute. These tables are heavily indexed, and the indexes can't be removed because they support business requirements. Due to the high volume of insertions we are seeing PAGELATCH_EX and PAGELATCH_SH waits, and these latches further slow down insertions.

I could solve that using the approach described in "Removing PAGELATCH with randomized ID instead of GUID".
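For context, the core idea of that approach is to spread concurrent inserts across several B-tree insertion points instead of one hot last page, by prefixing the clustered key with a small computed bucket. A minimal sketch, assuming a hypothetical table (the table and column names below are illustrative, not from the original article):

```sql
-- Hypothetical table: spread inserts across 8 insertion points by
-- prefixing the clustered key with a persisted computed bucket (ID % 8).
-- Each bucket gets its own "last page", so PAGELATCH contention is divided.
CREATE TABLE dbo.EventLog
(
    ID              BIGINT IDENTITY(1,1) NOT NULL,
    InsertionBucket AS CONVERT(TINYINT, ID % 8) PERSISTED NOT NULL,
    CreatedAt       DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Payload         VARCHAR(400) NOT NULL,
    CONSTRAINT PK_EventLog PRIMARY KEY CLUSTERED (InsertionBucket, ID)
);
```

The trade-off is that range scans ordered purely by ID now have to touch all buckets, so this only pays off when insert throughput, not read locality, is the bottleneck.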

My real question is:

One of the nonclustered indexes on the table is keyed on the record insertion date, so it also receives sequential inserts. We see PAGELATCH_EX and PAGELATCH_SH because the last page becomes a hotspot under sequential inserts. But why am I not seeing PAGELATCH_EX and PAGELATCH_SH on the nonclustered index, which also has sequential keys?
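One way to check where the latch waits actually accumulate is the per-index counters in `sys.dm_db_index_operational_stats` (a real DMV; the table name below is a placeholder):

```sql
-- Per-index page-latch wait counts and total wait time for one table.
-- Replace dbo.EventLog with the actual table name.
SELECT  i.name AS index_name,
        ios.page_latch_wait_count,
        ios.page_latch_wait_in_ms
FROM    sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.EventLog'),
                                          NULL, NULL) AS ios
JOIN    sys.indexes AS i
        ON  i.object_id = ios.object_id
        AND i.index_id  = ios.index_id
ORDER BY ios.page_latch_wait_in_ms DESC;
```

Comparing `page_latch_wait_in_ms` between the clustered index and the date-keyed nonclustered index would show directly whether the nonclustered index is truly contention-free or just less contended.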

  • It's a great question and one that I've wondered about recently with some issues I've seen in my environment. My suspicion is that the latch contention on the clustered index somewhat insulates the non-clustered index from the same. That is, if a given process is wedged trying to insert into the top-level b-tree, it's implicitly blocked from inserting into any non-clustered indices. – Ben Thul Dec 21 '22 at 16:23
  • What's the table definition including all indexes for the case that does **not** have the page latch problem? – Martin Smith Jan 02 '23 at 16:29
  • While you can micro-tune the queries for sure, 1 million row insertions per minute plus business-user queries stinks. Sounds like a mixed role for the db. Why don't you move the data out to a star schema periodically, where you can employ whatever indexes the business users need, and even compress or use columnstore for the facts for superb aggregation capabilities, if that suits the needs. – Avi Jan 02 '23 at 16:39
  • @Avi This is the long-term plan, but right now it's not possible. Also, the question is more from a theoretical and understanding PoV. – UVData Jan 03 '23 at 15:54

0 Answers