
Our Java application cleans up data from a log table through a cron job. Here is the query that is executed for clean-up:

DELETE FROM AU_TRANSACTIONDATA 
WHERE AU_ACTIVITYENDTIME != 0 
  AND AU_ACTIVITYENDTIME <= 1464570001151

We have an index on AU_ACTIVITYENDTIME column of this table:

CREATE INDEX [IX_AU_TRANSDATA_ENDTIME] 
    ON [AU_TRANSACTIONDATA]([AU_ACTIVITYENDTIME]) ON [PRIMARY];

Our application dumps transaction data (generated when APIs in our application are executed) into this table. Here is the INSERT query:

INSERT INTO AU_TRANSACTIONDATA (AU_TRANSACTIONID, AU_TRANSACTIONNAME, AU_TRANSACTIONDOMAINID, AU_ACTIVITYNAME, AU_ACTIVITYID, AU_ACTIVITYPID, AU_ACTIVITYTYPE, AU_ACTIVITYSTARTTIME, AU_ACTIVITYENDTIME, AU_ACTIVITYSTATUS, AU_CORRECTDATA, AU_ERRORDATA, AU_USERID, AU_GROUPID, AU_NODENAME, AU_TRANSACTIONDESCRIPTION, AU_SEQUENCEID, AU_TRANSSEQUENCEID) 
VALUES (@P0, @P1, @P2, @P3, @P4, @P5, @P6, @P7, @P8, @P9, @P10, @P11, @P12, @P13, @P14, @P15, @P16, @P17)

We are facing an issue when log clean-up (the DELETE query) and data processing (the INSERT queries) happen simultaneously. We had around 1.5 million records to clean up, so the DELETE query took some time. During that time, processing was blocked and no INSERT queries went through.

Here are the logs:

SPID at Head of Blocking Chain:

SPID [ecid]: 3524 [0]

Blocked by SPID: 0

Client Machine: xxxxx

Client Process ID: 123

Application: jTDS

Login Name: xxxx

Last Batch: 5/30/2016 9:06:56 PM

Wait Type:

Wait Resource:

Wait Time: 00:00:00

Database:

Command Text:

DELETE FROM AU_TRANSACTIONDATA 
WHERE AU_ACTIVITYENDTIME != 0 AND AU_ACTIVITYENDTIME <= 1464570001151
----------------------------------------------------------------------

Blocked SPID:

SPID [ecid]: 211 [0]

Client Machine: xxxxx

Client Process ID: 123

Application: jTDS

Login Name: xxxxx

Last Batch: 5/30/2016 9:06:56 PM

Wait Type: LCK_M_IX

Wait Resource: AU_TRANSACTIONDATA.IX_AU_TRANSDATA_ENDTIME

Wait Time: 00:00:24

Database: xxxx

Command Text:

INSERT INTO AU_TRANSACTIONDATA (AU_TRANSACTIONID, AU_TRANSACTIONNAME, AU_TRANSACTIONDOMAINID, AU_ACTIVITYNAME, AU_ACTIVITYID, AU_ACTIVITYPID, AU_ACTIVITYTYPE, AU_ACTIVITYSTARTTIME, AU_ACTIVITYENDTIME, AU_ACTIVITYSTATUS, AU_CORRECTDATA, AU_ERRORDATA, AU_USERID, AU_GROUPID, AU_NODENAME, AU_TRANSACTIONDESCRIPTION, AU_SEQUENCEID, AU_TRANSSEQUENCEID) VALUES ( @P0 , @P1 , @P2 , @P3 , @P4 , @P5 , @P6 , @P7 , @P8 , @P9 , @P10 , @P11 , @P12 , @P13 , @P14 , @P15 , @P16 , @P17 )

The logs show that the INSERT statements are waiting on the resource AU_TRANSACTIONDATA.IX_AU_TRANSDATA_ENDTIME (which denotes an index) with wait type LCK_M_IX. In some logs, we have seen the wait resource be AU_TRANSACTIONDATA, which is the table itself.
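While the DELETE is running, the locks held and requested in the database can be inspected from another session. The following is only an illustrative diagnostic against the standard `sys.dm_tran_locks` DMV, not part of our application:

```sql
-- Run in a separate session while the DELETE is executing.
-- After lock escalation, the deleting session shows a single
-- exclusive (X) lock with resource_type = OBJECT (the whole table)
-- instead of thousands of KEY locks on the index.
SELECT request_session_id,
       resource_type,      -- OBJECT, PAGE, KEY, ...
       request_mode,       -- X, IX, ...
       request_status,     -- GRANT or WAIT
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
GROUP BY request_session_id, resource_type, request_mode, request_status
ORDER BY request_session_id;
```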

Can you please explain the following:

  1. Why are INSERT queries getting blocked while the DELETE query for log clean-up is executing?
  2. What does the LCK_M_IX wait type mean, and how can it be resolved?
  3. Does the DELETE query lock the whole table, or does it place an exclusive (X) lock on the index?

I am not familiar with wait types and locking strategies in SQL Server, so any help in this regard would be appreciated.

EDIT: We already tried deleting the data in chunks, i.e. 10,000 rows at a time, but it didn't help. Here is the new DELETE query:

SET ROWCOUNT 10000
delete_more:
     DELETE FROM AU_TRANSACTIONDATA 
     WHERE AU_ACTIVITYENDTIME != 0 AND AU_ACTIVITYENDTIME <= 1464570001151

IF @@ROWCOUNT > 0 GOTO delete_more
SET ROWCOUNT 0
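For reference, `SET ROWCOUNT` is deprecated for limiting DELETE/INSERT/UPDATE statements in newer SQL Server versions; the same chunked delete can be sketched with `DELETE TOP (...)`, keeping the batch size from above:

```sql
-- Chunked delete using DELETE TOP instead of the deprecated
-- SET ROWCOUNT; each iteration removes at most 10,000 rows.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000)
    FROM AU_TRANSACTIONDATA
    WHERE AU_ACTIVITYENDTIME != 0
      AND AU_ACTIVITYENDTIME <= 1464570001151;

    IF @@ROWCOUNT = 0
        BREAK;
END
```

Note that with a batch size of 10,000, each statement still locks far more than ~5,000 rows, so table-level lock escalation can still occur per batch.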
    If any given session tries to do something to the database that requires locks, and within a single transaction, you're doing operations like `DELETE` or `UPDATE` to more than **5000 rows**, SQL Server will do a **lock escalation** - instead of handling 5000+ individual row-level locks, it will **exclusively lock** the **entire table**, so no other operations - not even `SELECT` queries - are possible anymore until that transaction has been committed (or rolled back). – marc_s Jun 02 '16 at 20:41
  • Thanks Marc, I thought inserts would not be blocked (since those rows don't even exist yet under the default isolation level), but my tests now proved otherwise – TheGameiswar Jun 02 '16 at 20:53
  • Thanks Marc for your response. So if we delete data in chunks with a row count < 5000, will it maintain row-level locks and not **exclusively lock** the **entire table**? – Aman Jun 02 '16 at 21:02
  • Yes, exactly - below 5000 rows for a single transaction, SQL Server will handle individual row-level locks, and the deletion and insert should be able to work in parallel at the same time – marc_s Jun 03 '16 at 04:41
  • 1
    @TheGameiswar: if the *entire table* is exclusively locked (due to a `DELETE` over more than 5000 rows), then nothing goes anymore - no `SELECT`, no `INSERT` - nothing. – marc_s Jun 03 '16 at 04:42

1 Answer


If any given session does something in the database that requires exclusive locks (like INSERT, UPDATE, DELETE), and a single statement acquires locks on more than roughly 5,000 rows, SQL Server performs a lock escalation.

Instead of handling 5,000+ individual row-level locks, it exclusively locks the entire table, so no other operations - not even SELECT queries - are possible anymore until that transaction has been committed (or rolled back).
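A common way to apply this in practice, sketched against the table from the question, is to delete in batches small enough to stay under the escalation threshold and let each batch commit on its own:

```sql
-- Delete in batches of 4,500 rows so each statement acquires fewer
-- than ~5,000 row locks and never triggers table-level escalation.
-- In auto-commit mode each DELETE is its own transaction, so locks
-- are released between batches and concurrent INSERTs can proceed.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (4500)
    FROM AU_TRANSACTIONDATA
    WHERE AU_ACTIVITYENDTIME != 0
      AND AU_ACTIVITYENDTIME <= 1464570001151;

    SET @rows = @@ROWCOUNT;
END
```

If the whole loop is wrapped in one explicit transaction instead, the row locks accumulate across batches and escalation (or simply massive blocking) returns, which is why committing per batch matters here.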

  • It helped, thank you! What happens if we turn auto-commit OFF during the DELETE operation? We are deleting around 2M records (in chunks of 4,500) from the table, and with auto-commit mode turned OFF we see some INSERT operations getting blocked and the SQL Server instance becoming unresponsive at times. But with auto-commit mode turned ON, everything works like a charm. Any help would be appreciated. – Aman Jun 07 '16 at 20:12