
I have a table with more than 20 million rows, and when I do:

DELETE [Table] WHERE ID = ?

It takes over 40 seconds. The ID column is clustered.

Is this to be expected, or is it possible to optimize this?

Tao
Erik Sundström

3 Answers


In addition to the fine points JNK included in their answer, one particular killer I've seen is deleting rows from a table that is referenced by one or more foreign key constraints when the referencing column(s) in the referencing table(s) aren't indexed. That forces a scan of each of those tables before the delete can be accepted.
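As a sketch of how to act on this (table and column names here are made up for illustration; `[Table]` stands for the table you're deleting from), you can first find the foreign keys that reference it, then index the referencing column in each child table so the constraint check becomes a seek instead of a scan:

```sql
-- Find foreign key columns that reference the table you're deleting from.
SELECT fk.name                            AS constraint_name,
       OBJECT_NAME(fk.parent_object_id)   AS referencing_table,
       c.name                             AS referencing_column
FROM sys.foreign_keys fk
JOIN sys.foreign_key_columns fkc
  ON fkc.constraint_object_id = fk.object_id
JOIN sys.columns c
  ON c.object_id = fkc.parent_object_id
 AND c.column_id = fkc.parent_column_id
WHERE fk.referenced_object_id = OBJECT_ID('[Table]');

-- Then, for each referencing table/column the query returns
-- (hypothetical example: Orders.TableID), create an index:
CREATE INDEX IX_Orders_TableID ON Orders (TableID);
```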

Damien_The_Unbeliever
    The problem was that the related tables didn't have an index on the foreign key column. That made the delete slow. Created indexes on the foreign key column in all related tables, and the delete took 0 seconds. – Erik Sundström Jun 20 '11 at 13:04
  • Yup, adding an index made all the difference with our issue. But, ironically, so did simply running "UPDATE STATISTICS [tableName]". Worth a try. – Mike Gledhill Feb 07 '17 at 10:03
  • Thanks! I didn't spot that the table scan was the foreign table, not the table I was deleting from! – Luke Briner Apr 07 '22 at 16:23

This is going to depend on a lot of factors that you don't tell us about...

How many rows are deleted? More rows obviously means more time.

Are there other indexes? Every index needs to get updated, not just the clustered one. If you are deleting through 10 indexes it will take about 10x as long (very roughly).

Is there other activity? If there are updates or inserts happening there are very likely waits and contention.

Also very generally speaking, the number of seconds an operation takes is HIGHLY dependent on your hardware setup. If you ran this on a desktop machine vs. a server with a high-performance array and 12 cores the expectations will be very different.
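To check the second factor above — how many indexes the delete has to maintain — something like this lists them (a sketch; `[Table]` stands for your table name):

```sql
-- Every index returned here must be updated for each deleted row.
SELECT i.name, i.type_desc
FROM sys.indexes i
WHERE i.object_id = OBJECT_ID('[Table]')
  AND i.type_desc <> 'HEAP';
```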

JNK

Also try deleting the data in batches. Example:

SET ROWCOUNT 10000;

DELETE [table] WHERE id = ?;

WHILE @@ROWCOUNT > 0
BEGIN
    DELETE [table] WHERE id = ?;
END

SET ROWCOUNT 0;  -- reset, so later statements in the session aren't capped
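On newer SQL Server versions, where SET ROWCOUNT no longer affects DML statements (see the comment below), the same batching can be done with DELETE TOP. The batch size of 10,000 is just an example; tune it for your workload:

```sql
-- Delete in batches of 10,000 rows until no matching rows remain.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM [table] WHERE id = ?;
    IF @@ROWCOUNT = 0 BREAK;
END
```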
Madhivanan
    This improves concurrency by reducing lock time and contention for the table / indexes. It probably does not improve performance though. – Yuck Jun 20 '11 at 14:16
  • Support for setting rowcount for DML commands is going away: Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in a future release of SQL Server. Avoid using SET ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development work, and plan to modify applications that currently use it. For a similar behavior, use the TOP syntax. https://learn.microsoft.com/en-us/sql/t-sql/statements/set-rowcount-transact-sql?view=sql-server-ver15 – Brandon Mar 08 '22 at 17:57