8

Suppose I have a table with 10,000,000 records. What is the difference between these two solutions?

  1. Delete all of the data in one statement:

    DELETE FROM MyTable
    
  2. Delete all of the data from an application, row by row:

    DELETE FROM MyTable WHERE ID = @SelectedID
    

Does the first solution have the best performance? What is the impact on the log and on performance?

masoud ramezani

7 Answers

23

If you need to restrict which rows you delete rather than doing a complete delete, or you can't use TRUNCATE TABLE (e.g. the table is referenced by a FK constraint, or is included in an indexed view), then you can do the delete in chunks:

DECLARE @RowsDeleted INTEGER
SET @RowsDeleted = 1

WHILE (@RowsDeleted > 0)
    BEGIN
        -- delete 10,000 rows a time
        DELETE TOP (10000) FROM MyTable [WHERE .....] -- WHERE is optional
        SET @RowsDeleted = @@ROWCOUNT
    END

Generally, TRUNCATE is the best way and I'd use that if possible. But it cannot be used in all scenarios. Also, note that TRUNCATE will reset the IDENTITY value for the table if there is one.
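
As a minimal sketch of that IDENTITY reset (dbo.DemoTable is a hypothetical table used only for this illustration):

CREATE TABLE dbo.DemoTable (ID INT IDENTITY(1,1) PRIMARY KEY, Val VARCHAR(10))
INSERT INTO dbo.DemoTable (Val) VALUES ('a')
INSERT INTO dbo.DemoTable (Val) VALUES ('b')  -- rows get IDs 1 and 2
TRUNCATE TABLE dbo.DemoTable
INSERT INTO dbo.DemoTable (Val) VALUES ('c')  -- the IDENTITY restarts, so this row gets ID 1
SELECT ID, Val FROM dbo.DemoTable             -- returns (1, 'c'); a DELETE would have left the next ID at 3
DROP TABLE dbo.DemoTable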

If you are using SQL 2000 or earlier, the TOP condition is not available for DELETE, so you can use SET ROWCOUNT instead:

DECLARE @RowsDeleted INTEGER
SET @RowsDeleted = 1
SET ROWCOUNT 10000 -- delete 10,000 rows a time

WHILE (@RowsDeleted > 0)
    BEGIN
        DELETE FROM MyTable [WHERE .....] -- WHERE is optional
        SET @RowsDeleted = @@ROWCOUNT
    END

SET ROWCOUNT 0 -- reset, so later statements in the session are not silently limited
AdaTheDev
  • You may want to get away from using SET ROWCOUNT in favour of SELECT/INSERT/UPDATE/DELETE TOP (N)... The reason? Have a look here: http://msdn.microsoft.com/en-us/library/ms143729.aspx and here: https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=282528 – Frank Kalis Jan 26 '10 at 11:04
  • Good point (assuming SQL 2005 or later), which is probably a safe bet. Updated my answer – AdaTheDev Jan 26 '10 at 11:17
14

If you have that many records in your table and you want to delete them all, you should consider truncate <table> instead of delete from <table>. It will be much faster, but be aware that it will not fire any DELETE triggers.

See here for more details (SQL Server 2000 in this case): http://msdn.microsoft.com/en-us/library/aa260621%28SQL.80%29.aspx

Deleting the table contents from the application row by row will take a very long time, because the DBMS cannot optimize anything; it doesn't know in advance that you are going to delete everything.
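
As a sketch of the contrast (this assumes no foreign keys reference MyTable, which TRUNCATE requires):

-- Fully logged, removes rows one at a time, fires any DELETE triggers:
DELETE FROM MyTable

-- Deallocates whole data pages with minimal logging and does not fire DELETE triggers;
-- fails if MyTable is referenced by a foreign key constraint:
TRUNCATE TABLE MyTable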

Nils Schmidt
  • You can roll back a truncate if you are still within the transaction scope in which you performed it; contrary to popular belief it is transactionalised: the deallocated IAM is not deleted until the commit, so the truncate can be rolled back by reinstating the IAM. – Andrew Jan 26 '10 at 10:00
  • Just read up on it, and you are right: in SQL Server you may roll back a truncate. I had Oracle in mind, where it is not possible (according to the documentation). – Nils Schmidt Jan 26 '10 at 10:11
  • I can't use TRUNCATE because MyTable has a foreign key; also, I need a WHERE clause to filter the data – masoud ramezani Jan 26 '10 at 10:21
  • If you need a WHERE clause, you cannot use truncate. I guess we misunderstood your question; if you don't want to delete ALL the data in the table, see the first part of Andrew's answer for the distinction between options one and two. – Nils Schmidt Jan 26 '10 at 10:27
3

The first has clearly better performance.

When you specify DELETE [MyTable], it will simply erase everything without checking IDs. The second one will waste time and disk operations locating each respective record before deleting it.

It also gets worse because every time a record disappears from the middle of the table, the engine may want to condense data on disk, thus wasting time and work again.

Maybe a better idea would be to delete data based on clustered index columns in descending order. Then the table will basically be truncated from the end at every delete operation.
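
A hedged sketch of that idea, combined with the chunked approach from the accepted answer (this assumes SQL 2005 or later and that ID is the clustered index key; the batch size is arbitrary):

DECLARE @RowsDeleted INTEGER
SET @RowsDeleted = 1

WHILE (@RowsDeleted > 0)
    BEGIN
        -- delete from the "end" of the clustered index, 10,000 rows at a time
        DELETE FROM MyTable
        WHERE ID IN (SELECT TOP (10000) ID FROM MyTable ORDER BY ID DESC)
        SET @RowsDeleted = @@ROWCOUNT
    END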

  • You say: "Maybe a better idea would be to delete data based on clustered index columns in descending order. Then the table will basically be truncated from the end at every delete operation." Can you describe this in more detail? – masoud ramezani Jan 26 '10 at 10:19
  • The database engine allocates data physically on disk in clustered index order. If you were to delete the records with the highest index values, that would basically amount to cutting off the tail of the file, without the condensing of data that might occur if you removed something from the middle of the file. This is especially important when you are adding records: try to have them appended to the end of the file. I can imagine removing records from the end would improve performance as well. – Jan 26 '10 at 10:52
3

Option 1 will create a very large transaction and have a big impact on the log and on performance, as well as escalating locks so that the table becomes unavailable. Option 2 will be slower, although it will generate less impact on the log (assuming bulk-logged or full recovery model).

If you want to get rid of all the data, TRUNCATE TABLE MyTable would be faster than both; although it has no facility to filter rows, it performs a metadata change behind the scenes and basically drops the IAM on the floor for the table in question.
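
If you want to see that log impact for yourself, one simple check (a sketch; run it before and after each approach) is:

DBCC SQLPERF(LOGSPACE) -- reports log size in MB and percent used for every database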

Andrew
2

The best performance for clearing a table comes from TRUNCATE TABLE MyTable. See http://msdn.microsoft.com/en-us/library/ms177570.aspx for a more verbose explanation.

Scoregraphic
1

Found this post on Microsoft TechNet.

Basically, it recommends:

  1. Using SELECT INTO, copy the data that you want to KEEP to an intermediate table;
  2. Truncate the source table;
  3. Using INSERT INTO, copy the data back from the intermediate table to the source table.


BEGIN TRANSACTION

   SELECT  *
   INTO    dbo.bigtable_intermediate
   FROM    dbo.bigtable
   WHERE   Id % 2 = 0;

   TRUNCATE TABLE dbo.bigtable;

   SET IDENTITY_INSERT dbo.bigtable ON;
   INSERT INTO dbo.bigtable WITH (TABLOCK) (Id, c1, c2, c3)
   SELECT Id, c1, c2, c3 FROM dbo.bigtable_intermediate ORDER BY Id;
   SET IDENTITY_INSERT dbo.bigtable OFF;

ROLLBACK TRANSACTION -- dry run only; see the comments below before running this for real
Bob Rivers
  • If you want to actually run it, replace ROLLBACK TRANSACTION with COMMIT TRANSACTION. Also, you need to drop your intermediate table, or the next execution of this query will fail, because SELECT INTO fails when the target table exists. So just add this before committing the transaction: `DROP TABLE dbo.bigtable_intermediate;` – Lech Osiński Jan 21 '21 at 14:28
  • I think you also need to temporarily set the recovery model to BULK_LOGGED. `ALTER DATABASE YourDbName SET RECOVERY BULK_LOGGED; GO`. Check this link for details: https://learn.microsoft.com/en-us/sql/t-sql/statements/insert-transact-sql?view=sql-server-ver15#q-inserting-data-into-a-heap-with-minimal-logging – Lech Osiński Jan 21 '21 at 14:51
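
Putting the two comments above together, a sketch of how the tail of the script would change for a real run (everything up to the final SET IDENTITY_INSERT stays as in the answer):

   SET IDENTITY_INSERT dbo.bigtable OFF;
   DROP TABLE dbo.bigtable_intermediate; -- SELECT INTO fails if the target table already exists
COMMIT TRANSACTION -- instead of ROLLBACK, once the copied data has been verified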
0

The first will delete all the data from the table and will have better performance than the second, which deletes only the data matching a specific key.

Now, if you have to delete all the data from the table and you don't rely on being able to roll back, think about using TRUNCATE TABLE.

Patrick