
On production servers I sometimes need to delete a few hundred thousand rows. The query is optimized, but deleting them all at once would freeze the server, so I do it manually in chunks, deleting for example 5,000 rows at a time. This lets the server clear pending queries and makes the table locks less obvious to end users. I use the MyISAM engine.
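Roughly, each manual chunk looks like this (a sketch only; the table name `mytable` and the date condition are hypothetical stand-ins for my real query):

```sql
-- One chunk of the manual process: delete at most 5,000 matching rows,
-- then wait for pending queries to clear before running it again.
-- Repeat until the statement reports 0 rows affected.
DELETE FROM mytable
WHERE created_at < '2015-01-01'
LIMIT 5000;
```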

Isn't there a tool/command/method that runs long queries like this at low priority, or in chunks, or in some other way that doesn't get the server stuck, for cases where it's OK if the operation takes a long time to complete?

What are the best practices?

Niro
  • How do you do this now? Do you use some kind of loop for deleting rows, or do you delete everything with one query? – Richard87 Jul 23 '15 at 16:03
  • I run the query with a LIMIT, tuned so that each run returns after, for example, 10 seconds, and then repeat the query manually. – Niro Jul 23 '15 at 16:09
  • Ahh, okay, I hoped you were using a loop; in that case **START TRANSACTION** and **COMMIT** speed things up considerably :) – Richard87 Jul 23 '15 at 16:11
  • If you don't want it to freeze, stop using MyISAM. Changes to a MyISAM table lock the entire table for every operation. InnoDB (and others, but InnoDB is the most commonly used) uses row-level locking for most operations. Also/alternatively, partition the table on whatever you're matching for the delete and just drop the partition (see the sketch after these comments). Dropping a partition momentarily locks the full table (as it's technically a change to the table structure, not a data change), but it should be far faster than deleting every row. – Jul 23 '15 at 17:57
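A sketch of the partitioning approach from the last comment, assuming (hypothetically) that the delete condition is a date range on a `created_at` column; note that MySQL requires the partitioning column to appear in every unique key on the table:

```sql
-- Hypothetical setup: range-partition the table by month on created_at
-- so that old rows land in their own partition.
ALTER TABLE mytable
PARTITION BY RANGE (TO_DAYS(created_at)) (
  PARTITION p201506 VALUES LESS THAN (TO_DAYS('2015-07-01')),
  PARTITION p201507 VALUES LESS THAN (TO_DAYS('2015-08-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);

-- Dropping a partition discards all of its rows at once; it briefly
-- locks the table but is far faster than row-by-row deletion.
ALTER TABLE mytable DROP PARTITION p201506;
```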

1 Answer


You can try adding the LOW_PRIORITY keyword to your DELETE query and see how that does, for example `DELETE LOW_PRIORITY FROM tablenamegoeshere ...`. With LOW_PRIORITY, the server delays the DELETE until no other clients are reading from the table; it only affects storage engines that use table-level locking, such as MyISAM, so it fits your setup.

Check out https://dev.mysql.com/doc/refman/5.5/en/delete.html for more details.
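For example, combined with the chunking you are already doing (the table name and condition below are placeholders):

```sql
-- LOW_PRIORITY delays the DELETE until no other clients are reading
-- from the table; it only has an effect on table-level-locking engines
-- such as MyISAM. LIMIT keeps each run to a small chunk.
DELETE LOW_PRIORITY FROM mytable
WHERE created_at < '2015-01-01'
LIMIT 5000;
```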

sa289