
I am trying to delete records from a table like

DELETE FROM table WHERE x=1 LIMIT 200000;

When I check my bin logs, I see this delete statement being broken into smaller chunks, like:

sql(0):
  336 records
sql(1):
  336 records

This is creating a spike in my replication. Any ideas why the bin-log is breaking things into smaller chunks? For the framework, I am using Laravel.

Daljeet Singh

1 Answer


MySQL DELETEs do not chunk, which is why chunking is done manually.

You're using row-based replication; statement-based replication would not incur this problem, although it might have other drawbacks - there's a discussion in the manual about pros and cons of RBR versus SBR.

Unless you're doing your chunking in Laravel as described here, there is no reason why you should get that behaviour. What might be happening is that you have very large rows and a very small max_allowed_packet, so the server is sending the delete events in chunks to fit them into the packet size:

Row-based replication currently sends all columns and column values for updated rows from the master to the slave, including values of columns that were not actually changed by the update. This means that, when you are replicating large column values using row-based replication, you must take care to set max_allowed_packet large enough to accommodate the largest row in any table to be replicated, even if you are replicating updates only, or you are inserting only relatively small values.
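To check whether the packet limit is the culprit, you can inspect and, if needed, raise the variable. The 64 MB value below is purely illustrative, not a recommendation; note also that on some MySQL versions the replica reads its own slave_max_allowed_packet setting, so check both sides:

```sql
-- Check the current limit on both the source and the replica
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Raise it if the largest replicated row does not fit
-- (64 MB here is an illustrative value only)
SET GLOBAL max_allowed_packet = 67108864;
```

Remember that SET GLOBAL only affects new connections, and the change is lost on restart unless it is also written to the server configuration.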

On the other hand, RBR plus a mass delete on a table (unless it has a primary key) is a bloodbath for replication; been there, done that, watched horrified as the slave went unresponsive. You want chunked deletion; probably the chunk size needs to be adjusted, or a delay added between chunks.
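A minimal sketch of chunked deletion, assuming the application re-issues the statement in a loop until no rows remain (the chunk size and delay are illustrative values to tune against your replication lag):

```sql
-- Run from the application or a script, in a loop:
-- repeat until the DELETE reports 0 rows affected.
DELETE FROM table WHERE x = 1 LIMIT 5000;

-- Optional: pause between chunks so the replica can catch up
DO SLEEP(1);
```

Smaller chunks produce smaller binlog events and shorter lock windows, at the cost of a longer total runtime; the delay trades throughput for replica responsiveness.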

For some methods on how to do so, see this answer.

LSerni