
I set up a MySQL read replica that is read-only. Both the master and the slave are running MySQL 5.6. The slave never gets written to directly, but I can't seem to keep it in sync for more than an hour or two. After running for a bit, I constantly encounter this type of error:

Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log

I then have to go through the process of recreating the slave from a MySQL dump, but no matter what I do I get this error again. Does anyone have a clue why this might be happening?

user209180

1 Answer


Depending on what engine(s) you are using, whether you are using multiple schemata, and the options you pass to mysqldump, you might not be getting a consistent dump.

If you have two schemata, let's say one named development and another named production, mysqldump locks the tables separately for each schema. That means while you're backing up the development schema the production schema is still writable and being updated.

Now when you take your dump of the two schemata and start replication, the two schemata are actually at different binlog positions. That means when you get Error_code 1032, the slave really doesn't have that key.

If all of the tables that you need to back up are InnoDB, you should look into the --single-transaction option to mysqldump. If you have a mix of InnoDB and MyISAM, then only the InnoDB tables are guaranteed to be consistent; the MyISAM tables can still be written to during the dump when you use this option.
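If that applies to you, a dump command along these lines is a reasonable sketch (the host, user, and output file are placeholders; --master-data=2 just records the binlog coordinates as a comment so you have them for CHANGE MASTER):

```
# Consistent snapshot of InnoDB tables without blocking writes.
# --master-data=2 writes the matching binlog file/position into the
# dump as a comment for use in CHANGE MASTER later.
mysqldump --single-transaction --master-data=2 --all-databases \
  --host=master.example.com --user=repl_admin -p > master_dump.sql
```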

If you have a mix of InnoDB and MyISAM, or are pure MyISAM, the best option (using mysqldump) is --lock-all-tables. That does exactly what it sounds like: nothing can be written until the dump is complete. The major drawback is that any application or website that depends on the database is blocked too, since it can't write.
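A sketch of the same dump taken with --lock-all-tables instead (again, host, user, and output file are placeholders):

```
# Takes a global read lock for the whole dump, so every table is
# consistent, but application writes block until it finishes.
mysqldump --lock-all-tables --master-data=2 --all-databases \
  --host=master.example.com --user=repl_admin -p > master_dump.sql
```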

The best option IMO is to move everything to InnoDB if it isn't already and use Percona XtraBackup. It still works fine with MyISAM tables as well, but it copies them with a different mechanism than the InnoDB tables (cp or rsync). It still needs to lock the tables while copying the MyISAM files, but it does its best to minimize that time. If you use the rsync method, the tool first copies all of your MyISAM tables, then locks the tables, and then copies any tables that have changed since the first pass. The lock only needs to be held during that second rsync.
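Roughly, seeding the slave that way looks something like this with the innobackupex wrapper (user, password, and paths are placeholders; --rsync enables the two-pass copy described above):

```
# Pass 1 streams files while the server stays writable; the lock is
# only held for the final re-sync of changed non-InnoDB files.
innobackupex --user=repl_admin --password=secret --rsync /backups/

# Make the copied InnoDB data consistent before using it to seed
# the slave's datadir.
innobackupex --apply-log /backups/<timestamped-dir>/
```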

  • Thanks for your detailed response. All my tables are InnoDB and I actually have tried both ways: using mysqldump (with --single-transaction) and innobackupex. No matter which way I do it, I can get the replication working properly at first (even verifying this by writing to the master), but shortly thereafter replication breaks. Is there anything else that could cause this to happen so consistently? – user209180 Mar 26 '14 at 03:43
  • I can't think of anything less obvious. Every time I've run into that it's been either a) putting the wrong binlog file/position in CHANGE MASTER, b) circular (aka multi-master) replication with both servers being written to (which doesn't seem to be your problem), or c) an inconsistent copy. If you can spare the downtime, you may want to simply rsync all of the MySQL files to the other box and build from that. MySQL *must* be shut down if you go that route. Did you make sure to use different server-ids on both servers? –  Mar 26 '14 at 04:13
  • Yes, the server-id is different. I am trying to create this read replica off a 3 node Percona cluster. I randomly chose one node from the cluster and built replication off of that for the entire database (including the mysql database). Could this possibly be causing issues? – user209180 Mar 26 '14 at 04:16
  • Another route I'd check: [log-slave-updates](http://dev.mysql.com/doc/refman/5.5/en/replication-options-slave.html#option_mysqld_log-slave-updates). If you don't have that set on the master (to the new slave) then writes to other cluster nodes will not be in the binlog. –  Mar 26 '14 at 04:44
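For reference, a minimal my.cnf sketch of the options discussed in these comments, on the cluster node the new slave replicates from (the id and log name are placeholders):

```
[mysqld]
server-id         = 1            # must be unique across the slave and every cluster node
log-bin           = mysql-bin    # binary logging must be on for the slave to replicate
log-slave-updates = 1            # also write events received from other cluster nodes to the binlog
```

Without log-slave-updates, only writes made directly on that node end up in its binlog, which would match replication breaking shortly after it starts.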