
I am trying to figure out how to set up replication & failover in a scenario with 4 servers (2 per location) where any server may assume the Master role. My initial scenario is the following:

  • 2 servers in location A (One Master, One Slave);
  • 2 servers in location B (Two Slaves).

For this I'm thinking of using the Master-Master Active-Passive configuration suggested in O'Reilly's "High Performance MySQL" on all of them, so each one can become a Master when needed.

If the Master "dies", the other server from location A assumes the Master role whenever possible. It will always have a higher priority than the servers in location B. A server in location B will only switch to Master if no server in location A is able to do so.

Since MySQL can't handle this automatically, I need some other way to implement it. I've already read about heartbeat and Maatkit. Is this the way to go? Has anyone used this in a similar scenario? Is there some other way to achieve this? Any pointers about failover will be appreciated.

I want to keep this as simple as possible, avoiding stuff such as DRBD. I'm not concerned about high availability, just a way to switch roles automatically without too much hassle.

I'm using SUSE Linux Enterprise 10 and MySQL 5.1.30-community.

Thanks in advance,

João

  • Four servers in two locations and you're not concerned with high availability? – Nate Sep 21 '10 at 15:14
  • I know it sounds strange but I'm not. Also this is not for a web application / site so I don't need it to be always available, some downtime is allowed. Currently switching masters is done manually and I'd like to implement an automatic way of doing this. – Joao Villa-Lobos Sep 21 '10 at 17:00

1 Answer


location A: two HA MySQL proxies (serv1 ip 192.168.0.1, serv2 ip 192.168.0.2)

location B: two MySQL data servers in master-master replication (serv1 ip 192.168.0.3, serv2 ip 192.168.0.4)

serv1 (192.168.0.3), in my.cnf under [mysqld]:
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
max_binlog_size = 200M
binlog_ignore_db = test
binlog_ignore_db = mysql
master-host = 192.168.0.4
master-user = replication
master-password = you_pass
master-port = 3306

mysql -u root -p
Enter password: 
>grant replication slave on *.* to 'replication'@'192.168.0.4' identified by 'you_pass';
>flush privileges;
>quit;
/etc/init.d/mysql restart

serv2 (192.168.0.4), in my.cnf under [mysqld]:
server-id = 2
log_bin = /var/log/mysql/mysql-bin.log
max_binlog_size = 200M
binlog_ignore_db = test
binlog_ignore_db = mysql
master-host = 192.168.0.3
master-user = replication
master-password = you_pass
master-port = 3306

mysql -u root -p
Enter password: 
>grant replication slave on *.* to 'replication'@'192.168.0.3' identified by 'you_pass';
>flush privileges;
>quit;
/etc/init.d/mysql restart
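As an alternative to the master-* lines in my.cnf (they are deprecated and were removed after the 5.1 series), the same replication coordinates can be set at runtime with CHANGE MASTER TO; a sketch for serv2, using the same hypothetical 'replication' account and placeholder password as above:

```sql
-- Run once on serv2 (192.168.0.4); the coordinates persist in master.info,
-- so the master-* options can then be dropped from my.cnf.
CHANGE MASTER TO
  MASTER_HOST = '192.168.0.3',
  MASTER_USER = 'replication',
  MASTER_PASSWORD = 'you_pass',
  MASTER_PORT = 3306;
START SLAVE;
```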

mysql -u root -p
Enter password:
>show slave status \G
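In the SHOW SLAVE STATUS output, the two fields to watch are Slave_IO_Running and Slave_SQL_Running; both must say Yes. A minimal sketch of a check you could script around that output (check_slave is a hypothetical helper, not a MySQL or heartbeat tool):

```shell
# Hypothetical helper: pass it the output of
#   mysql -u root -p -e 'SHOW SLAVE STATUS \G'
# and it reports whether both replication threads are running.
check_slave() {
  local out="$1"
  local io sql
  io=$(printf '%s\n' "$out" | awk -F': ' '/Slave_IO_Running/ {print $2}')
  sql=$(printf '%s\n' "$out" | awk -F': ' '/Slave_SQL_Running/ {print $2}')
  if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
    echo "replication OK"
  else
    echo "replication BROKEN (IO=$io, SQL=$sql)"
  fi
}

# Example with a captured sample:
sample="             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes"
check_slave "$sample"   # prints: replication OK
```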

On location A, install mysql-proxy and heartbeat on serv1 and serv2, then start the proxy:

mysql-proxy --proxy-backend-addresses=192.168.0.3 \
    --proxy-backend-addresses=192.168.0.4 \
    --proxy-address=:3306 \
    --daemon
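After starting the proxy, it is worth smoke-testing the path through it, e.g. mysql -h 127.0.0.1 -P 3306 ... -e 'SELECT @@server_id' to see which backend answered. Since the daemon may take a moment to come up, a small retry wrapper helps; wait_for below is a hypothetical helper, not part of mysql-proxy:

```shell
# Hypothetical helper: retry a command up to N times with a 2s pause,
# failing if it never succeeds.
wait_for() {
  local tries="$1"; shift
  local i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 2
  done
}

# e.g. wait_for 10 mysql -h 127.0.0.1 -P 3306 -u root -pyou_pass \
#        -e 'SELECT @@server_id'
```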

/etc/ha.d/authkeys:

auth 2
2 sha1 your-strong-password

/etc/ha.d/ha.cf:

logfile /var/log/ha-log 
logfacility local0
keepalive 2 
deadtime 10 
initdead 120
bcast eth0
udpport 694
auto_failback on
node mysql-proxy1
node mysql-proxy2

mysql-proxy1 and mysql-proxy2 are the node names here; they must match the output of uname -n on each proxy host.

/etc/ha.d/haresources:

mysql-proxy1 192.168.0.5

192.168.0.5 is the virtual MySQL address that clients connect to; heartbeat moves it to whichever proxy node is alive.
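To see which proxy node currently holds the virtual IP, check the interface addresses (ip -4 addr show eth0) for 192.168.0.5. A sketch of that check as a scriptable helper (holds_vip is hypothetical, shown here against a captured sample rather than a live interface):

```shell
# Hypothetical helper: given the VIP and the output of `ip -4 addr show`,
# report whether this node currently holds the virtual address.
holds_vip() {
  local vip="$1" addr_output="$2"
  if printf '%s\n' "$addr_output" | grep -q "inet ${vip}/"; then
    echo "this node holds ${vip}"
  else
    echo "VIP ${vip} is elsewhere"
  fi
}

sample="2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
    inet 192.168.0.1/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.5/24 brd 192.168.0.255 scope global secondary eth0"
holds_vip 192.168.0.5 "$sample"   # prints: this node holds 192.168.0.5
```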

bindbn
  • First of all, thanks for your reply bindbn. There are a couple of things I didn't understand (quite new to the replication world). In the config files you are making 192.168.0.4 master of 0.1 & 192.168.0.3 master of 0.2 right? Why won't they share the same Master, e.g. 192.168.0.1? I always thought that all servers would point to the same Master. How/where is the priority to become a Master, if possible, for server2 @ location A being handled? In ha.cf (node mysql-proxy1 node mysql-proxy2)? Once again thanks for your help. – Joao Villa-Lobos Sep 21 '10 at 17:21