
I have three MySQL nodes, listed below:

Master Address: 192.168.1.77:3306
Slave1 Address: 192.168.1.76:3306
Slave2 Address: 192.168.1.69:3306

I installed mysql-proxy version 0.8.3 on 192.168.1.67 and created the configuration below:

[mysql-proxy]
admin-username=proxy
admin-password=proxy
admin-lua-script=/local/software/mysql-proxy/lib/mysql-proxy/lua/admin.lua
proxy-read-only-backend-addresses = 192.168.1.76:3306,192.168.1.69:3306
proxy-backend-addresses=192.168.1.77:3306
proxy-lua-script=/local/software/mysql-proxy/share/doc/mysql-proxy/rw-splitting.lua
log-file=/local/software/mysql-proxy/log/mysql-proxy.log
plugin-dir=/local/software/mysql-proxy/lib/mysql-proxy/plugins
plugins=proxy,admin,debug,replicant
log-level=debug
keepalive=true

Then I edited rw-splitting.lua:

min_idle_connections = 1,
max_idle_connections = 2,
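
For reference, those two settings sit inside the script's global configuration table near the top of rw-splitting.lua; in my 0.8.x copy that block looks roughly like this (shown with my edited values; the exact surrounding code may differ between versions):

-- top of rw-splitting.lua (0.8.x), with the edited values
if not proxy.global.config.rwsplit then
    proxy.global.config.rwsplit = {
        min_idle_connections = 1,
        max_idle_connections = 2,
        is_debug = false
    }
end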

Then I started mysql-proxy like this:

./bin/mysql-proxy --defaults-file=mysql-proxy.cnf

I logged on to the proxy:

mysql -uproxy -ppassword -P4040 -h192.168.1.67

Then I executed SELECT statements over and over, opening several different connections to the proxy on port 4040, but the log shows that all the SELECT queries are sent to the same server, 76. Only if I shut down 76 does it send the queries to slave 69. I don't know why the load balancing isn't working. Did I make a mistake somewhere? Thank you in advance.

ywenbo

1 Answer


rw-splitting.lua seems to leave some of the implementation as an exercise for the reader. There is a comment saying 'pick a random backend', but I see no implementation of it, nor of a round-robin technique. The code seems to fill the backend servers from the top, moving on to the next in the array only when there are no idle connections.
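
For illustration, here is a condensed sketch of the selection loop in connect_server() from the 0.8.x script. I am writing it from memory, so treat the details as approximate:

-- condensed sketch of backend selection in connect_server()
for i = 1, #proxy.global.backends do
    local s        = proxy.global.backends[i]
    local pool     = s.pool
    local cur_idle = pool.users[""].cur_idle_connections

    -- the first backend (in array order) whose pool has fewer idle
    -- connections than min_idle_connections wins; nothing is random
    if s.state ~= proxy.BACKEND_STATE_DOWN and
       cur_idle < pool.min_idle_connections then
        proxy.connection.backend_ndx = i
        break
    end
end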

If there are always idle connections at the master, the current implementation prefers to go there. After that, it uses the first idle connection in the read-only backend list: in your case 76, until you shut it down, at which point it moved on to 69. I can't see why 77, the read/write backend, is not being preferred; possibly this is related to the number of idle connections available.

It would seem that seeking the backend with the lowest proxy.global.backends[i].connected_clients value, the number of connections currently active on that backend, would be a good way to prioritize the backend used.
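
A minimal sketch of that idea, assuming you adapt connect_server() in rw-splitting.lua; connected_clients is part of the backend API, but verify the field names against your version:

-- pick the read-only backend with the fewest active client connections
local best_ndx, best_clients = 0, nil
for i = 1, #proxy.global.backends do
    local s = proxy.global.backends[i]
    if s.type == proxy.BACKEND_TYPE_RO and
       s.state ~= proxy.BACKEND_STATE_DOWN and
       (best_clients == nil or s.connected_clients < best_clients) then
        best_ndx, best_clients = i, s.connected_clients
    end
end
if best_ndx > 0 then
    proxy.connection.backend_ndx = best_ndx
end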

You should also take a look at the balance module, lib/mysql-proxy/lua/proxy/balance.lua.

ClearCrescendo