I'm using haproxy to provide access to a redis cluster, but I'm running into problems when haproxy starts, because all servers are initially marked as up before the first health checks have run.
Is there a way to force haproxy to run the initial checks before accepting connections?
Here is my config:
global
    max-spread-checks 1

defaults
    mode tcp
    timeout connect 4s
    timeout server 10s
    timeout client 60s

frontend ft_redis
    mode tcp
    bind *:6379
    default_backend bk_redis

resolvers docker_resolver
    nameserver dns 127.0.0.11:53

backend bk_redis
    mode tcp
    option tcp-check
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis-backend-1 redis_node1:6379 maxconn 1024 check inter 1s downinter 2s rise 1 fall 1 resolvers docker_resolver resolve-prefer ipv4 on-marked-down shutdown-sessions
    server redis-backend-2 redis_node2:6379 maxconn 1024 check inter 1s downinter 2s rise 1 fall 1 resolvers docker_resolver resolve-prefer ipv4 on-marked-down shutdown-sessions
    server redis-backend-3 redis_node3:6379 maxconn 1024 check inter 1s downinter 2s rise 1 fall 1 resolvers docker_resolver resolve-prefer ipv4 on-marked-down shutdown-sessions
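
For context, the tcp-check sequence is just the usual "send INFO replication, look for role:master" probe. Here is a rough Python sketch of the same exchange done by hand against a single node (redis_node1:6379 comes from the config above; everything else is purely illustrative):

    import socket

    # Rough equivalent of the tcp-check above, probing one node directly.
    # redis_node1:6379 is taken from the config; adjust as needed.
    with socket.create_connection(("redis_node1", 6379), timeout=4) as sock:
        sock.sendall(b"info replication\r\n")   # same inline command the check sends
        reply = sock.recv(4096).decode("utf-8", "replace")
        print("master" if "role:master" in reply else "replica")

        sock.sendall(b"QUIT\r\n")               # close cleanly, like the check
        print(sock.recv(64).decode("utf-8", "replace").strip())  # Redis answers "+OK"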
This setup normally works fine and only sends connections to the current redis master.
But when starting/restarting haproxy I sometimes get errors like "READONLY You can't write against a read only slave." because all servers are initially marked as up, so a connection can land on a redis slave before the first check has run.
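
The symptom is easy to reproduce right after a restart with any client that writes through the frontend. A rough sketch using the redis-py client (assuming haproxy is reachable on 127.0.0.1:6379; adjust to your setup):

    import time
    import redis  # redis-py client

    # Connect through the haproxy frontend immediately after a restart.
    # 127.0.0.1:6379 assumes haproxy runs locally.
    r = redis.Redis(host="127.0.0.1", port=6379, socket_timeout=4)

    try:
        r.set("haproxy-test", int(time.time()))
        print("write succeeded - reached the master")
    except redis.exceptions.ReadOnlyError as exc:
        # All servers start as UP, so haproxy may have picked a slave
        # before the first tcp-check demoted it.
        print(f"write rejected: {exc}")

Once the first round of checks has completed, the slaves are marked down and writes reach the master again, so it only seems to be this initial window that causes trouble.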