
I was looking for a solution for graceful reload of HAProxy. I have an nginx server running which passes requests to HAProxy, and at times I reload the HAProxy configuration. But I am observing that at reload time all the existing connections get cut off, and the HAProxy backend queue shows 0 requests (taken from HAProxy's socket stats).

I am using the method mentioned in several blog posts and in the HAProxy documentation on this issue:

Reload :

haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf (</var/run/haproxy.pid)

Start :

haproxy -D  -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid 

I would be grateful if someone could suggest a solution. Below is my HAProxy config file:

global
  maxconn 64000
  ulimit-n 200000
  log             127.0.0.1       local0
  log             127.0.0.1       local1 notice
  spread-checks 5
  stats socket /etc/haproxy/stats

defaults
  log global
  mode http
  balance roundrobin
  maxconn 64000
  option abortonclose
  option httpclose
  retries 3
  option redispatch
  timeout client 30000
  timeout connect 30000
  timeout server 30000
  stats enable
  stats uri     /haproxy?stats
  stats realm   Haproxy\ Statistics
  stats auth    haproxy:stats
  timeout check 5000
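
For reference, the queue figures I mentioned come from the stats socket declared above; I read them with something like the following (socat here is just one way to talk to the socket, and qcur is the current-queue column in the CSV output):

# "show stat" dumps per-frontend/backend counters as CSV; qcur is the current queue
echo "show stat" | socat stdio /etc/haproxy/stats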
– vaibhav

2 Answers


The command you posted does not seem 100% correct to me. On my system, it reads:

haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf $(cat /var/run/haproxy.pid)

You mistyped the value of the last option, -sf.
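
For what it's worth, the missing character is just the $ of a shell command substitution. Assuming bash, the following two forms are equivalent; the second uses bash's $(<file) shortcut, which is presumably what the original command was aiming at:

# Both expand to the PID(s) stored in the pid file
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf $(cat /var/run/haproxy.pid)
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf $(</var/run/haproxy.pid)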

– Khaled

This may be expected, depending on your testing methodology. Although, as mentioned above, you also seem to be missing a $ near the end of your reload command.

When HAProxy does the 'reload', it starts a new HAProxy process, sends session data from the first process to the second down a Unix domain socket, the first process unbinds from the TCP ports it is listening on, and the second process binds to them. The first process then terminates. So the two processes do not share memory at any stage, and they do not seem to sync file descriptors (from my own testing, anyway). The main point of the reload is that persistence tables (such as the cookie-based hash) are maintained, so that clients will reconnect to the required backend server. However, it should be graceful in the sense that connections are drained from the first process rather than instantly dropped.

If you watch your process table while performing the reload, you should observe the above, unless things have changed since I tested (a few months ago).
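
A minimal way to watch this, assuming a Linux box with procps installed (the pid file path matches the commands in the question):

# Terminal 1: refresh the HAProxy process list twice a second
watch -n 0.5 'ps -o pid,ppid,etime,args -C haproxy'

# Terminal 2: trigger the reload and watch the old/new processes swap over
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf $(cat /var/run/haproxy.pid)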

– James Little
  • Hey James, thanks for the reply. The missing $ was a typo I made while posting the question; I have it in my command and the restart goes through. I have a question though: when you say "it should be graceful in the sense that connections are drained from the first process rather than instantly killed", you mean that the requests should complete and not fail? I am sending requests from ab to nginx, which sends requests to HAProxy, and I see failed requests, which I think are the requests in the queue at reload. And from your reply it seems that the queue going down to 0 is expected behavior. Am I right? – vaibhav Feb 19 '12 at 23:58
  • @vaibhav Well, an *established* connection should not be dropped, but remember that with HTTP a new connection is made for each request (unless you have keepalive enabled on client and browser; see the ab sketch after these comments). Do you see successful connections after a number of failed ones? (Try running ab for longer if not.) If so, this could be the period where the first process is unbinding from the port(s) and the second is binding - you can't have two processes bound to the same ports at the same time. – James Little Feb 26 '12 at 09:17
  • I meant client and *server* above. – James Little Feb 26 '12 at 10:41
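
On the testing discussed in the comments above, here is a sketch of an ab run with HTTP keepalive enabled via -k; the host and request counts are placeholders, not taken from the question:

# 10000 requests, 50 concurrent, reusing connections via HTTP KeepAlive
ab -n 10000 -c 50 -k http://your-nginx-host/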