Context
I'm trying to write a deployment script that implements the following idea:
- Start queuing new incoming requests while allowing current requests to finish
- Wait for all current requests to finish (I think this is called "draining")
- Run app-specific deployment script
- Process all requests that were queued in the first step and return haproxy to normal operation. haproxy should not drop any incoming connections; clients timing out is acceptable.
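For concreteness, here is a sketch of how those steps could be driven over haproxy's runtime API. This assumes an admin socket is exposed (via a `stats socket` line in the config) and that socat is installed; the socket path, the backend/server names, and the pause/resume command pair are placeholders — which pair of commands actually gives the queueing behaviour is exactly what I'm asking below. A DRY_RUN mode is included so the flow can be inspected without a live haproxy:

```shell
#!/bin/sh
# Sketch only: socket path, names, and the pause/resume commands are assumptions.
SOCKET="${HAPROXY_SOCKET:-/var/run/haproxy.sock}"
PAUSE_CMD="${PAUSE_CMD:-set server mybackend/myserver state drain}"
RESUME_CMD="${RESUME_CMD:-set server mybackend/myserver state ready}"

hap() {
  # Send one runtime API command to haproxy's admin socket.
  # With DRY_RUN set, just print the command so the flow can be previewed.
  if [ -n "$DRY_RUN" ]; then
    printf 'haproxy> %s\n' "$1"
  else
    printf '%s\n' "$1" | socat stdio "$SOCKET"
  fi
}

current_sessions() {
  # 'show stat' output is CSV; field 5 (scur) is the server's current
  # session count. In DRY_RUN mode pretend the server is already drained.
  if [ -n "$DRY_RUN" ]; then echo 0; return; fi
  printf 'show stat\n' | socat stdio "$SOCKET" |
    awk -F, '$1 == "mybackend" && $2 == "myserver" { print $5 }'
}

deploy_with_drain() {
  hap "$PAUSE_CMD"                      # 1. stop routing new requests
  while n="$(current_sessions)"; [ "${n:-0}" -gt 0 ]; do
    sleep 1                             # 2. wait for in-flight requests
  done
  ${DEPLOY_CMD:-true}                   # 3. app-specific deploy step goes here
  hap "$RESUME_CMD"                     # 4. back to normal
}

# Usage: deploy_with_drain   (or DRY_RUN=1 deploy_with_drain to preview)
```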
Question
Given this context, the haproxy docs suggest several ways to implement this:
- set server mybackend/myserver state drain, followed by set server mybackend/myserver state ready
- set maxconn frontend myfrontend 0, followed by set maxconn frontend myfrontend 1000
- set maxconn server mybackend/myserver 0, followed by set maxconn server mybackend/myserver 1000
Which of these is the correct way to achieve what I described above?
More context
This is probably related to https://serverfault.com/a/450983/117598 , but the following from the haproxy docs makes me want to re-confirm:
Sets the maximum per-process number of concurrent connections to <number>. It is equivalent to the command-line argument "-n". Proxies will stop accepting connections when this limit is reached. [..]
vs another, seemingly conflicting snippet, which describes the per-server "maxconn" parameter rather than the global one:
The "maxconn" parameter specifies the maximal number of concurrent connections that will be sent to this server. If the number of incoming concurrent requests goes higher than this value, they will be queued, waiting for a connection to be released. [..]
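To make the contrast concrete, here is a minimal config sketch (using the placeholder names from above) of the different places a connection limit can live — the global/frontend limits make haproxy stop accepting, while the per-server limit queues requests. Values are illustrative only:

```
global
    maxconn 10000        # per-process cap: haproxy stops *accepting* beyond this

frontend myfrontend
    bind :80
    maxconn 1000         # frontend cap: likewise stops accepting for this frontend
    default_backend mybackend

backend mybackend
    timeout queue 30s    # how long a queued request may wait before a 503
    server myserver 127.0.0.1:8080 maxconn 100   # server cap: excess requests are *queued*
```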