
I'm having issues with my HAProxy servers rejecting new connections (or timing them out) after a certain threshold. The proxy servers are AWS c5.large EC2 instances with 2 CPUs and 4 GB of RAM. The same configuration is used for both connection types on our site: one proxy handles WebSocket connections, which typically have between 2K-4K concurrent connections and a request rate of about 10/s; the other handles normal web traffic with nginx as the backend, with about 400-500 concurrent connections and a request rate of about 100-150/s. Typical CPU usage for both is about 3-5% on the haproxy process, with 2-3% of memory used by the WebSocket proxy (40-60 MB) and 1-3% by the web proxy (30-40 MB).

Per the attached config, the threads are mapped across both CPUs, with one process and two threads running. Both types of traffic are typically 95% (or higher) SSL. I've watched the proxy info using watch -n 1 'echo "show info" | socat unix:/run/haproxy/admin.sock -' to see whether I'm hitting any of my limits, which does not seem to be the case.
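For reference, here's roughly what I'm eyeballing in that output. The counter values in this snippet are made-up sample data, not from my box; the field names (ConnRate, ConnRateLimit, SessRate, MaxConnRate) are real "show info" counters:

```shell
# Filter the rate counters from a captured "show info" dump.
# The values below are illustrative sample data only.
sample='ConnRate: 12
ConnRateLimit: 1000
SessRate: 11
MaxConnRate: 998'
printf '%s\n' "$sample" | awk -F': ' '/Rate/ {print $1 "=" $2}'
```

ConnRate vs. ConnRateLimit is the comparison that matters when maxconnrate is set.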

The issues start during high-traffic periods, when our concurrent WebSocket connections get up to about 5K and the web request rate gets up to 400 requests/s. I mention both servers here because I know the config can handle the high concurrent connections and request rate, but I suspect some other resource limit is being reached that I'm missing. Under normal conditions everything works just fine; when the issues occur, clients see ERR_CONNECTION_TIMED_OUT (from Chrome) type errors. I never see any 502 errors, nor do I see any other process using more CPU or memory on the server. I'm also attaching some other possibly relevant configs, such as my limits and sysctl settings.

Any ideas what I might be missing? Am I reading top and ps aux | grep haproxy wrong and seeing the wrong CPU/memory usage? Am I missing some TCP connection limit? The backend servers (nginx/websocket) are being worked, but never seem to be taxed. We've load tested these with many more connections and much more traffic, and we are limited by the proxy long before we limit the backend servers.
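In case it helps, here's the rough file-descriptor math I used to rule out ulimit-n. The two-descriptors-per-connection factor is my assumption based on HAProxy proxying each client connection to a backend connection, not something I measured:

```shell
# Each proxied connection consumes roughly two file descriptors
# (one for the client side, one for the server side), plus a handful
# for listeners, health checks, and the stats socket.
maxconn=150000
fd_budget=$((2 * maxconn))
echo "$fd_budget"   # 300000, just under the configured ulimit-n of 300057
```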

Thanks a lot.

haproxy.cfg:

global
    ulimit-n 300057
    quiet
    maxconn 150000
    maxconnrate 1000
    nbproc 1
    nbthread 2
    cpu-map auto:1/1-2 0-1

    daemon
    stats socket /run/haproxy/admin.sock mode 600 level admin
    stats timeout 2m
    log 127.0.0.1:514 local0
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    ssl-default-bind-options no-sslv3 no-tlsv10
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL:!RC4

defaults
    maxconn 150000
    mode http
    log global
    option forwardfor
    timeout client 30s
    timeout server 120s
    timeout connect 10s
    timeout queue 60s
    timeout http-request 20s

frontend default_proxy
    option httplog
    bind :80
    bind :443 ssl crt /etc/haproxy/ssl.pem
    ... acl stuff which may route to a different backend
    ... acl for websocket traffic
    use_backend websocket if websocket_acl
    default_backend default_web

backend default_web
    log global
    option httpclose
    option http-server-close
    option checkcache
    balance roundrobin
    option httpchk HEAD /index.php HTTP/1.1\r\nHost:website.com
    server web1 192.168.1.2:80 check inter 6000 weight 1
    server web2 192.168.1.3:80 check inter 6000 weight 1

backend websocket
    #   no option checkcache
    option httpclose
    option http-server-close
    balance roundrobin
    server websocket-1 192.168.1.4:80 check inter 6000 weight 1
    server websocket-2 192.168.1.5:80 check inter 6000 weight 1

Output from haproxy -vv:

HA-Proxy version 1.8.23-1ppa1~xenial 2019/11/26
Copyright 2000-2019 Willy Tarreau <willy@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label
OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE2 version : 10.21 2016-01-12
PCRE2 library supports JIT : yes
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
    [SPOE] spoe
    [COMP] compression
    [TRACE] trace

limits.conf:

* soft nofile 120000
* soft nproc 120000

sysctl.conf:

net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_syncookies=1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 50000
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 50000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.core.netdev_max_backlog = 50000
fs.epoll.max_user_instances = 10000

Typical ps aux | grep haproxy output under a load of 330 concurrent connections and 80 req/s:

root      8122  4.5  1.2 159052 46200 ?        Ssl  Jan28  40:56 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 29790
root     12893  0.0  0.3  49720 12832 ?        Ss   Jan21   0:00 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 29790

and the OS is Ubuntu 16.04.

Dbl0McJim

1 Answer


Turns out the answer was staring me in the face the whole time. I had set maxconnrate to 1,000. However, show info was showing me a lower connection rate of between 10-15, so I didn't think I was hitting that limit. But I was sustaining a maximum of 500 requests/s (confirmed by my backend servers), with each request requiring one connection to the client and a second to the backend. Thus, I was using 1,000 connections per second.
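Spelled out as arithmetic, with the two-connections-per-request factor being the key point:

```shell
# One request = one frontend (client) connection + one backend connection,
# so the connection rate counted against maxconnrate is roughly twice
# the request rate.
req_rate=500               # requests/s confirmed on the backend servers
conns_per_req=2            # frontend + backend
conn_rate=$((req_rate * conns_per_req))
echo "$conn_rate"          # 1000/s, exactly the configured maxconnrate
```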

I removed this limit, and I was able to sustain a higher connection rate.
