
nginx version: nginx/1.9.3, gunicorn version: 19.7.1

I have a small Flask API running through gunicorn behind nginx. When I load test directly against gunicorn everything works fine, but as soon as I point the load test at nginx I get a very high number of TIME_WAIT sockets on the nginx server. The gunicorn box is fine. Here are the configs:
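For what it's worth, I'm counting the TIME_WAIT sockets on the nginx box with something like the following; the exact command doesn't matter, only the count it reports:

netstat -ant | grep -c TIME_WAIT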

Gunicorn:

bind = '0.0.0.0:7030'
workers = 10
threads = 1
daemon = True
DEBUG = "True"

Nginx (relevant chunks):

upstream api {
    keepalive 32;
    server box1:7030;
    server box2:7030;
}


server {
    listen       7077;

    server_name  localhost;

    proxy_set_header   Host $host;
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Host $server_name;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout  1;

    location / {
        proxy_next_upstream     error timeout http_500 http_404 http_502;
        proxy_pass              http://api;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

I've been testing and tweaking the configs, but I don't see any change in the number of open sockets. I know there are several OS settings that are commonly recommended for this, such as ip_local_port_range and tcp_tw_recycle / tcp_tw_reuse, but I'm working in one of those environments where I have to share the server and can't change those settings without a long lead time.
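For reference, these are the sort of OS-level settings I'm referring to; the values below are only illustrative examples, nothing here has been applied:

# /etc/sysctl.conf (illustrative values only, not applied on this box)
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
# net.ipv4.tcp_tw_recycle also exists, but I can't change any of these on a shared server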

Can I do anything on the nginx / gunicorn side? Note that the server running gunicorn is not showing many open sockets at all. Is nginx expecting something from gunicorn / Flask / the API in order to keep connections open and reuse existing sockets?
