5

We are connecting to a system wherein 4 ports are exposed to serve gRPC requests. We used NGINX as a load balancer to forward the 4 client gRPC requests with the configuration below:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
        worker_connections 768;
        # multi_accept on;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent"';

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }
    upstream backend {
        #least_conn;
        server localhost:9000 weight=1 max_conns=1;
        server localhost:9001 weight=1 max_conns=1;
        server localhost:9002 weight=1 max_conns=1;
        server localhost:9003 weight=1 max_conns=1;
    }

    server {
        listen 80 http2;

        access_log /tmp/access.log main;
        error_log /tmp/error.log error;

        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header Host $http_host;

        location / {
                #keepalive_timeout 0;
                grpc_pass grpc://backend;
                grpc_pass_header userid;
                grpc_pass_header transid;
        }
    }
}

It is observed that sometimes the 4 client requests go to all 4 ports, but other times (say 30% of the time) they go to only 2 or 3 ports. It seems the default round robin is not happening in NGINX as expected. We tried all the possibilities like max_conns, least_conn, and weight, but no luck.
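For what it's worth, one way to see the actual distribution is to log which backend served each request. NGINX exposes this as the `$upstream_addr` variable; a sketch of an extended log format (the `upstreams` name is just a label chosen here, not part of the original config):

```nginx
# Include $upstream_addr so each access-log line shows which backend
# (localhost:9000..9003) handled the request; tail /tmp/access.log
# to check how evenly requests are being distributed.
log_format upstreams '$remote_addr [$time_local] "$request" '
                     '$status upstream=$upstream_addr';
access_log /tmp/access.log upstreams;
```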

It seems I have encountered the same issue as described in the links below:

https://serverfault.com/questions/895116/nginx-round-robin-nor-exactly-round-robin
https://stackoverflow.com/questions/40859396/how-to-test-load-balancing-in-nginx

While going through Quora, I found a suggestion that the "fair" module for nginx would resolve this:

    "The Nginx fair proxy balancer enhances the standard round-robin load
    balancer provided with Nginx so that it will track busy back end servers
    (e.g. Thin, Ebb, Mongrel) and balance the load to non-busy server processes."

https://www.quora.com/What-is-the-best-way-to-get-Nginx-to-do-smart-load-balancing

I tried building NGINX from source with the "fair" module but encountered so many issues that I could not even start NGINX. Can anyone help with this issue?

Mahesh

2 Answers

4

We got the answer! We just changed "worker_processes auto;" to "worker_processes 1;", and now it is working fine.

All the requests are now load balanced properly. Our impression is that with more than a single worker, each worker process keeps its own round-robin state, so multiple workers may end up sending requests to the same port.

Mahesh
  • You need to specify a shared memory `zone` under your `upstream` section so the workers can share state - I had the same issue and it's not clear until you re-read the `max_conns` [directive documentation](http://nginx.org/en/docs/stream/ngx_stream_upstream_module.html#max_conns) which says _"If the server group does not reside in the shared memory, the limitation works per each worker process"_. It's working for you because you only have 1 worker process now and no need to share state, but if you changed it to e.g. 2 or `auto` you'd have the same problem until you add a shared memory zone. – jaygooby Feb 22 '21 at 15:07
  • Not working for me. I've set `worker_processes`, `weight`, and `max_conns` all to 1 and declared a `zone` in the upstream, and I'm still seeing the round-robin go A, A, B, B, A, A, B, B. :/ – Cory Klein Jun 22 '22 at 21:12
  • @jaygooby this doesn't work – A X Jun 23 '22 at 05:45
  • This worker_processes 1 idea also doesn't work – A X Jun 23 '22 at 05:46
  • @CoryKlein and a-x can you post your Nginx configs? – jaygooby Jun 24 '22 at 16:51
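Following up on the shared-memory point in the first comment above: a sketch of the upstream block with a `zone` directive, so that all worker processes share the load-balancing state (the zone name `backend_zone` and the 64k size are arbitrary choices here, not from the original config; note that one commenter reports this still did not fix the pattern in their setup):

```nginx
upstream backend {
    # Shared memory zone: keeps the group's state (including max_conns
    # accounting) shared across worker processes instead of per-worker.
    zone backend_zone 64k;
    server localhost:9000 weight=1 max_conns=1;
    server localhost:9001 weight=1 max_conns=1;
    server localhost:9002 weight=1 max_conns=1;
    server localhost:9003 weight=1 max_conns=1;
}
```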
0

I don't know exactly why this is happening, but it may have something to do with the browser. I encountered the same problem when I was using the browser to send the requests; when I sent the requests from the terminal using curl, it worked fine.

Maverick