
We use Nginx as a load balancer for our WebSocket application. Every backend server keeps session state, so every request from a given client must be forwarded to the same server. We use the ip_hash directive to achieve this:

upstream app {
    ip_hash;
    server 1;
}

The problem appears when we want to add another backend server:

upstream app {
    ip_hash;
    server 1;
    server 2;
}

New connections are distributed between server 1 and server 2, but that is not what we need in this situation: the load on server 1 continues to increase. We still need sticky sessions, but with the least_conn algorithm enabled as well, so that our two servers receive approximately equal load.

We also considered using nginx-sticky-module, but the documentation says that if no sticky cookie is available it falls back to Nginx's default round-robin algorithm, so it does not solve the problem either.

So the question is: can we combine sticky-session and least-connections logic in Nginx? Do you know of other load balancers that solve this problem?

Alex Emelin
  • Perhaps this should be moved to serverfault to get an answer? – Collector Aug 14 '16 at 04:37
  • Interesting question is why "load on server 1 continues to increase" - could it be the case that the majority of your users sit behind the same or a few NATs? In this case hashing on the source IP is just not efficient and you may consider using a more sophisticated key via the `hash` directive as opposed to `ip_hash`. For instance, you may wish to add some user-specific URI part or parameter to the key... – wick Oct 07 '17 at 18:47
  • Useful answers, but I'm not sure they've answered the question originally asked. Alex? – Dave Child Oct 14 '17 at 11:35
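
A sketch of the `hash`-directive idea from the comment above, assuming clients send a user-identifying query argument (the `user` parameter and backend addresses here are hypothetical):

    # Hash on client IP plus a user-specific query argument, so that
    # clients behind the same NAT can map to different backends.
    # "$arg_user" assumes requests carry a ?user=... parameter.
    upstream app {
        hash "${remote_addr}${arg_user}" consistent;
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
    }

Note this still hashes rather than balancing by least connections; it only spreads the key space more evenly.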

2 Answers


Using the split_clients module could probably help:

upstream app {
    ip_hash;
    server 127.0.0.1:8001;
}

upstream app_new {
    ip_hash;
    server 127.0.0.1:8002;
}

split_clients "${remote_addr}AAA" $upstream_app {
    50% app_new;
    *   app;
}

This will split your traffic and create the variable $upstream_app, which you can use like:

server {
    location /some/path/ {
        proxy_pass http://$upstream_app;
    }
}

This is a workaround for the lack of least_conn combined with sticky sessions. The "downside" is that if more servers need to be added, a new split needs to be created, for example:

split_clients "${remote_addr}AAA" $upstream_app {
    30% app_another_server;
    30% app_new;
    *   app;
}

For testing:

for x in {1..10}; do \
  curl "0:8080?token=$(LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1)"; done

More info about this module can be found in this article (Performing A/B testing).

nbari
  • Note, `split_clients` will use consistent hashing for load balancing which may or may not be better than round-robin, but is NOT what op asks (least_conn based load balancing) – wick Oct 07 '17 at 18:40

You can easily achieve this using HAProxy, and I suggest going through its documentation thoroughly to see how your current setup could benefit.

With HAProxy, you'd have something like:

backend nodes
    # Other options omitted for brevity
    balance leastconn            # new clients go to the least-loaded server
    cookie SRV_ID prefix
    server web01 127.0.0.1:9000 cookie web01 check
    server web02 127.0.0.1:9001 cookie web02 check
    server web03 127.0.0.1:9002 cookie web03 check

This simply means that the proxy tracks requests to and from the servers using a cookie: a client's first request is balanced by least connections, and subsequent requests stick to the server named in the cookie.

However, if you don't want to use HAProxy, I'd suggest you change your session implementation to use an in-memory store such as Redis or Memcached. This way, you can use leastconn or any other algorithm without worrying about sessions.
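
A minimal sketch of that shared-session idea, using a plain dict as a stand-in for Redis (in a real deployment this would be a client such as redis-py, and the key scheme here is hypothetical):

```python
import json
import uuid

# Stand-in for a shared store such as Redis; in production this would be
# a redis.Redis() client that every backend connects to.
shared_store = {}

def save_session(store, data):
    """Create a session in the shared store and return its ID."""
    session_id = str(uuid.uuid4())
    store[f"session:{session_id}"] = json.dumps(data)
    return session_id

def load_session(store, session_id):
    """Any backend can read the session, so stickiness is unnecessary."""
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw is not None else None

# Backend "web01" writes the session...
sid = save_session(shared_store, {"user": "alex", "logged_in": True})

# ...and backend "web02", handling the next request, reads the same state.
print(load_session(shared_store, sid)["user"])  # prints "alex"
```

Because no backend holds client state locally, the load balancer is free to pick any server per request.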

Chibueze Opata