
More specifically, will this work?

upstream backend {
    hash $request_uri consistent;

    server backend1.example.com weight=1;
    server backend2.example.com weight=2;
}

Will backend2.example.com receive twice as much traffic?

Also, what happens if a weight is changed or another server is added to the mix? Will the "only few keys will be remapped" behaviour still hold?

The optional consistent parameter of the hash directive enables ketama consistent hash load balancing. Requests will be evenly distributed across all upstream servers based on the user-defined hashed key value. If an upstream server is added to or removed from an upstream group, only few keys will be remapped which will minimize cache misses in case of load balancing cache servers and other applications that accumulate state.

from https://www.nginx.com/resources/admin-guide/load-balancer/
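
To make the behaviour I'm asking about concrete, here is a rough Python sketch of ketama-style consistent hashing. It is not nginx's actual implementation; the server names, the 160 points per server, and the use of MD5 are purely illustrative assumptions. The point it demonstrates: when a third server is added, only the keys that now land on the new server move, and nothing is reshuffled between the existing two.

import hashlib
from bisect import bisect

def build_ring(servers, points_per_server=160):
    # Each server gets points_per_server pseudo-random points on the ring.
    ring = []
    for server in servers:
        for i in range(points_per_server):
            point = int(hashlib.md5(f"{server}-{i}".encode()).hexdigest(), 16)
            ring.append((point, server))
    return sorted(ring)

def lookup(ring, key):
    # A key maps to the first ring point at or after its own hash, wrapping around.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return ring[bisect(ring, (h,)) % len(ring)][1]

keys = [f"/page/{i}" for i in range(10000)]
before = build_ring(["backend1.example.com", "backend2.example.com"])
after = build_ring(["backend1.example.com", "backend2.example.com",
                    "backend3.example.com"])

moved = sum(lookup(before, k) != lookup(after, k) for k in keys)
print(f"{moved} of {len(keys)} keys remapped")
# Roughly a third of the keys move, and every moved key goes to the new
# backend3; the mapping between backend1 and backend2 is left untouched.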

mihaic
    I've experimented a bit with this and it looks like the weight attribute is respected. `backend2.example.com` does indeed receive double the traffic (well, of course, taking into account also the request_uri). Still not sure about the remapping. – mihaic Mar 08 '17 at 11:36
  • op, any update on this? – adiggo Jan 05 '18 at 20:08

1 Answer


In this configuration the consistent hash matters more than the weight.

In other words, if an upstream block defines both weights and a consistent hash, requests are routed by the consistent hash first.

The hash points, however, are distributed across the servers according to their weights.

upstream consistent_test {
    server consistent_test.example.ru:80 weight=90;
    server consistent_test2.example.ru:80 weight=10;
    hash $arg_consistent consistent;
}
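
The same kind of sketch (again an assumption about the mechanism, not nginx's real code) shows how weights can be honoured on a consistent-hash ring: a server simply gets weight-times as many ring points, so it ends up owning a proportional share of the keys.

import hashlib
from bisect import bisect
from collections import Counter

def build_weighted_ring(servers, points_per_weight=160):
    # A server with weight N gets N times as many points on the ring.
    ring = []
    for server, weight in servers.items():
        for i in range(points_per_weight * weight):
            point = int(hashlib.md5(f"{server}-{i}".encode()).hexdigest(), 16)
            ring.append((point, server))
    return sorted(ring)

def lookup(ring, key):
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return ring[bisect(ring, (h,)) % len(ring)][1]

ring = build_weighted_ring({"consistent_test.example.ru": 90,
                            "consistent_test2.example.ru": 10})
counts = Counter(lookup(ring, f"session-{i}") for i in range(10000))
print(counts)
# The weight=90 server ends up with roughly nine times as many keys as the
# weight=10 one, while each individual key still maps to a stable server.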

Experiment

1) Default state

upstream balancer_test {
    hash $arg_someid consistent;
    server server1.example.ru:8080;
    server server2.example.ru:8080;
    server server3.example.ru:8080 down;
}

Request hashes pinned to hosts:

server1.example.ru ==> 535
server2.example.ru ==> 462
server3.example.ru ==> 0
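
The answer does not say how these counts were collected; one way to reproduce them (an assumption, not necessarily the author's method) is to log the hash key together with $upstream_addr and count the distinct keys that land on each upstream:

from collections import defaultdict

# Assumes an access log written with a hypothetical log_format such as:
#   log_format keymap '$arg_someid $upstream_addr';
#   access_log /var/log/nginx/keymap.log keymap;
keys_per_upstream = defaultdict(set)
with open("/var/log/nginx/keymap.log") as log:   # path is an assumption
    for line in log:
        parts = line.split()
        if len(parts) != 2:   # skip requests with no key or with upstream retries
            continue
        key, upstream = parts
        keys_per_upstream[upstream].add(key)

for upstream, keys in sorted(keys_per_upstream.items()):
    print(upstream, "==>", len(keys))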

2) First step: enable the node and set the weights

upstream balancer_test {
    hash $api_sessionid consistent;
    server server1.example.ru:8080 weight=250;
    server server2.example.ru:8080 weight=500;
    server server3.example.ru:8080 weight=250;
}

Request hashes pinned to hosts:

server1.example.ru:8080 ==> 263
server2.example.ru:8080 ==> 473
server3.example.ru:8080 ==> 254
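
A quick arithmetic check on these figures: with weights 250/500/250 the expected split is 25% / 50% / 25%, and the counts above come reasonably close.

observed = {"server1.example.ru:8080": 263,
            "server2.example.ru:8080": 473,
            "server3.example.ru:8080": 254}
weights = {"server1.example.ru:8080": 250,
           "server2.example.ru:8080": 500,
           "server3.example.ru:8080": 250}

total_observed = sum(observed.values())   # 990
total_weight = sum(weights.values())      # 1000

for name in observed:
    print(f"{name}: observed {observed[name] / total_observed:.1%}, "
          f"expected {weights[name] / total_weight:.1%}")
# server1.example.ru:8080: observed 26.6%, expected 25.0%
# server2.example.ru:8080: observed 47.8%, expected 50.0%
# server3.example.ru:8080: observed 25.7%, expected 25.0%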

3) Second step: finish moving the traffic over and disable the old node

upstream balancer_test {
    hash $api_sessionid consistent;
    server server1.example.ru:8080 down;
    server server2.example.ru:8080;
    server server3.example.ru:8080;
}

Request hashes pinned to hosts:

server1.example.ru:8080 ==> 0
server2.example.ru:8080 ==> 533
server3.example.ru:8080 ==> 464

Summary per server:

server1.example.ru: before = 463, on step 2 = 533, hash hits = 306
server2.example.ru: before = 536, on step 1 = 263, hash hits = 148
server3.example.ru: before = 255, on step 1 = 464, hash hits = 115

therb1