I am benchmarking nginx/node.js topologies with the following scenarios:
- Benchmark a single node.js server directly
- Benchmark 2 node.js servers behind nginx (RR-load balanced)
For both benchmarks, "wrk" is used with the following configuration (12 threads, 20 keep-alive connections, 20-second duration, 2-second timeout):

    wrk -t12 -c20 -d20s --timeout 2s
All node.js instances are identical. On each HTTP GET request, they loop over a given number "n" of iterations, incrementing a variable on every pass.
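For reference, a minimal sketch of what such an instance could look like (the actual server code is not shown here; the port and the way "n" is supplied are assumptions):

    const http = require('http');

    // Placeholders: the question does not show how the port and "n"
    // are actually configured.
    const port = Number(process.argv[2]) || 3001;
    const n = Number(process.argv[3]) || 1e6;

    http.createServer((req, res) => {
      let counter = 0;
      for (let i = 0; i < n; i++) {
        counter++; // CPU-bound busy work, as described above
      }
      res.end(String(counter));
    }).listen(port);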
When I run the test cases, I get the somewhat surprising results outlined below. I do not understand why the dual-node.js setup (topology 2) performs worse at 1 million iterations - it even falls behind the same 1-million-iteration test on topology 1:
1037 req/s (single) vs. 813 req/s (LB)
I certainly expect a bit of overhead in the opposite direction, since the single setup does not have nginx in front of the node.js instance - but these results still seem strange.
The runs with 10 and 5 million iterations look fine: with two backends, throughput increases as expected.
Is there a reasonable explanation for that behavior?
The test is executed on a single computer; each node.js instance is listening on a different port.
Nginx uses a standard configuration, with nothing beyond the following (a sketch of the config is shown after this list):
- port 80
- 2 upstream servers
- proxy_pass on the "/" route
- worker_connections set to 1024, the default (increasing it does not change the results)
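A minimal sketch of such a configuration (the upstream name and the backend ports 3001/3002 are placeholders; round-robin is nginx's default balancing method, so it needs no explicit directive):

    events {
        worker_connections 1024;          # default value
    }

    http {
        upstream node_backends {          # placeholder name
            server 127.0.0.1:3001;        # node.js instance 1 (port assumed)
            server 127.0.0.1:3002;        # node.js instance 2 (port assumed)
        }

        server {
            listen 80;
            location / {
                proxy_pass http://node_backends;
            }
        }
    }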
Scenario 1 (single node.js server):

    n [millions]    req/s    avg/max latency [ms]    requests
    10              134      87.81 / 166.28          2633
    5               271      44.12 / 88.48           5413
    1               1037     11.48 / 24.99           20049
Scenario 2 (nginx as load balancer in front of 2 node.js servers):

    n [millions]    req/s    avg/max latency [ms]    requests
    10              220      51.95 / 124.87          4512
    5               431      27.79 / 152.93          8376
    1               813      6.85 / 35.64            16156    <-- ???