
I'm using wrk to test cluster performance; both wrk and the pod (nginx) are on the same VM.

1. wrk -> podIp

./wrk -H "Connection: Close" -t 4 -c 300 -d30  http://{podIp}:80
Running 30s test @ http://{podIp}:80
  4 threads and 300 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    20.75ms   24.25ms 838.73ms   97.79%
    Req/Sec     2.16k   450.42     3.49k    69.23%
  258686 requests in 30.07s, 208.46MB read
  Socket errors: connect 0, read 0, write 5, timeout 0
Requests/sec:   8603.95
Transfer/sec:      6.93MB
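As a sanity check on how wrk aggregates its output, the total request count divided by the measured duration should reproduce the reported Requests/sec (plain arithmetic on the run above, not part of the original test):

```python
# Sanity check: wrk's aggregate Requests/sec should equal
# total requests divided by the measured test duration.
requests = 258686
duration_s = 30.07
reported_rps = 8603.95

computed_rps = requests / duration_s
print(f"computed: {computed_rps:.2f} req/s vs reported: {reported_rps:.2f}")

# wrk uses a more precise internal duration, so allow a small tolerance.
assert abs(computed_rps - reported_rps) / reported_rps < 0.01
```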

2. wrk -> ClusterIp -> pod (only one pod)

~12% performance decrease (8603.95 -> 7571.81 Requests/sec)

./wrk -H "Connection: Close" -t 4 -c 300 -d30  http://{ClusterIp}:80
Running 30s test @ http://{ClusterIp}:80
  4 threads and 300 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    29.38ms   14.66ms 249.18ms   90.07%
    Req/Sec     1.90k   351.30     3.31k    72.35%
  227505 requests in 30.05s, 183.34MB read
Requests/sec:   7571.81
Transfer/sec:      6.10MB

3. wrk -> ClusterIp -> pod (2 pods)

~31% performance decrease relative to linear scaling. In theory, Requests/sec should be close to 17206 (8603 * 2).

./wrk -H "Connection: Close" -t 4 -c 300 -d30  http://{ClusterIp}:80
Running 30s test @ http://{ClusterIp}:80
  4 threads and 300 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    13.15ms    8.93ms 106.11ms   73.09%
    Req/Sec     3.00k     1.04k    6.32k    68.75%
  356342 requests in 30.10s, 287.16MB read
Requests/sec:  11837.60
Transfer/sec:      9.54MB
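The degradation percentages quoted above follow directly from the reported Requests/sec figures; working them out explicitly:

```python
pod_direct_rps = 8603.95     # run 1: wrk -> podIp
cluster_1pod_rps = 7571.81   # run 2: wrk -> ClusterIp -> 1 pod
cluster_2pod_rps = 11837.60  # run 3: wrk -> ClusterIp -> 2 pods

# Single-pod service: drop relative to hitting the pod directly.
drop_1pod = (pod_direct_rps - cluster_1pod_rps) / pod_direct_rps

# Two-pod service: drop relative to ideal linear scaling (2x one pod).
expected_2pod_rps = 2 * pod_direct_rps
drop_2pod = (expected_2pod_rps - cluster_2pod_rps) / expected_2pod_rps

print(f"1-pod ClusterIP drop: {drop_1pod:.1%}")  # ~12%
print(f"2-pod ClusterIP drop: {drop_2pod:.1%}")  # ~31%
```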

My configuration in /etc/sysctl.conf:

net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_reuse = 1

And in /etc/security/limits.conf:

* soft nproc 102400
* hard nproc 102400
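Since `-H "Connection: Close"` forces a new TCP connection per request, file descriptors are consumed heavily on both the wrk and nginx sides. The limits above only raise the process count (nproc); a common companion entry raises the open-file limit as well (a suggested addition, not from the original config; values are illustrative):

```
# /etc/security/limits.conf -- also raise open file descriptors,
# since each in-flight TCP connection consumes one fd
* soft nofile 102400
* hard nofile 102400
```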
  • Any particular reason you need wrk and the pod (nginx) on the same VM? I am guessing you want to eliminate the network bottleneck. In any case, as you increase load, the pod and wrk will compete for resources. It would be good to see the VM's CPU and memory charts. – Parth Mehta Feb 07 '20 at 10:16
