
I have a requirement to create a product that should support 40k concurrent users per second (I am new to working on concurrency).

To achieve this, I developed a hello-world Spring Boot project with:

  • Spring Boot 1.5.9
  • Jetty 9.4.15
  • a REST controller with a single GET endpoint

The controller code:

@RestController
@RequestMapping("/home")          // matches the /home/ path hit by the benchmarks below
public class HomeController {     // class name is illustrative
    @GetMapping
    public String index() {
        return "Greetings from Spring Boot!";
    }
}

The app runs on an HPE DL360 Gen10 machine.

Then I benchmarked it using ApacheBench (ab).

75 concurrent users:

ab -t 120 -n 1000000 -c 75 http://10.93.243.87:9000/home/
Server Software:
Server Hostname:        10.93.243.87
Server Port:            9000

Document Path:          /home/
Document Length:        27 bytes

Concurrency Level:      75
Time taken for tests:   37.184 seconds
Complete requests:      1000000
Failed requests:        0
Write errors:           0
Total transferred:      143000000 bytes
HTML transferred:       27000000 bytes
Requests per second:    26893.28 [#/sec] (mean)
Time per request:       2.789 [ms] (mean)
Time per request:       0.037 [ms] (mean, across all concurrent requests)
Transfer rate:          3755.61 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1  23.5      0    3006
Processing:     0    2   7.8      1     404
Waiting:        0    2   7.8      1     404
Total:          0    3  24.9      2    3007

100 concurrent users:

ab -t 120 -n 1000000 -c 100 http://10.93.243.87:9000/home/
Server Software:
Server Hostname:        10.93.243.87
Server Port:            9000

Document Path:          /home/
Document Length:        27 bytes

Concurrency Level:      100
Time taken for tests:   36.708 seconds
Complete requests:      1000000
Failed requests:        0
Write errors:           0
Total transferred:      143000000 bytes
HTML transferred:       27000000 bytes
Requests per second:    27241.77 [#/sec] (mean)
Time per request:       3.671 [ms] (mean)
Time per request:       0.037 [ms] (mean, across all concurrent requests)
Transfer rate:          3804.27 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2  35.7      1    3007
Processing:     0    2   9.4      1     405
Waiting:        0    2   9.4      1     405
Total:          0    4  37.0      2    3009

500 concurrent users:

ab -t 120 -n 1000000 -c 500 http://10.93.243.87:9000/home/
Server Software:
Server Hostname:        10.93.243.87
Server Port:            9000

Document Path:          /home/
Document Length:        27 bytes

Concurrency Level:      500
Time taken for tests:   36.222 seconds
Complete requests:      1000000
Failed requests:        0
Write errors:           0
Total transferred:      143000000 bytes
HTML transferred:       27000000 bytes
Requests per second:    27607.83 [#/sec] (mean)
Time per request:       18.111 [ms] (mean)
Time per request:       0.036 [ms] (mean, across all concurrent requests)
Transfer rate:          3855.39 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   14 126.2      1    7015
Processing:     0    4  22.3      1     811
Waiting:        0    3  22.3      1     810
Total:          0   18 129.2      2    7018

1000 concurrent users:

ab -t 120 -n 1000000 -c 1000 http://10.93.243.87:9000/home/
Server Software:
Server Hostname:        10.93.243.87
Server Port:            9000

Document Path:          /home/
Document Length:        27 bytes

Concurrency Level:      1000
Time taken for tests:   36.534 seconds
Complete requests:      1000000
Failed requests:        0
Write errors:           0
Total transferred:      143000000 bytes
HTML transferred:       27000000 bytes
Requests per second:    27372.09 [#/sec] (mean)
Time per request:       36.534 [ms] (mean)
Time per request:       0.037 [ms] (mean, across all concurrent requests)
Transfer rate:          3822.47 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   30 190.8      1    7015
Processing:     0    6  31.4      2    1613
Waiting:        0    5  31.4      1    1613
Total:          0   36 195.5      2    7018

From the above test runs, I achieved ~27K requests per second with just 75 concurrent users, but it looks like increasing the number of users also increases the latency. We can also clearly see that the connect time keeps growing.

My requirement is for the application to support 40k concurrent users (assume each is using their own separate browser), with every request finishing within 250 milliseconds.
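
As a rough back-of-envelope (my own assumption: each of those users keeps at most one request in flight at a time), 40,000 concurrent requests each completing within 250 ms works out to about 40,000 / 0.25 s = 160,000 requests per second at peak, i.e. roughly six times the ~27K requests per second a single instance sustained above.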

Please help me with this.

  • Have you instrumented your code yet? See where most of your time is spent. Where is your bottleneck (CPU? memory? GC? disk? network?)? What is your server configuration like (thread pool? connectors? etc.)? What typical tweaks have you performed on your OS for high load? Also be very careful with your [choice of benchmark](https://webtide.com/lies-damned-lies-and-benchmarks-2/). – Joakim Erdfelt Apr 02 '19 at 13:17
  • Yes, I have tweaked my OS by applying https://www.eclipse.org/jetty/documentation/current/high-load.html – Raashith Apr 02 '19 at 14:00
  • After investing a lot of time, I found the following: 1. Hyper-threading is enabled in the VM where the application is running, i.e., 8 physical cores become 8 physical + 8 logical = 16 cores with hyper-threading. 2. During load testing I can see only 800% to 900% CPU used by the java process, i.e., the java process uses 8 to 9 cores and not more, while overall system CPU stays at 40% to 50%. Now the question is: why does Java not use up to 1500% to 1600% of the CPU? – Raashith Apr 10 '19 at 07:09

1 Answer


I am not a grand wizard on this topic myself, but here is some advice:

  • There is a hard limit on how many requests a single instance can handle, so if you want to support a lot of users you will need more instances.
  • If you work with multiple instances, then you have to somehow distribute the requests among them. One popular solution is Netflix Eureka (a minimal registration sketch follows this list).
  • If you don't want to maintain additional resources and the product will run in the cloud, then use the provided load-balancing services (e.g. a load balancer on AWS).
  • You can also fine-tune your server's thread and connection pool settings (a Jetty thread-pool sketch also follows this list).
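
To make the last two points more concrete, here are two small sketches. They are illustrative only: the class names, thread counts, and extra dependencies are my assumptions, not part of the original setup, and the right values depend on your hardware and workload.

Registering an instance with Eureka so traffic can be spread across several instances (assumes the Spring Cloud Netflix Eureka client starter is on the classpath and eureka.client.serviceUrl points at your Eureka server):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

// Minimal sketch: this instance registers itself with Eureka so a client-side
// load balancer (e.g. Ribbon/Zuul) can distribute requests across instances.
@SpringBootApplication
@EnableEurekaClient
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

Tuning the embedded Jetty thread pool in Spring Boot 1.5.x (the numbers below are placeholders to show the mechanism, not recommendations):

import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JettyConfig {

    @Bean
    public JettyEmbeddedServletContainerFactory jettyFactory() {
        JettyEmbeddedServletContainerFactory factory = new JettyEmbeddedServletContainerFactory();
        // QueuedThreadPool(maxThreads, minThreads) -- example sizes only
        factory.setThreadPool(new QueuedThreadPool(400, 50));
        return factory;
    }
}

Once you know the sustainable throughput of one tuned instance, you can size the number of instances behind the load balancer accordingly.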