
In the book Tomcat: The Definitive Guide, written by Jason Brittain with Ian F. Darwin, the authors say the following about benchmarking with the ab tool:

you should benchmark by running a minimum of 100,000 HTTP requests. Also, you may configure the test client to spawn as many client threads as you would like, but you will not get helpful results if you set it higher than the maxThreads you set for your Connector in your Tomcat's conf/server.xml file. By default, it is set to 150.

The authors then recommend using 149 client threads.
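
For reference, the setting they mean lives on the Connector element in conf/server.xml. The snippet below is only a sketch modelled on the stock file, not my actual configuration; the point is that the -c value passed to ab is supposed to stay below whatever maxThreads is set here:

<!-- HTTP Connector in conf/server.xml; maxThreads caps the request-processing threads -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxThreads="150" />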

In my case, with 149 client threads, here is the result of the run:

[user@apachetomcat ~]$ ab -k -n 100000 -c 149 http://10.138.0.2:8080/test.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 10.138.0.2 (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests

Server Software:        
Server Hostname:        10.138.0.2
Server Port:            8080
Document Path:          /test.html
Document Length:        13 bytes
Concurrency Level:      149
Time taken for tests:   45.527 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    99106
Total transferred:      23195530 bytes
HTML transferred:       1300000 bytes
Requests per second:    2196.48 [#/sec] (mean)
Time per request:       67.836 [ms] (mean)
Time per request:       0.455 [ms] (mean, across all concurrent requests)
Transfer rate:          497.54 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   6.8      0      70
Processing:    66   67   5.6     67     870
Waiting:       66   67   5.6     67     870
Total:         66   68   8.8     67     870
Percentage of the requests served within a certain time (ms)
  50%     67
  66%     67
  75%     67
  80%     67
  90%     67
  95%     68
  98%     69
  99%    133
 100%    870 (longest request)

After increasing to 1000 client threads, the result is:

[user@apachetomcat ~]$ ab -k -n 100000 -c 1000 http://10.138.0.2:8080/test.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 10.138.0.2 (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests
Server Software:        
Server Hostname:        10.138.0.2
Server Port:            8080
Document Path:          /test.html
Document Length:        13 bytes
Concurrency Level:      1000
Time taken for tests:   7.205 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Keep-Alive requests:    99468
Total transferred:      23197340 bytes
HTML transferred:       1300000 bytes
Requests per second:    13879.80 [#/sec] (mean)
Time per request:       72.047 [ms] (mean)
Time per request:       0.072 [ms] (mean, across all concurrent requests)
Transfer rate:          3144.28 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   8.1      0      68
Processing:    66   69  22.3     67    1141
Waiting:       66   69  22.3     67    1141
Total:         66   70  27.5     67    1141
Percentage of the requests served within a certain time (ms)
  50%     67
  66%     67
  75%     68
  80%     68
  90%     69
  95%     71
  98%     87
  99%    139
 100%   1141 (longest request)

The requests per second increased from 2196.48/sec to 13879.80/sec, so the change looks meaningful to me.
Why do the authors say it isn't helpful to set the concurrency higher than maxThreads?
What does the increase in requests per second mean in my case?
I'm confused about the requests-per-second figure, and understanding it is important for following the authors' benchmarks in later chapters of the book.
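
As far as I can tell, ab derives these two figures from the raw counts, so they can be recomputed from the runs above (my own arithmetic, just to frame the question):

Requests per second = Complete requests / Time taken for tests
  149 threads:  100000 / 45.527 s ≈ 2196 requests/sec
  1000 threads: 100000 /  7.205 s ≈ 13879 requests/sec

Time per request (mean) = Concurrency Level x Time taken / Complete requests
  149 threads:  149  x 45.527 s / 100000 ≈ 67.8 ms
  1000 threads: 1000 x  7.205 s / 100000 ≈ 72.0 ms

So the per-request latency barely changed while the throughput grew roughly in proportion to the concurrency, which is part of what confuses me.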

  • Tomcat is probably running non-blocking IO connectors now, so that it can handle more connections than it has IO threads. And if the page itself is very light (is that a static page?) then the worker threads won't become the bottleneck either. – Thilo Jan 19 '17 at 12:10
  • @Thilo Yes, it's a static page. I'm trying to determine the maximum number of requests per second my server can successfully handle. How can I achieve this with the latest Tomcat? – niaomingjian Jan 19 '17 at 12:22
  • It seems you are doing that. Keep throwing on more threads (as long as the client can handle that). But maybe you want to test with a more meaningful page (with database access and so forth). Static pages are unlikely to become a bottleneck. – Thilo Jan 19 '17 at 12:23
  • The book says JIO is the default connector implementation and that it's a fully blocking implementation. – niaomingjian Jan 19 '17 at 12:24
  • You can verify that by looking at your Tomcat configuration (see the sketch after these comments). But that seems odd these days. NIO is pretty solid now. (It seems Tomcat 8.5 and 9 have dropped the blocking connector altogether: http://stackoverflow.com/a/11034189/14955) – Thilo Jan 19 '17 at 12:25
  • It's JIO, isn't it? – niaomingjian Jan 19 '17 at 12:28
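
Update: following up on Thilo's comment about checking the configuration, the protocol attribute on the Connector in conf/server.xml is what selects the implementation. This is only a sketch with illustrative values, not my actual file:

<!-- blocking (JIO/BIO) connector -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11Protocol"
           maxThreads="150" connectionTimeout="20000" redirectPort="8443" />

<!-- non-blocking (NIO) connector -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" connectionTimeout="20000" redirectPort="8443" />

<!-- protocol="HTTP/1.1" leaves the choice of implementation to the Tomcat version's default -->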
