I'm playing around with the AWS free tier using a t2.micro instance. I made a simple PHP site on Elastic Beanstalk with Apache and PHP 7.1. I used Apache's ab load-testing utility to send 1000 concurrent requests to a simple 'helloworld.php' page. The average latency was 6.5 seconds, and 99% of requests finished in under 13.5 seconds. I then switched to nginx with PHP 7.1 FPM. The results were similar: 7.5 seconds average, 14 seconds p99.

The CPU isn't spiking above a few percent and memory isn't maxed out. Could it be network I/O latency? I'm not sure how to measure that. Any tips for identifying the bottleneck? Nothing stands out in 'htop' or the EC2 metric graphs while the load test is running.

Example load test output:

ab -k -n 1000 -c 1000 -H "Accept-Encoding: gzip, deflate" -g ab_out.dat http://example.com/public_html/api/test.html

Document Path:          /public_html/api/test.html
Document Length:        32 bytes

Concurrency Level:      1000
Time taken for tests:   16.252 seconds
Complete requests:      1000
Failed requests:        0
Keep-Alive requests:    1000
Total transferred:      284000 bytes
HTML transferred:       32000 bytes
Requests per second:    61.53 [#/sec] (mean)
Time per request:       16251.548 [ms] (mean)
Time per request:       16.252 [ms] (mean, across all concurrent requests)
Transfer rate:          17.07 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  449 339.8    427    1038
Processing:  1037 4113 2784.5   3258   15493
Waiting:     1037 4113 2784.5   3258   15493
Total:       1196 4562 2903.3   3540   16184

Percentage of the requests served within a certain time (ms)
  50%   3540
  66%   4950
  75%   5740
  80%   6591
  90%   9252
  95%  11355
  98%  12383
  99%  12518
 100%  16184 (longest request)

Edit, more data points:

I also see the same latency when hitting an empty HTML file. The latency is much more reasonable when I decrease the ab -n and -c params from 1000 to 100. However, I'm still interested in figuring out why requests get so much slower as I move from 100 up to 1000 concurrent requests.

100 request times:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       54  128  41.2    131     198
Processing:   169  376 126.4    374     592
Waiting:      169  376 126.5    374     591
Total:        223  504 167.6    505     790
moolagain

  • I don't have an answer, but I [found something similar](https://www.photographerstechsupport.com/tutorials/hosting-wordpress-on-aws-tutorial-part-4-wordpress-website-optimization/#benchmarking-wordpress). I would consider the testing tool as a possible issue, and also I/O. I'm interested in people's opinions. – Tim May 04 '18 at 05:40

1 Answer

I would recommend considering a more advanced load-testing tool that can ramp up the load gradually; that way you will be able to correlate increasing load with increasing response time and decreasing throughput. Also, hammering a single page has little in common with a real-life scenario, as both nginx and Elastic Beanstalk can cache the response.
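On the ramp-up point: even ab alone can approximate a crude ramp by stepping the concurrency level (a minimal sketch; the step sizes and URL below are placeholders, not values from the original post):

#!/bin/sh
# Step the concurrency level and capture ab's throughput and p99
# at each step, so latency can be plotted against load.
URL="http://example.com/public_html/api/test.html"

for c in 100 200 400 600 800 1000; do
    echo "=== concurrency: $c ==="
    # -n total requests, -c concurrent clients, -k keep-alive
    ab -k -n "$c" -c "$c" "$URL" | grep -E "Requests per second|99%"
done

A dedicated tool such as JMeter or Gatling does the same ramp-up natively and reports percentiles at each step.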

A well-behaved load test needs to represent end-user activity as closely as possible, including cookies, headers, caching, handling of embedded resources (images, scripts, fonts, styles), JavaScript calls, etc. This is not the case for the ab tool, which downloads the main HTML response only, without processing linked content, handling sessions, respecting cache-control headers, and so on.

So take a look at alternative tools; check out the Open Source Load Testing Tools: Which One Should You Use? article for more information, including sample scripts, reports, and a feature comparison matrix.

Also make sure to monitor operating system health metrics including but not limited to:

  • CPU
  • RAM
  • Network IO
  • Disk IO
  • Swap file usage

You can use built-in Linux monitoring programs, Amazon CloudWatch, or the JMeter PerfMon Plugin; that way you will be able to tell whether the performance degradation is caused by a lack of resources.
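For example, the built-in tools alone give a coarse picture of all five metrics during a run (a sketch only; it assumes the sysstat package for iostat and an eth0 interface name, which may differ on your instance):

#!/bin/sh
# Print a CPU/memory/swap sample, disk stats, raw network counters,
# and a socket summary while the load test runs; stop with Ctrl-C.
# vmstat's 2-second sample paces the loop.
while true; do
    date
    vmstat 2 2 | tail -1     # one 2-second CPU / memory / swap sample
    iostat -dx               # disk IO stats (needs sysstat)
    grep eth0 /proc/net/dev  # raw network byte/packet counters
    ss -s                    # socket summary (established, TIME-WAIT, etc.)
    echo
done

If these stay flat while latency climbs, the limit is more likely a connection or worker queue than a hardware resource.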

And last but not least: the default nginx configuration is suitable for development and debugging, but when it comes to high loads you need to perform some extra tuning.
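The connection-handling directives are a common starting point (a sketch only; the numbers are illustrative, not tuned values from this answer):

# /etc/nginx/nginx.conf -- illustrative values only
worker_processes auto;           # one worker per CPU core

events {
    # the default worker_connections is often 512 or 1024,
    # well below a 1000-connection ab run
    worker_connections 4096;
    multi_accept on;
}

http {
    keepalive_requests 1000;     # requests allowed per kept-alive connection
    keepalive_timeout  30s;
}

Note that OS-level limits such as the worker's open-file limit (worker_rlimit_nofile, ulimit -n) and the listen backlog (net.core.somaxconn) may also need raising before a higher worker_connections has any effect.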

Dmitri T