My application is served on an Amazon EC2 t2.small instance.

I tested performance with the ab command, once over HTTP and once over HTTPS:

ab -n 3000 -c 100 http://www.mydomain.com/
ab -n 3000 -c 100 https://www.mydomain.com/

HTTP

Finished 3000 requests


Server Software:        nginx/1.8.1
Server Hostname:        www.mydomain.com
Server Port:            80

Document Path:          /
Document Length:        184 bytes

Concurrency Level:      100
Time taken for tests:   3.201 seconds
Complete requests:      3000
Failed requests:        0
Non-2xx responses:      3000
Total transferred:      1401000 bytes
HTML transferred:       552000 bytes
Requests per second:    937.09 [#/sec] (mean)
Time per request:       106.714 [ms] (mean)
Time per request:       1.067 [ms] (mean, across all concurrent requests)
Transfer rate:          427.36 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       15   43  50.0     33     783
Processing:    19   53  79.2     37    1437
Waiting:       17   47  75.1     34    1437
Total:         41   96  93.3     71    1463

Percentage of the requests served within a certain time (ms)
  50%     71
  66%     79
  75%     85
  80%     91
  90%    160
  95%    175
  98%    478
  99%    573
 100%   1463 (longest request)

HTTPS

Finished 3000 requests


Server Software:        nginx/1.8.1
Server Hostname:        www.mydomain.com
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,1024,256

Document Path:          /
Document Length:        212 bytes

Concurrency Level:      100
Time taken for tests:   23.034 seconds
Complete requests:      3000
Failed requests:        124
   (Connect: 0, Receive: 0, Length: 124, Exceptions: 0)
Non-2xx responses:      2876
Total transferred:      21357330 bytes
HTML transferred:       20773030 bytes
Requests per second:    130.24 [#/sec] (mean)
Time per request:       767.790 [ms] (mean)
Time per request:       7.678 [ms] (mean, across all concurrent requests)
Transfer rate:          905.49 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      105  521 170.7    584     892
Processing:    20  228 264.1    153    2629
Waiting:       17  149 131.1    111     772
Total:        201  749 214.4    714    3124

Percentage of the requests served within a certain time (ms)
  50%    714
  66%    740
  75%    745
  80%    755
  90%    798
  95%    903
  98%   1484
  99%   2002
 100%   3124 (longest request)

The relevant parts of my nginx.conf:

user                 ec2-user ec2-user;
worker_processes     4;

error_log            /home/ec2-user/.log/error.log error;
pid                  /var/run/nginx.pid;

#timer_resolution    500ms;
worker_rlimit_nofile 8192;

events {
  worker_connections 4096;
}

http {
  include       /etc/nginx/mime.types;
  default_type  application/octet-stream;
  autoindex     off;

  index index.php index.html index.htm;

  sendfile           on;
  tcp_nopush         on;
  tcp_nodelay        off;
  keepalive_timeout  60;
  keepalive_requests 10;

  client_max_body_size      20M;
  client_body_timeout       60;
  client_body_buffer_size   10M;
  client_header_timeout     60;
  client_header_buffer_size 1k;

  #server_names_hash_max_size    512;
  server_names_hash_bucket_size 128;

  gzip              on;
  gzip_disable      "msie6";
  gzip_vary         on;
  gzip_proxied      any;
  gzip_comp_level   6;
  gzip_buffers      16 8k;
  gzip_http_version 1.1;
  gzip_min_length   1000;
  gzip_types        text/plain text/css application/json application/x-javascript text/xml application/xml application/rss+xml text/javascript image/svg+xml application/vnd.ms-fontobject application/x-font-ttf font/opentype;
}

and of mydomain.conf:

ssl                       on;
ssl_session_timeout       5m;
ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers               ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
ssl_dhparam               /home/ec2-user/.nginx/dhparam.pem;

ssl_stapling              on;
ssl_stapling_verify       on;

resolver                  8.8.4.4 8.8.8.8 valid=300s;
resolver_timeout          10s;

add_header                Strict-Transport-Security max-age=63072000;
add_header                X-Content-Type-Options    nosniff;
Claytinho

1 Answer

My best guess is that latency between the test client and the server is a significant factor. SSL needs several extra round trips to set up the connection, and Apache Bench is fairly simple; I don't think it reuses connections. I suspect that if you run the test from a spot instance in the same AZ, the SSL numbers will improve, just because of the lower latency.
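One way to get a feel for how much of the HTTPS gap is pure connection setup (just a diagnostic I'd try, not something from your run): ab has a -k flag that enables HTTP keep-alive, so requests after the first on each connection skip the TCP and TLS handshakes.

ab -k -n 3000 -c 100 https://www.mydomain.com/

Comparing that against the non-keep-alive run gives a rough idea of how much of the per-request time is handshake and latency rather than nginx itself. Note that your nginx.conf sets keepalive_requests 10, which caps how many requests each kept-alive connection will serve.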

Behind that is general overhead: SSL takes more resources to set up, and you're running 100 concurrent connections against a small instance.
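Part of that setup cost can be amortised with TLS session resumption. Your mydomain.conf sets ssl_session_timeout 5m but I don't see an ssl_session_cache, so here is a minimal sketch of what adding one might look like (the 10m size is just an illustrative value):

ssl_session_cache   shared:SSL:10m;  # one cache shared across all worker processes
ssl_session_timeout 5m;              # already in your config

This mainly helps clients that reconnect and resume a session; it may or may not change the ab numbers, depending on whether ab resumes sessions, but real browsers will benefit.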

You can have a look at some of my benchmarking on a t2.micro with Wordpress and Nginx here. The most interesting measurement is serving a static HTML file over both HTTP and HTTPS, tested from the server itself, so there was no network latency. The time in ms is total transaction time, I think; I did the testing a while back.

HTTP:  440 tps, 10 ms
HTTPS: 166 tps, 145 ms

I'll be interested to see what others come up with, because I don't think I've fully solved this one either.

Nginx FastCGI caching made a HUGE difference to Wordpress performance. It went from 10 tps to 1000 tps, because requests no longer had to call into PHP and the Nginx page cache is kept in RAM. You just have to be careful to set things up so logged-in users aren't served cached pages, and so their pages don't end up in the cache.
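A rough sketch of the kind of FastCGI cache setup I mean (the cache path, zone name, PHP-FPM socket and timings are illustrative values, not taken from your config; the cookie check is what keeps logged-in users out of the cache):

# in the http {} block: where the cache lives on disk and its in-memory key zone
fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key  "$scheme$request_method$host$request_uri";

# in the server {} block: decide which requests must bypass the cache
set $skip_cache 0;
if ($request_method = POST)                                           { set $skip_cache 1; }
if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") { set $skip_cache 1; }
if ($request_uri ~* "/wp-admin/|wp-login\.php")                       { set $skip_cache 1; }

location ~ \.php$ {
  include               fastcgi_params;
  fastcgi_param         SCRIPT_FILENAME $document_root$fastcgi_script_name;
  fastcgi_pass          unix:/var/run/php-fpm.sock;   # adjust to your PHP-FPM socket or port

  fastcgi_cache         WORDPRESS;
  fastcgi_cache_valid   200 301 302 60m;
  fastcgi_cache_bypass  $skip_cache;   # serve these requests from PHP, not the cache
  fastcgi_no_cache      $skip_cache;   # and don't store their responses
}

Cache hits are served by nginx without calling into PHP at all, which is where the 10 tps to 1000 tps jump came from.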

Tim