
So I've created a simple file, ab.htm, with just "test" in it.

ab -n 1000 -c 10 http://www.domain.com/ab.htm

gives me 15400req/sec

and

ab -n 1000 -c 10 https://www.domain.com/ab.htm

gives me 390req/sec

If I add the -k Keep-Alive flag, it comes back up to ~10,000. But that's not a solution: if I get 1,000 concurrent users, they're not all going to share the same connection...

This is on a 4GB Centos 6 VPS, nginx 1.5.6.

I tried it at concurrencies of 1, 100 & 1000 too and got similar results.

I was expecting it to be slower, but not FORTY times slower... Is this normal, or has something gone horribly wrong? If it is normal, what can I do to improve the situation? Weaker ciphers etc., I guess?
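One way to sanity-check whether a number like 390 req/sec is plausible (my suggestion, not from the original post): without keep-alive and without session reuse, every `ab` connection forces a full TLS handshake, and with a 2048-bit RSA certificate each full handshake costs roughly one RSA private-key (sign) operation on the server. `openssl speed` measures how many of those the CPU can do per second, which puts a ceiling on full handshakes per second per core:

```shell
# Benchmark RSA 2048 private-key operations. The "sign/s" column
# approximates the maximum full TLS handshakes per second per core
# when using a 2048-bit RSA certificate.
openssl speed rsa2048

# Illustrative output shape (actual numbers depend on the machine):
#                   sign    verify    sign/s verify/s
# rsa 2048 bits  0.00123s  0.00004s     810.4  27000.1
```

If the sign/s figure is in the high hundreds per core, then ~390 req/sec with no session reuse is not far from the raw crypto limit, which is exactly why `ssl_session_cache` matters so much.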

And yes, I appreciate that this is a tiny part of the puzzle, and relatively insignificant compared to scripting and database loads. But still, I'd like to at least know that it's normal.

Thanks


Additional info:

  • CentOS 6.4
  • Intel E5-2640 CPU
  • Xen VPS (on a HP DL380p Gen8 Proliant Server, I think)
  • 4GB ram

Versions etc:

uname -a

Linux 2.6.32-358.18.1.el6.x86_64 #1 SMP Wed Aug 28 17:19:38 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

openssl version

OpenSSL 1.0.1e 11 Feb 2013

nginx -V

nginx version: nginx/1.5.6
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --http-log-path=/var/log/nginx/access.log --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_spdy_module


1 Answer


A significant slowdown is to be expected, but 390 rps is too slow. I did some tests recently; to give you some numbers for comparison, these are my results:

  • http: ~30,000 rps
  • https without keep-alive: ~9,000 rps
  • https with keep-alive: ~18,000 rps

What you need to do:

  • tune the right number of workers in nginx.conf (workers == number of CPU cores)
  • enable ssl_session_cache shared
  • test different cipher suites for performance (still to be done on my side)
  • check out this guide for more nginx-based SSL and performance-tuning info
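As a sketch, the first two bullets could look like this in nginx.conf (the directive names are real nginx directives; the values are illustrative, not taken from the question):

```nginx
# Main context -- one worker per CPU core
worker_processes  6;

http {
    # Shared cache so any worker can resume a session started by another;
    # 10 MB holds roughly 40,000 sessions
    ssl_session_cache    shared:SSL:10m;
    ssl_session_timeout  10m;   # how long cached sessions stay resumable
    ...
}
```

With the cache enabled, returning clients skip the expensive full handshake, which is where most of the HTTPS overhead in this benchmark comes from.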

390 rps is what I'd expect from Apache ... SCNR :)

  • 1) Already set workers to cores (6) – Codemonkey Oct 09 '13 at 10:53
  • 2) Setting "ssl_session_cache shared:SSL:10m;" and restarting nginx made negligible difference. – Codemonkey Oct 09 '13 at 10:53
  • OK, then I'd suggest cipher tuning, and you might want to check with ssllabs.com. And please retest with -c 100 – that guy from over there Oct 09 '13 at 14:22
  • I've tried a whole bunch of different ciphers now, including some of the weakest ones. They make very little difference, ranging from 350 to 400 req/sec. This tells me that the problem isn't with the cipher, but something else that's causing a big delay... – Codemonkey Oct 09 '13 at 15:17
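Since the ciphers made little difference, a quick check worth running (my suggestion; the hostname below is the placeholder from the question) is whether sessions are actually being resumed. `openssl s_client -reconnect` connects once and then reconnects five times trying to reuse the cached session, printing "New" or "Reused" for each handshake:

```shell
# Connect, then reconnect 5 times attempting session reuse.
# If ssl_session_cache is working, reconnects should report
# "Reused" rather than "New" in the handshake summary lines.
echo | openssl s_client -connect www.domain.com:443 -reconnect 2>/dev/null \
  | grep -E "^(New|Reused)"
```

If every line says "New", the session cache isn't being hit at all, and fixing that matters far more than the cipher choice. (Note that `ab` without `-k` also opens fresh connections, but it should still be able to resume sessions when the server caches them.)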