
I am running some siege tests on my nginx server. The bottleneck doesn't seem to be CPU or memory, so what is it?

When I run this from my macbook:

sudo siege -t 10s -c 500 server_ip/test.php

the response time climbs to 10 seconds, I get errors, and siege aborts before completing.

But if I run the above on the server itself

siege -t 10s -c 500 localhost/test.php

I get:

Transactions:               6555 hits
Availability:              95.14 %
Elapsed time:               9.51 secs
Data transferred:         117.30 MB
Response time:              0.18 secs
Transaction rate:         689.27 trans/sec
Throughput:            12.33 MB/sec
Concurrency:              127.11
Successful transactions:        6555
Failed transactions:             335
Longest transaction:            1.31
Shortest transaction:           0.00
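
For scale, that output works out to about 117.30 MB / 6555 hits ≈ 18 KB per response, so the 689 hits/sec on localhost is already moving 12.33 MB/sec, which is close to 100 Mbit/s before any traffic leaves the machine.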

I also noticed that at lower concurrency levels I get a vastly better transaction rate on localhost than externally.

But while the above is running on localhost, htop shows low CPU usage and low memory usage. So I'm confused about how to boost performance, because I can't see a bottleneck.
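
One way to look for this kind of invisible bottleneck is to check the listen queues directly, since a full accept queue shows up as failed requests rather than CPU or memory pressure. A rough sketch, assuming a Linux host with netstat and ss available:

netstat -s | grep -i -E 'listen|overflow'   # counters for dropped/overflowed accept queues
ss -lx | grep php5-fpm.sock                 # the PHP-FPM unix socket's queue (Recv-Q vs. backlog)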

ulimit returns 50000 because I've increased it. There are 4 nginx worker processes, which is 2 times my number of CPU cores. Here are my other settings:

worker_rlimit_nofile 40000;

events {
        worker_connections 20000;
        # multi_accept on;
}

tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
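
One sanity check worth doing, assuming a Linux host with /proc available: confirm that the running workers actually picked up worker_rlimit_nofile, since limits raised in a shell with ulimit do not automatically apply to daemons.

for pid in $(pgrep -f 'nginx: worker'); do
    grep 'Max open files' /proc/$pid/limits
done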

The test.php is just a phpinfo() script, nothing else. No database connections.

The machine is an AWS m3.large, 2 CPU cores and about 7 GB of RAM, I believe.

Here are the contents of my server block:

listen 80 default_server;
listen [::]:80 default_server ipv6only=on;

root /var/www/sitename;
index index.php index.html index.htm;

# Make site accessible from http://localhost/
server_name localhost;

location / {
        try_files $uri $uri.html $uri/ @extensionless-php;
}

location @extensionless-php {
        rewrite ^(.*)$ $1.php last;
}

error_page 404 /404.html;

error_page 500 502 503 504 /50x.html;
location = /50x.html {
        root /usr/share/nginx/html;
}

# pass the PHP scripts to the FastCGI server listening on unix:/var/run/php5-fpm.sock
location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
}

Also, this was in my nginx error log:

connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, cli$
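
Error 11 is EAGAIN: nginx could not connect because php-fpm was not accepting connections fast enough, which points at the FPM pool rather than nginx itself. A hedged sketch of pool settings to experiment with (the path is the usual php5-fpm default and the numbers are illustrative guesses, not tuned values):

; e.g. /etc/php5/fpm/pool.d/www.conf
pm = dynamic                ; the other process-manager models are static and ondemand
pm.max_children = 50        ; cap on concurrent PHP workers; the default is often single digits
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
listen.backlog = 1024       ; queue length on the unix socket; EAGAIN suggests it overflowed

php5-fpm needs a restart after changing these, and memory use is worth watching, since pm.max_children multiplies per-worker memory.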
  • Show the `server { }` part of your nginx config, so that we can see how PHP is called and which IPs nginx listens on, and how. – Marki555 Aug 02 '15 at 13:23
  • @Marki555 I have added that. I have some rules so it checks whether the URL points to a PHP file. – Hard worker Aug 02 '15 at 13:38
  • It's strange that it is slower via the local IP than on localhost. Also post `netstat -anp|grep nginx`. I would try the benchmark with a static file only and a minimal default nginx config to figure out where the slowdown is. – Marki555 Aug 02 '15 at 13:42
  • If the equivalent test on a static file shows "no problems" you should be looking at your php-fpm config. Also, read your error logs; if there are failed requests they'll say why. – AD7six Aug 02 '15 at 13:42
  • Sorry @marki555, I should make clear that the first siege test is run on my macbook. The second siege test is run on the server. – Hard worker Aug 02 '15 at 13:58
  • I am going to benchmark with a static file – Hard worker Aug 02 '15 at 13:59
  • @marki555 benchmarking on a static file: with 100 concurrent connections it responds really quickly, both on my server and on my macbook. With 500 concurrent it responds fine on my server; on my macbook it starts fine and then suddenly says "descriptor table full sock.c:132: Too many open files" (see the sketch after these comments). – Hard worker Aug 02 '15 at 14:02
  • @AD7six This was in my error log: connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, cli$. But that was from earlier, not from when I ran the static file test. – Hard worker Aug 02 '15 at 14:07
  • @Hardworker there is no way that error message comes from php-fpm's log file. You are looking at the wrong log file. – AD7six Aug 02 '15 at 14:42
  • Have you modified your php-fpm config? There are several options there that determine how many workers are available. – datasage Aug 02 '15 at 15:13
  • @ad7six sorry, that is from my nginx error log. There is nothing in my php-fpm log file. Do you think it's possible this is an AWS bandwidth limiting issue? The only difference I can think of between running siege on the server and on my macbook is that from my macbook there is bandwidth leaving the AWS server. – Hard worker Aug 02 '15 at 15:43
  • @datasage yes I have. As you can see though, running the siege test on localhost doesn't present problems, but running it from my macbook does. I'm not sure it would make sense for workers to fail on only one of those approaches. – Hard worker Aug 02 '15 at 15:44
  • The default configuration for php-fpm is pretty low in terms of the workers it will start and allow. If you are going to increase nginx limits by as much as you have, you need to look at the php-fpm.conf file and increase workers there as well. There are 3 different models for determining workers; you may want to experiment to see what works best for you. – datasage Aug 02 '15 at 15:53
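
On the "descriptor table full" error from the macbook side: siege needs one file descriptor per concurrent socket, and the default per-process limit on OS X is typically 256, so 500 concurrent users exhaust it immediately. A minimal sketch of the workaround, assuming the limit only needs raising for the benchmarking shell:

ulimit -n 10000
sudo siege -t 10s -c 500 server_ip/test.php

The raised soft limit is normally inherited by processes started from that shell, so siege picks it up.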

0 Answers