
I have three servers: the load balancer runs nginx and passes PHP requests upstream to one of two servers running PHP-FPM.

I was actually trying to test concurrency in the first place, so the PHP script on each PHP-FPM server prints a start time, the hostname and an end time; after the start time is echoed, it uses 100% CPU for 5 seconds before echoing the end time.

With 4 concurrent requests, neither server hits 100% CPU simultaneously, and the timestamps show the requests are served consecutively, which makes me think nginx and FastCGI are blocking concurrent connections.

Running ab with 100 concurrent connections shows all processes on one PHP-FPM server (out of 10 available) busy, while the other server sits completely idle.

The nginx conf is:

upstream backend {
    server 192.168.1.60:9000;
    server 192.168.1.61:9000;
}

server {
    listen   80;
    server_name  localhost;
    access_log  /var/log/nginx/localhost.access.log;

    location / {
        root   /var/www;
        index  index.php;
    }


    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass   backend;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /var/www$fastcgi_script_name;
        include fastcgi_params;
    }

}
Paul Ridgway

3 Answers


Don't go passing FCGI over the network; use regular ol' HTTP for as long as you can. You're entering poorly-tested waters the way you're going.

At any rate, I really don't recommend using nginx as a load balancer. It really isn't the best (or even a "good enough") tool for the job. I think the best option is Linux Virtual Server, as it is transparent to the TCP connections and blisteringly fast, but if that's off the table for some reason, at the very least use haproxy.
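
If nginx does stay in front for now, the "regular HTTP" approach might look roughly like the sketch below. The backend addresses come from the question; that each backend runs its own nginx + PHP-FPM and serves the app over plain HTTP on port 80 is an assumption.

upstream backend {
    # hypothetical: the backends now speak HTTP, not FastCGI over the network
    server 192.168.1.60:80;
    server 192.168.1.61:80;
}

server {
    listen      80;
    server_name localhost;

    location / {
        proxy_pass       http://backend;
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}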

womble
  • While this is off topic, can you elaborate on why LVS would be best? I have no specific reason to use nginx, so I'm happy to hear better suggestions. Out of interest, why should I not use nginx? – Paul Ridgway Aug 14 '11 at 21:42
  • "It really isn't the best (or even a "good enough") tool for the job." – womble Aug 14 '11 at 21:52
  • That's not a reason, that's an opinion. – Paul Ridgway Aug 14 '11 at 21:55
  • What else has anyone got to go on? Find me one absolutely objective study on the benefits of any software package relative to another. – womble Aug 14 '11 at 22:04
  • S/O reminded me of this - evidence, for one: benchmarking, etc.; references to source code that show reasons for poor performance, for another; and so on... Yes, it may always be subjective, but spouting some inane opinion is not all anyone has to go on. – Paul Ridgway Apr 22 '21 at 20:01

I would suggest you do the opposite: leave this server as a load balancer only and do the FPM configuration on the backends. FastCGI is better served when nginx and PHP-FPM are on the same server, talking over a Unix socket or localhost, so the setup looks something like this:

                          ---- backend 1 with nginx + php fpm
server with two backends |
                          ---- backend 2 with nginx + php fpm

That way, if something blocks one of the backends it will not affect the other, and the load balancer will continue to serve from the healthy one. Also be sure to tune the number of PHP-FPM children, even if you are using the dynamic process manager settings.
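
A minimal sketch of what each backend's own nginx config might look like in that layout (the server name and the Unix socket path are assumptions; the socket path has to match the listen setting of the PHP-FPM pool on that box):

server {
    listen      80;
    server_name backend1;
    root        /var/www;
    index       index.php;

    location ~ \.php$ {
        # local PHP-FPM over a Unix socket instead of FastCGI over the network
        fastcgi_pass   unix:/var/run/php-fpm.sock;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
}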

coredump
  • Thanks, I was considering that; I was trying to keep memory usage as low as possible on the PHP boxes. Having looked further, I see nginx uses little RAM :) – Paul Ridgway Aug 14 '11 at 21:44

If you really need to farm out to FastCGI servers instead of nginx + php-fpm boxes, you can try the fair module for nginx found here. The module assesses response time from each of the backends and distributes requests accordingly. Note that this will require you to recompile nginx.
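
With the fair module compiled in, the upstream block from the question would, roughly, just gain the fair directive (a sketch, assuming the module's default behaviour):

upstream backend {
    fair;                       # response-time-aware balancing from the fair module
    server 192.168.1.60:9000;
    server 192.168.1.61:9000;
}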

If you don't want that, at least make sure you are not using the ip_hash directive, since a benchmark always comes from the same source IP and so will never be rotated across backends, and try least_conn instead (available in nginx >= 1.2.2). More information here. Finally, adjust the criteria nginx uses to select the next server with fastcgi_next_upstream

fastcgi_next_upstream timeout http_503 http_500 invalid_header

in order to mitigate timeouts on overloaded nodes. More on that here.
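
Putting those pieces together, a rough sketch of the question's config with least_conn and fastcgi_next_upstream (assuming nginx >= 1.2.2) could be:

upstream backend {
    least_conn;                 # pick the backend with the fewest active connections
    server 192.168.1.60:9000;
    server 192.168.1.61:9000;
}

location ~ \.php$ {
    fastcgi_pass          backend;
    # retry the other backend on timeouts, 500/503 responses or bad headers
    fastcgi_next_upstream timeout http_503 http_500 invalid_header;
    include               fastcgi_params;
}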

priestjim