I have a high-load, dynamic PHP web service which I recently moved from Apache2 to Nginx and PHP-FPM. Since the move, the average request latency has increased from 0.5s to 1s.

I'm not sure where the bottleneck in the system is. I had been hoping the move would decrease the average latency. I do know that my machines are not:

  • limited by CPU
  • limited by memory capacity
  • limited by disk IO
  • limited by network IO

Nginx is forwarding the requests to PHP-FPM via a single unix socket.
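
For reference, the forwarding block is a standard `fastcgi_pass` setup along these lines (simplified sketch; the location pattern and params shown here are illustrative, not my exact config):

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```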

  • Is it possible that the bottleneck is memory bandwidth?
  • Is there any way to monitor the status of the unix socket?
  • Would it be better to have a pool of unix sockets and load balance between them?

Here's part of my nginx.conf file:

worker_processes 2; # one for each processor
worker_rlimit_nofile 65536;
...
fastcgi_buffers 256 16k;
fastcgi_buffer_size 32k;
fastcgi_max_temp_file_size 0;
proxy_buffer_size 32k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 32k;

and my php-fpm.conf

listen = /var/run/php5-fpm.sock
listen.backlog = 2048
pm = static
pm.max_children = 64

Does anything stand out as being outlandish or at fault?

sungiant

1 Answer

Using a single socket should be fine.

A few things to check:

  1. What is the maximum number of file handles per process (ulimit -n)? You may benefit from increasing it.

  2. Enable access logging in php-fpm to see how long the requests are taking according to it. In pool.d/www.conf:

    access.format = %R - %u %t "%m %r%Q%q" %s %f %{mili}d %{kilo}M %C%%

  3. Use the stub_status module to see what's going on inside nginx: http://wiki.nginx.org/HttpStubStatusModule
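
For the status module, a minimal location block might look like this (the listen address and allow rules here are illustrative, and the module must be compiled in):

```nginx
server {
    listen 127.0.0.1:8080;
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
```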
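
A quick sketch of checking point 1 (and the socket's queue) from a shell — the process name and socket path below are assumptions taken from the question's config, so adjust them for your setup:

```shell
# Soft limit on open file descriptors for the current shell.
ulimit -n

# Limits of the running php-fpm master itself (process name may
# differ per distro, e.g. php5-fpm).
pid=$(pgrep -o php-fpm || true)
if [ -n "$pid" ]; then
    grep "open files" "/proc/$pid/limits"
fi

# On Linux, ss (from iproute2) lists unix sockets; on the listening
# socket, Recv-Q is the number of connections waiting to be accepted,
# so a persistently non-zero value means the backlog is filling up.
ss -xl | grep php5-fpm.sock || true
```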

chrskly