I am hosting several (~30) different sites on one server with apache2+fastcgi+suexec+php5. The sites have different loads and different script execution times (some process a request for 5-7 seconds, some in under 1 second).

Sometimes, when a single site receives a very high load (all PHP instances for that site are created and in use), the whole Apache server hangs. Apache (worker MPM) creates new processes up to the upper limit. It looks like it starts to queue ALL new requests for EVERY site, not only the one under high load, and quickly reaches the process limit... A restart of Apache solves the problem...

config: FastCgiConfig -singleThreshold 1 -multiThreshold 10 -listen-queue-depth 30 -maxProcesses 80 -maxClassProcesses 12 -idle-timeout 30 -pass-header HTTP_AUTHORIZATION -pass-header If-Modified-Since -pass-header If-None-Match

(Earlier I had the default -listen-queue-depth of 100, but it didn't change anything...)

Any suggestions?

Another question: how is this listen queue implemented? Is it one queue for the whole Apache server, or a separate queue for every defined PHP application (suexec site)?

I would like to achieve something like this: when one site receives high load and its queue is full, the server rejects further requests, but only for that one site. The other sites should keep working properly...

  • It also produces errors for highly loaded services: FastCGI: comm with (dynamic) server "/php.fcgi" aborted: (first read) idle timeout (30 sec) – Jan 19 '10 at 14:47
  • I'm interested whether you resolved this issue, since I face similar problems on a webserver. – Bram Schoenmakers Oct 06 '10 at 12:03

4 Answers

Apache 2.4 offers a new FastCGI proxy module (mod_proxy_fcgi) that can proxy requests to PHP-FPM. Using mod_proxy as an intermediary means you get access to all the mod_proxy worker options, including connection pooling and timeout parameters that are separate from the main server's own limits.

I would advise you to set it up on a test server with the Apache 2.4 event MPM and PHP-FPM; you can also tune each PHP-FPM pool differently for different applications.
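
A minimal sketch of that setup, assuming Apache 2.4.10+ (for the SetHandler "proxy:..." syntax) and one PHP-FPM pool per site; the site name, paths, socket, and pool values below are illustrative, not taken from the question:

    # Apache vhost for one of the sites
    <VirtualHost *:80>
        ServerName site1.example.com
        DocumentRoot /var/www/site1
        # hand .php requests to this site's own PHP-FPM pool over a unix socket
        <FilesMatch "\.php$">
            SetHandler "proxy:unix:/run/php-fpm/site1.sock|fcgi://site1"
        </FilesMatch>
    </VirtualHost>

    ; /etc/php-fpm.d/site1.conf -- one pool per site, values illustrative
    [site1]
    user = site1
    group = site1
    listen = /run/php-fpm/site1.sock
    ; per-pool accept queue, roughly the role of -listen-queue-depth
    listen.backlog = 30
    pm = dynamic
    ; per-site worker cap, roughly the role of -maxClassProcesses
    pm.max_children = 12
    pm.start_servers = 2
    pm.min_spare_servers = 1
    pm.max_spare_servers = 4

When one pool is saturated, only that pool's socket backlog fills up; the other sites' pools and the event MPM keep serving.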

adaptr

Have you tried mod_fcgid instead? It's much better at handling high load on your server.
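
A rough sketch of per-site limits there, assuming one suexec'd wrapper per vhost; the directive names are the current Fcgid-prefixed ones (mod_fcgid 2.3+), and all names and values are illustrative:

    <IfModule mod_fcgid.c>
        # global process cap, comparable to -maxProcesses
        FcgidMaxProcesses 80
        # per-wrapper cap, comparable to -maxClassProcesses
        FcgidMaxProcessesPerClass 12
        FcgidIdleTimeout 30
    </IfModule>

    # one wrapper per vhost, run under suexec, so one site's limit cannot starve the others
    <VirtualHost *:80>
        ServerName busy-site.example.com
        SuexecUserGroup busysite busysite
        <Directory /var/www/busy-site>
            Options +ExecCGI
            AddHandler fcgid-script .php
            FcgidWrapper /var/www/busy-site/php.fcgi .php
        </Directory>
    </VirtualHost>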

Vladislav Rastrusny

If FastCGI is spinning up the PHP scripts as user processes, then the /etc/security/limits.conf definitions (specifically nproc) should be enforced by the OS.

I.e.: Apache will try to spin up the process as that user, and the OS will refuse to create it because the user has exceeded their process limit.
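
For example, an entry like this in limits.conf (the user name and numbers are made up for illustration) caps how many processes one site's suexec user can own:

    # /etc/security/limits.conf
    # cap process count for one site's suexec user (illustrative values)
    site1    soft    nproc    20
    site1    hard    nproc    25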

This is sort of a kludge, though; even if the machine is otherwise idle, you'll still be turning away connections.

Why don't you just fork your larger client off to a dedicated machine? Or spin up a secondary Apache instance, listening on a high port, with a fixed footprint/runtime allowance? You could use mod_proxy to pass requests to it transparently.
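
As a sketch, assuming the secondary instance listens on 127.0.0.1:8081 (a made-up address and port), the front Apache could hand that one vhost over like this:

    <VirtualHost *:80>
        ServerName heavy-site.example.com
        ProxyPreserveHost On
        ProxyPass / http://127.0.0.1:8081/
        ProxyPassReverse / http://127.0.0.1:8081/
    </VirtualHost>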

That said, I'm not too familiar with FastCGI, so there might already be some quota system available; a quick read through the docs didn't turn up anything, though.

MrTuttle

You can view a series of web tutorials here: http://blog.stuartherbert.com/php/category/the-web-platform/

I personally find those very insightful! This one may be especially helpful: http://blog.stuartherbert.com/php/2008/10/07/can-you-secure-a-shared-server-with-php-fastcgi/

Honestly, I'd recommend moving the high-traffic site to its own machine if it is using that many resources.

dab