
I am currently setting up a VPS (with VPS.NET) that I will use to host a blog and some other things. I've installed nginx and patched PHP (5.2.8) with php-fpm.

All works great (and extremely fast!), apart from one annoying issue: because the website has no traffic other than mine for now, after it has been idle for a while all the php-cgi processes die. As soon as I then try to visit the website, nginx returns a nice "502 Bad Gateway", and to fix it I have to restart php-fpm manually to get the website working again. Why is that? This seems to be a fairly common problem, but after a few days of looking for a solution in my spare time, I have found none that works for me.
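To illustrate, this is roughly what I end up doing by hand each time (the init script path is from my setup and may differ on other installs):

```shell
# Check whether any php-cgi workers are still alive
ps -C php-cgi -o pid,etime,cmd

# When they have all died, nginx serves 502s until php-fpm is restarted
[ -x /etc/init.d/php-fpm ] && /etc/init.d/php-fpm restart \
  || echo "php-fpm init script not found where I expected it"
```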

Any suggestions? Thanks in advance

Vito Botta

2 Answers


I am running a very similar setup (nginx 0.7.61, php+fpm 5.2.10) and my PHP processes never die, even after being idle.

The process control bits of my php-fpm.conf are as follows:

<value name="pm">
    <value name="style">static</value>
    <value name="max_children">3</value>
    <value name="apache_like">
        <value name="StartServers">20</value>
        <value name="MinSpareServers">5</value>
        <value name="MaxSpareServers">35</value>
    </value>
</value>
<value name="request_terminate_timeout">0s</value>
<value name="request_slowlog_timeout">0s</value>
<value name="slowlog">logs/slow.log</value>
<value name="rlimit_files">1024</value>
<value name="rlimit_core">0</value>
<value name="chroot"></value>
<value name="chdir"></value>
<value name="catch_workers_output">yes</value>
<value name="max_requests">500</value>

Note that while I have the apache-like bits defined (they were part of the default config), they aren't used, because the pm style is set to static.

You can turn up the logs to debug level using this in the global options section:

<value name="log_level">debug</value>

to see if there's a reported reason that it's shutting down workers.
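With debug logging enabled, grepping the log for worker exits should show whether the master is reaping them deliberately. A quick sketch (the log path, relative to the php-fpm install prefix, is an assumption):

```shell
# Pull worker exit/signal notices out of the php-fpm log
LOG=logs/php-fpm.log
[ -f "$LOG" ] && grep -Ei 'exited|signal|sigterm' "$LOG" \
  || echo "no log found at $LOG - adjust the path for your install"
```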

A last-ditch fix, if this doesn't work, would be to have a service like Pingdom hit a PHP page on your site every x minutes, but my experience with this software combination doesn't suggest that this should be required.
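If you do go the keep-alive route, you don't necessarily need an external service: a cron entry on the VPS itself can request a page periodically. A minimal sketch (the URL and interval are placeholders):

```shell
# Example crontab entry (install with `crontab -e`); uncomment it in the crontab:
# request a PHP page every 5 minutes so at least one worker stays warm
#   */5 * * * * curl -s -o /dev/null http://example.com/index.php
```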

James F
  • Hi, thanks for replying. I have changed the log_level parameter; I will let you know what I see. For now in php-fpm.log I see "notices" like this every time the problem happens: "[WARNING] fpm_children_bury(), line 215: child 26275 (pool default) exited on signal 15 SIGTERM after 803.236849 seconds from start" – Vito Botta Jul 24 '09 at 13:32
  • Oops... I meant warnings. Just before that I see this notice: "fpm_got_signal(), line 48: received SIGCHLD" – Vito Botta Jul 24 '09 at 13:32
  • SIGCHLD is sent to a parent process when a child dies, the SIGTERM line is the parent reaping the child and determining why it exited. The question is whether the child is dying or the parent is killing it due to it hitting max_requests. What PHP app are you hosting here, and is there any chance that part of it is misbehaved in such a way that it sends itself TERM under some condition? – James F Jul 24 '09 at 13:42
  • As said in the other comment, I don't even reach max_requests. You said you have max_children set to 3. What is the value of emergency_restart_threshold in your configuration? – Vito Botta Jul 24 '09 at 13:50
  • "When this amount of php processes exited with SIGSEGV or SIGBUS ... 10 " Could this be the problem - that is, this value being lower than max_children? – Vito Botta Jul 24 '09 at 13:51
  • If you're sure that php-fpm isn't killing the worker, then it sounds like something else is, either externally or from within the PHP application itself (it's possible, though usually not recommended, to send yourself SIGTERM). I don't think the emergency restart bits are in play here, as they are for catastrophic failure (segmentation violations or bus errors, the textbook "crash"). You're exiting with SIGTERM, which is the clean way for a process to exit. The question is why your worker is doing that. – James F Jul 24 '09 at 16:22

Not sure how PHP-FPM differs from standard PHP in fastcgi mode, but normally each PHP process will only serve a limited number of requests before terminating. This prevents memory leaks from building up over time. It works really well, unless you only have one PHP process, in which case that process runs until it has completed its quota of requests and then quits. You should check whether you have a single process running or several. If you have several, then ignore this. If you only have one, you need to make sure the PHP_FCGI_CHILDREN environment variable is exported before you start PHP. PHP_FCGI_MAX_REQUESTS controls the number of requests each individual process will serve.
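For plain php-cgi in fastcgi mode (not php-fpm, which manages its own workers), the usual approach is a small wrapper script that exports those variables before starting the binary. A minimal sketch, with the port, child count, and binary location as assumptions:

```shell
#!/bin/sh
# Wrapper sketch for plain php-cgi (NOT needed with php-fpm, which manages
# workers internally). Port and child count here are example values.

# Children the master process forks; left unset, you get a single process
# that simply quits once it hits its request quota.
export PHP_FCGI_CHILDREN=4
# Requests each child serves before exiting and being respawned
export PHP_FCGI_MAX_REQUESTS=500

# Start the fastcgi listener (adjust the binary path/port for your install)
php-cgi -b 127.0.0.1:9000 &
```

With PHP_FCGI_CHILDREN set, the first process acts as a master that respawns children as they hit their request quota, so a single worker quitting shouldn't take the whole backend down.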

David Pashley
  • Hi, thanks. I have tried to export those env variables in the init.d script, but I haven't seen any difference. I am not too sure I have 100% understood how this stuff works. So, each php process will serve a limited number of requests before terminating; but what happens when this limit is reached and a php process terminates? Is a new php process automatically started? – Vito Botta Jul 24 '09 at 13:36
  • BTW in my case I don't even reach that limit, because I have no traffic on the blog I am working on other than mine. The issue also happens if I make a couple of requests/page views and then leave it for a while. Then, as usual, I'll see those 502s :( – Vito Botta Jul 24 '09 at 13:38
  • php-fpm does all the process management (the bits you have to write wrapper scripts for with standard PHP+FCGI) internally, including restarting children after they exit - unless they exit abnormally or too quickly. The environment variables for the built-in fastcgi support aren't used in PHP-FPM as far as I know. – James F Jul 24 '09 at 13:44