I have a CentOS VM on which I'm running a PHP socket server that forks on every connection. The child process does its job and then exits.
The parent also reaps the dead zombie processes (I have checked the ps auxf output).
pcntl_wait($status, WNOHANG);
is executed before every fork, so that cleans up one zombie process (if any).
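Note that a single WNOHANG call reaps at most one zombie per fork, so zombies can accumulate if several children exit between forks. A minimal sketch that drains every exited child in one pass (reapChildren is a hypothetical helper name, not from the original code):

```php
<?php
// Loop on pcntl_wait with WNOHANG: it returns a child PID while
// zombies remain, and 0 or -1 once none are left, so one call
// reaps every exited child instead of just one.
function reapChildren(): int
{
    $count = 0;
    while (($pid = pcntl_wait($status, WNOHANG)) > 0) {
        $count++;
    }
    return $count;
}

// Demo: fork a child that exits immediately, give it a moment, then reap.
$pid = pcntl_fork();
if ($pid === 0) {
    exit(0);                      // child terminates at once
}
usleep(100000);                   // let the child exit and become a zombie
printf("reaped %d child(ren)\n", reapChildren());
```

Calling such a loop before each fork keeps the process table clear even under bursts of short-lived children.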
However, after a long run, the parent is unable to fork.
I think this is because of the 32768 pid_max limit.
The number of processes in the ps auxf output is consistently around 750.
[root@test-machine ~]# cat /proc/self/limits
Limit                     Soft Limit     Hard Limit     Units
Max cpu time              unlimited      unlimited      seconds
Max file size             unlimited      unlimited      bytes
Max data size             unlimited      unlimited      bytes
Max stack size            8388608        unlimited      bytes
Max core file size        0              unlimited      bytes
Max resident set          unlimited      unlimited      bytes
Max processes             119925         119925         processes
Max open files            1024           4096           files
Max locked memory         65536          65536          bytes
Max address space         unlimited      unlimited      bytes
Max file locks            unlimited      unlimited      locks
Max pending signals       119925         119925         signals
Max msgqueue size         819200         819200         bytes
Max nice priority         0              0
Max realtime priority     0              0
Max realtime timeout      unlimited      unlimited      us
[root@test-machine ~]# cat /proc/sys/kernel/pid_max
32768
UPDATE:
I checked the logs and saw the error: PHP Fatal error: Maximum execution time of 30 seconds exceeded in myfile.php
The line it points to is pcntl_wait($status, WNOHANG);
, which waits for a child process to terminate. The WNOHANG
flag ensures that it does not block, so if there's no exited child, execution continues immediately.
While the max execution time can be raised, something is causing the code around pcntl_wait
to loop.
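One pattern that sidesteps both symptoms: disable the execution-time limit for the long-running parent (the CLI SAPI defaults to 0, but a custom php.ini may not) and reap from a SIGCHLD handler instead of polling in a tight loop. A sketch of that alternative, assuming PHP 7.1+ for pcntl_async_signals (this is my suggestion, not the original code):

```php
<?php
// SIGCHLD-based reaper sketch (assumes PHP >= 7.1 for pcntl_async_signals).
set_time_limit(0);            // no "Maximum execution time" fatals
pcntl_async_signals(true);    // deliver signals without declare(ticks=1)

pcntl_signal(SIGCHLD, function () {
    // Reap every child that has exited; WNOHANG keeps the handler
    // from blocking if SIGCHLD arrives with no zombie pending.
    while (pcntl_waitpid(-1, $status, WNOHANG) > 0) {
    }
});

// The accept/fork loop would go here; the handler keeps the process
// table clear so PIDs are not exhausted against kernel.pid_max.
```

Because the handler reaps asynchronously, the main loop no longer needs a pcntl_wait call before every fork.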
I think something at the OS level is causing this. This is the socket with the most connections (I have multiple sockets; only this one was affected).