After having some issues, I rebuilt my server on a new, clean Fedora 24 platform. It's a fairly busy server, and now when it starts up, I get a flood of these messages in Apache's error_log:

[Thu Dec 08 19:30:26.954314 2016] [mpm_prefork:error] [pid 379] (11)Resource temporarily unavailable: AH00159: fork: Unable to fork new process
[Thu Dec 08 19:30:36.957269 2016] [mpm_prefork:error] [pid 379] (11)Resource temporarily unavailable: AH00159: fork: Unable to fork new process
[Thu Dec 08 19:30:46.963876 2016] [mpm_prefork:error] [pid 379] (11)Resource temporarily unavailable: AH00159: fork: Unable to fork new process
[Thu Dec 08 19:30:56.967167 2016] [mpm_prefork:error] [pid 379] (11)Resource temporarily unavailable: AH00159: fork: Unable to fork new process
[Thu Dec 08 19:31:06.974127 2016] [mpm_prefork:error] [pid 379] (11)Resource temporarily unavailable: AH00159: fork: Unable to fork new process

I have tried tweaking and tuning, but nothing resolves the issue. This is the exact same machine that ran fine under Fedora 23, so I know the hardware can handle the load. (Error 11 is EAGAIN, which from fork() generally means some process-count or resource limit has been hit.)

Here is my apache server-status:

Apache Server Status for example.com (via x.x.x.x)

Server Version: Apache/2.4.23 (Fedora) OpenSSL/1.0.2j-fips PHP/5.6.28
Server MPM: prefork
Server Built: Jul 18 2016 15:38:14
Current Time: Thursday, 08-Dec-2016 19:38:57 UTC
Restart Time: Thursday, 08-Dec-2016 19:29:02 UTC
Parent Server Config. Generation: 1
Parent Server MPM Generation: 0
Server uptime: 9 minutes 55 seconds
Server load: 2.86 2.38 1.48
Total accesses: 13045 - Total Traffic: 112.5 MB
CPU Usage: u485.32 s25.57 cu.05 cs.03 - 85.9% CPU load
21.9 requests/sec - 193.6 kB/second - 8.8 kB/request
165 requests currently being processed, 0 idle workers
KKKKWKKKKKKKKKKKKKKWKKKKKWKKKKKWWKKKKKKKKKKWKKKWKKKWKKKKKKKWWKKW
WKKKKKKKKKKWKKKKKKWKKKWKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKWKKK
KKKKKKKWWKKKKWKKKKKKKKWKKKKKKKKKKKWKW...........................
................................................................
................................................................
................................................................

...and it goes on from there. There are lots of open slots, but something on the server is preventing new processes from starting up to handle the load. However, my ulimits are set high (probably too high!):

# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 1546671
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 102400
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1546671
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
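
Since Fedora 24 starts httpd under systemd, the shell's ulimit output doesn't necessarily reflect what the service itself gets. One way to check the limits on the running parent process (assuming the oldest httpd process is the parent) would be something like:

# cat /proc/$(pgrep -o -x httpd)/limits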

For completeness, here are my limits.conf settings:

*            soft    core            unlimited
*            soft    nofile          102400
*            hard    nofile          152400
*            soft    sigpending      1546671
*            hard    sigpending      2046671
*            soft    stack           10240
*            hard    stack           14240
*            soft    nproc           1546671
*            hard    nproc           2046671
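
Note that limits.conf is applied by pam_limits to login sessions; a service started by systemd takes its limits from the unit configuration instead. The effective value for the httpd unit can be inspected with something like:

# systemctl show -p LimitNPROC httpd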

And here are my Apache mpm_prefork settings. Again, these are probably too high, but with all of them set lower (or left at the defaults) the problem still occurs, often much sooner.

ServerLimit       8192
StartServers        40
MinSpareServers     25
MaxSpareServers    100
MaxClients        8192
MaxRequestsPerChild 10000
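
For comparison against ServerLimit, a quick way to count the httpd workers actually running would be:

# pgrep -c -x httpd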

Clearly something is still limiting new processes from starting, but I'm stumped as to where to look next.

Any advice is appreciated, as always!

Thanks, Mike

2 Answers

We had a similar problem. It seems that systemd was responsible for our limitation, specifically its per-unit TasksMax= setting:

https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#TasksMax=N
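
For example (the unit name and drop-in path here are assumptions; adjust for your setup), raising the limit for httpd with a drop-in might look like this, in /etc/systemd/system/httpd.service.d/tasksmax.conf:

[Service]
TasksMax=infinity

followed by a reload and restart:

# systemctl daemon-reload
# systemctl restart httpd

The effective value can then be checked with systemctl show -p TasksMax httpd.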

JSiegele
A tentative solution is to add the following to httpd.conf:

EnableMMAP Off

This disables memory mapping, which seems to have had a very adverse effect on the server.

For details see: http://httpd.apache.org/docs/2.4/mod/core.html#enablemmap

If this turns out not to be the solution, I will update folks here.

Mike Bobbitt
  • Well, it seems to have helped, but it is not the solution. Now tinkering with Apache MPM tuning and system limits. – Mike Bobbitt Dec 09 '16 at 20:17