Running Apache 2 on Ubuntu 14.04 (MPM prefork), I'm seeing multiple child workers with varying "RES" values in megabytes, as expected. I have plenty of RAM for all of these processes, so optimization at this point (limiting the number of workers or tuning the timeout) won't make much overall difference. However, I have one request that I know is very memory-demanding, and yet the server's memory usage never exceeds 24% while that request is running. Increasing the number of allowed workers does increase the memory used (since more requests are open at once), so it looks as if there is a maximum memory size per individual request. Is that possible, or am I missing something?
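For reference, on Ubuntu 14.04 the prefork worker limits live in /etc/apache2/mods-available/mpm_prefork.conf, along these lines (the numbers below are only illustrative, not my real values). As far as I can tell, none of these directives puts a per-child memory cap in place, which is part of why I'm confused:

    <IfModule mpm_prefork_module>
        StartServers             5
        MinSpareServers          5
        MaxSpareServers         10
        # Upper bound on simultaneous child processes
        # (called MaxClients on Apache 2.2)
        MaxRequestWorkers      150
        # 0 means children are never recycled
        MaxConnectionsPerChild   0
    </IfModule>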
As you can see here (Total memory used), the total memory used by Apache is 910 MB, out of the 1.4 GB allocated to it.
Yet each memory-demanding child uses at most 227 MB, which is not enough for each individual process and results in very long processing times: Bottleneck
UPDATE:
I've realized that I haven't told the whole story here, and as it stands this might not be an Apache issue per se. I'm using Apache with mod_wsgi to serve a Flask application that runs inside its own virtual environment (created with the virtualenv Python package). Maybe the limit I'm seeing is because memory is limited for the virtual environment? I've looked this up, but it doesn't seem like such a limit exists, at least not by default.
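For what it's worth, a typical mod_wsgi + virtualenv setup in daemon mode looks roughly like the sketch below (the app name and paths are placeholders, not my real ones). As far as I understand, neither this configuration nor the virtualenv itself imposes any per-process memory limit by default:

    # Daemon mode: the Flask app runs in its own pool of processes,
    # separate from the prefork children that handle other requests.
    # python-path points at the virtualenv's site-packages directory.
    WSGIDaemonProcess flaskapp processes=2 threads=15 python-path=/srv/flaskapp/venv/lib/python2.7/site-packages
    WSGIProcessGroup flaskapp
    WSGIScriptAlias / /srv/flaskapp/app.wsgi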