I'm running a production Django site on an Ubuntu Linode with 4 GB of RAM. The major processes are Apache2, MongoDB, Memcached, PostgreSQL, Tomcat6 and Redis.

Apache gets OOM-killed about 10 times a day. I've tweaked values in apache2.conf many times with no effect, and there is no obvious correlation between the number of requests and the memory spikes, or between the request paths and the spikes. I say 'spikes' because Apache normally consumes very little memory; then, within a single second, it jumps to 3.5 GB and is killed by the kernel. I haven't been able to trigger the spikes artificially with JMeter (load-testing software); memory consumption under load is normally low and stable.
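To tie specific request paths to memory growth more rigorously than eyeballing graphs and access logs, a Django middleware along these lines could log each request's effect on the worker's resident size (a sketch only: the class name and the 50 MB threshold are mine, psutil is assumed to be installed, it uses the newer callable-middleware API — older Django versions would need process_request/process_response instead — and it would still need adding to the middleware settings):

import logging
import os

import psutil  # assumed available; reading /proc/self/status would also work

logger = logging.getLogger('memwatch')

class MemoryDeltaMiddleware:
    """Hypothetical diagnostic middleware: log which request paths grow the worker."""

    def __init__(self, get_response):
        self.get_response = get_response
        self.process = psutil.Process(os.getpid())

    def __call__(self, request):
        before = self.process.memory_info().rss
        response = self.get_response(request)
        grew_mb = (self.process.memory_info().rss - before) / (1024 * 1024)
        if grew_mb > 50:  # arbitrary threshold for "unusual" growth in this app
            logger.warning('%s grew this worker by %.0f MB', request.path, grew_mb)
        return response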
24-hour graph of memory usage (from Linode Longview): https://i.stack.imgur.com/Heuax.png
It also looks like baseline memory usage is slowly climbing.
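To catch a spike as it happens and see exactly which apache2 process balloons, something like this sampler could run in the background at one-second resolution (a sketch: psutil is assumed to be installed and the 500 MB threshold is arbitrary):

import time

import psutil  # assumed available

THRESHOLD_MB = 500  # flag any apache2 worker above this resident size

while True:
    for proc in psutil.process_iter(['pid', 'name', 'memory_info']):
        if proc.info['name'] == 'apache2':
            rss_mb = proc.info['memory_info'].rss / (1024 * 1024)
            if rss_mb > THRESHOLD_MB:
                print(time.strftime('%Y-%m-%d %H:%M:%S'),
                      'pid', proc.info['pid'], round(rss_mb), 'MB', flush=True)
    time.sleep(1)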
From syslog:
kernel: apache2 invoked oom-killer:
...
kernel: 11705 total pagecache pages
kernel: 5472 pages in swap cache
kernel: Swap cache stats: add 76719087, delete 76713615, find 92563708/94246314
kernel: Free swap = 0kB
kernel: Total swap = 2097148kB
kernel: 1050623 pages RAM
kernel: 43278 pages reserved
kernel: 788996 pages shared
kernel: 999768 pages non-shared
...
kernel: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
kernel: [ 3709] 1000 3709 3706586 889237 7117 464598 0 apache2
...
kernel: Killed process 3709 (apache2) total-vm:14826344kB
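If I'm reading the kernel's numbers correctly (total_vm and rss are counted in 4 KiB pages), they match the spike: the killed worker held about 3.4 GB resident out of roughly 14.8 GB of virtual address space, with swap already exhausted.

PAGE_KB = 4  # the oom-killer reports total_vm and rss in 4 KiB pages
total_vm_pages = 3706586
rss_pages = 889237
print(total_vm_pages * PAGE_KB)                        # 14826344 kB virtual, matching the kill line
print(round(rss_pages * PAGE_KB / (1024 * 1024), 1))   # ~3.4 GB actually resident: the observed spike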
Current apache2.conf:
Timeout 30
KeepAlive Off
<IfModule mpm_prefork_module>
    StartServers          3
    MinSpareServers       2
    MaxSpareServers       5
    MaxClients           10
    MaxRequestsPerChild 1000
</IfModule>
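If the Django app is embedded in the Apache workers via mod_wsgi embedded mode, one change I haven't made yet is switching to daemon mode, which would move the Python interpreter into its own processes that can be sized and recycled independently of the prefork workers. A rough sketch, with placeholder names and paths:

# Sketch only: 'djangosite' and the script path are placeholders, not the real names.
# Daemon mode keeps the interpreter out of the prefork workers, so a leaking
# request inflates a dedicated process that maximum-requests will recycle.
WSGIDaemonProcess djangosite processes=2 threads=15 maximum-requests=500 display-name=%{GROUP}
WSGIProcessGroup djangosite
WSGIScriptAlias / /srv/djangosite/djangosite/wsgi.py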
Switching to Nginx is not an option. Most of the time the OOM kills don't take the whole system down, but every couple of weeks one does and the server needs a restart.

A: What might be causing this?
B: What diagnostic steps have I not taken yet to find the true cause?