I'm administering some Linux servers in EC2 that run an httpd-based application developed by one of our apps teams. Each host has 32GB of RAM. About once a day there is an OOM event on each of these hosts, and httpd ends up being killed. I've also noticed that httpd has mlocked 23GB of its memory, all of it in the file cache.

My question is whether there is any circumstance under which it is a good idea for a process to mlock all of its mmapped files. I can't think of one. I assume the developers are doing this to ensure that their actively written files stay in memory, but Linux should not evict those pages anyway unless it is really down to the wire. By mlocking everything, httpd ends up being the first thing the oom-killer targets.

I brought this up with the devs, and their response was "it's ok, it gets restarted automatically." But if httpd keeps getting killed (thereby dropping everything it had locked and forcing the system to page its files back in), doesn't that defeat the purpose of the mlock in the first place?
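For context, I don't have their source, but I assume the pattern is roughly equivalent to this sketch (the file path is made up):

```c
/* Rough sketch of what I assume the app is doing: map a data file and
 * then pin the whole mapping in RAM. The file path is hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/data/some-active-file.bin", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* From here on every page of the mapping is unevictable. Locking
     * ~23 GB this way needs CAP_IPC_LOCK or a huge RLIMIT_MEMLOCK. */
    if (mlock(p, st.st_size) < 0) { perror("mlock"); return 1; }

    /* ... the application works on p ... */

    munlock(p, st.st_size);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```

(Locked memory shows up as `VmLck` in `/proc/<pid>/status` and as `Mlocked` in `/proc/meminfo`, for anyone who wants to check their own hosts.)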
- Can you run some sort of memory profiler on the app to get a better idea? What sort of file system are you using? Is it NFS? If so, look into the EnableMMAP setting. There is also an MMapFile directive in mod_file_cache. – Tux_DEV_NULL Dec 08 '17 at 09:07
- Nooooo... definitely not NFS. This is a high-performance cluster that processes a lot of data coming over the wire. – Michael Martinez Dec 08 '17 at 14:10
- `mlock` is used to avoid page faults, so it is mainly there for performance reasons. It really depends on how much the delay incurred by page faults affects your application. `httpd` doesn't sound very performance-critical to me. Laser arm related applications might need to `mlock` everything. – HCSF Jan 22 '20 at 03:25
- But even if you don't mlock, Linux's memory management keeps actively used pages resident and avoids page faults on them anyway. – Michael Martinez Jan 22 '20 at 19:42
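To make the page-fault point in the comments concrete, here is a minimal sketch (not from the thread) that maps a file without locking it, touches it once, and then asks the kernel via `mincore()` how much of it is still resident in the page cache; the file path is a placeholder:

```c
#define _DEFAULT_SOURCE   /* for mincore() */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* Hypothetical default path; pass a real file as argv[1]. */
    const char *path = argc > 1 ? argv[1] : "/data/some-active-file.bin";

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touch every page once so it is faulted into the page cache. */
    long pagesz = sysconf(_SC_PAGESIZE);
    volatile char sink = 0;
    for (off_t off = 0; off < st.st_size; off += pagesz)
        sink += ((char *)p)[off];
    (void)sink;

    /* Ask the kernel which pages of the mapping are resident in RAM. */
    size_t pages = (st.st_size + pagesz - 1) / pagesz;
    unsigned char *vec = malloc(pages);
    if (vec == NULL || mincore(p, st.st_size, vec) < 0) {
        perror("mincore");
        return 1;
    }

    size_t resident = 0;
    for (size_t i = 0; i < pages; i++)
        if (vec[i] & 1)
            resident++;

    printf("%zu of %zu pages resident, no mlock involved\n", resident, pages);

    free(vec);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```

On a host that is not under memory pressure, the answer is typically the whole file: recently used file pages stay cached without any mlock. What mlock changes is that those pages can never be evicted, which, as described in the question, also makes httpd the first thing the oom-killer goes after.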