free -m
             total       used       free     shared    buffers     cached
Mem:          7996       2043       5952          0         73        140
-/+ buffers/cache:       1830       6165
Swap:         7812         15       7797
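For reference, a rough way to total up what the page cache could still use on this kernel is to sum MemFree, Buffers and Cached from /proc/meminfo (a sketch; the exact accounting differs slightly between kernel versions):

```shell
# Approximate memory available to the page cache on a 2.6.x kernel:
# MemFree + Buffers + Cached, all reported in kB by /proc/meminfo.
reclaimable_kb=$(awk '/^(MemFree|Buffers|Cached):/ {sum += $2} END {print sum}' /proc/meminfo)
echo "~$((reclaimable_kb / 1024)) MiB available for the page cache"
```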
nginx -v
nginx: nginx version: nginx/1.0.0
uname -a
Linux tr1 2.6.38-gentoo-r6 #4 SMP Tue Sep 27 11:24:13 EEST 2011 x86_64 Intel(R) Xeon(R) CPU E5620 @ 2.40GHz GenuineIntel GNU/Linux
cat /proc/version
Linux version 2.6.38-gentoo-r6 (root@tr1) (gcc version 4.4.5 (Gentoo 4.4.5 p1.2, pie-0.4.5) ) #4 SMP Tue Sep 27 11:24:13 EEST 2011
grep directio /etc/nginx/nginx.conf
[nothing]
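One thing to rule out: nginx.conf typically pulls in vhost files via include, so a directio or aio directive hiding in an included file would not show up in the grep above. A sketch that searches the whole config tree (adjust conf_root to the real install path):

```shell
conf_root=/etc/nginx                                  # adjust to your install
# -w: whole words only, so e.g. "maiority" in a comment does not match
hits=$(grep -rlw -E 'directio|aio' "$conf_root" 2>/dev/null)
echo "${hits:-no directio/aio found under $conf_root}"
```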
grep open_file /etc/nginx/nginx.conf
open_file_cache max=2000 inactive=3600s;
[...]
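Worth noting: open_file_cache only caches file descriptors and metadata (stat results, errors), not file *contents* — content caching is done by the kernel page cache regardless of this directive, which is why adding/removing it changes nothing here. For completeness, the usual pairing of directives looks something like this (a sketch; values are illustrative, not a recommendation):

```nginx
# open_file_cache caches fds/metadata only; file contents live in the
# kernel page cache independently of these settings.
open_file_cache          max=2000 inactive=3600s;
open_file_cache_valid    3600s;
open_file_cache_min_uses 1;
open_file_cache_errors   on;
```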
I am not aware of anything disabling the kernel page cache. I added and removed open_file_cache in nginx.conf, yet nginx seems to read everything directly from disk. We have a handful of other nginx machines with "identical" nginx configuration (PHP upstream load balancing plus static file delivery) which DO use the kernel page cache and show much less I/O load.
There is also an Apache instance running on the same machine, in this case.
iostat, iotop -o
These usually show permanent disk usage (unlike the handful of identical nginx load balancers on other machines), with nginx as the top I/O consumer (static file delivery).
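If sysstat is not installed, the same utilization figure iostat reports can be sampled by hand: field 13 of /proc/diskstats is the milliseconds the device spent doing I/O (a sketch; "sda" is an assumed device name, adjust as needed):

```shell
dev=sda                                               # assumed device name
t0=$(awk -v d="$dev" '$3==d {print $13}' /proc/diskstats)
sleep 1
t1=$(awk -v d="$dev" '$3==d {print $13}' /proc/diskstats)
# delta ms over a 1s interval, as a percentage -> iostat's %util
[ -n "$t0" ] && echo "$dev util over 1s: $(( (t1 - t0) / 10 ))%" \
             || echo "$dev not present"
```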
htop
Gives a clear picture of the free/buffers/cache situation and confirms what free reports: about 1 GB of memory in use by processes, some 6 GB unused, and only a small fraction of the remaining memory used by the kernel for caching (htop indicates that as the yellow part of the memory bar).
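A quick sanity check that the page cache is being populated at all: read a fresh file and watch Cached: in /proc/meminfo move (a self-contained sketch with a throwaway file; in practice, point it at one of nginx's static files — and note other activity on the box can skew the delta):

```shell
before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null   # create and cache 64 MiB
cat "$f" > /dev/null                                  # re-read, should hit cache
after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "Cached grew by $(( after - before )) kB"        # expect roughly +65536
rm -f "$f"
```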
We would like to find out why nginx is driving disk I/O to 100% while several gigabytes of RAM remain unused and available for the kernel page cache.
PS: As I said, we run a handful of similar PHP-upstream balancers that also serve static files via nginx, but only this one climbs to a high load average from excessive iowait, slowing everything else down.
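One more thing that would produce exactly this symptom: files being opened with O_DIRECT, which bypasses the page cache entirely. A sketch that inspects open-file flags via /proc/&lt;pid&gt;/fdinfo (flags are octal; O_DIRECT is 040000 on x86-64 — an arch-specific assumption — so the fifth octal digit from the right is 4-7 when it is set). The demo below checks the current shell's own fds; in practice loop over $(pidof nginx) instead:

```shell
direct_fds=0
for f in /proc/$$/fdinfo/*; do                   # replace $$ with an nginx worker pid
  flags=$(awk '/^flags:/ {print $2}' "$f")
  case "$flags" in
    *[4-7]????) direct_fds=$((direct_fds + 1)); echo "O_DIRECT: $f (flags $flags)" ;;
  esac
done
echo "fds opened with O_DIRECT: $direct_fds"
```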