For a few days now, we've been getting "Too many open files" errors from a Java application while it loads files. After some searching on Google, I increased fs.file-max to 200000 in /etc/sysctl.conf and then ran sysctl -p.
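For reference, the steps above amount to the following sketch (200000 is the value chosen above; editing /etc/sysctl.conf and running sysctl -p both require root, but reading the live value does not):

```shell
# Steps from the question, as a sketch:
#   1. Add "fs.file-max = 200000" to /etc/sysctl.conf (as root)
#   2. Reload the settings:  sysctl -p
# Verify the live system-wide limit (readable without root):
cat /proc/sys/fs/file-max
```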
But this didn't help. When I run cat /proc/sys/fs/file-nr, it returns 2550 0 200000. The first (four-digit) number varies, but the middle value has been 0 the whole time I've been watching.
What am I doing wrong here, and how can I fix this?
I'm running CentOS release 5.9 (Final) with a single SSD, so I don't think the disk is the problem. (It isn't full or failing, and it has been running fine for months.)
One more thing, though I'm not sure it's related: I can still create, edit, and delete files over SSH with nano/rm, and the Java application ran fine before this issue appeared.
Thanks.