
For the past few days we've been getting "Too Many Open Files" errors from a Java application while it loads files. So I started searching on Google and increased fs.file-max to 200000 in my /etc/sysctl.conf, then ran sysctl -p. But this did not help: when I run cat /proc/sys/fs/file-nr, it returns 2550 0 200000. The first number varies, but the middle value has always been 0 since I started looking.
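For completeness, here is roughly what I did and what I see now (the 200000 value is the one I set above):

    # /etc/sysctl.conf
    fs.file-max = 200000

    $ sysctl -p
    $ cat /proc/sys/fs/file-nr
    2550    0       200000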

What am I doing wrong here, and how can I fix this?

I'm running CentOS release 5.9 (Final) with a single SSD, so I don't think the disk is the problem. (It's not full or failing, and it has run fine for months.)

One more thing, though I'm not sure whether it's related: I can still create, delete, and edit files over SSH with nano/rm, and the Java application ran fine until this issue appeared.

Thanks.

Wouter0100

1 Answer


Forget the OS-level fs.file-max; what you want to change is the maximum number of file handles your current user/group is allowed to have per process. See ulimit -n.
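To explain why your file-nr output looks the way it does: its three fields are the number of allocated file handles, the number of allocated-but-unused handles, and the system-wide maximum. On 2.6 kernels the middle value is normally 0, so your output just says the system-wide limit is nowhere near exhausted. A quick sketch for checking the per-process side instead (run this as the user that starts the Java application; <pid> is a placeholder for its process ID):

    # soft limit on open files for the current shell
    ulimit -n

    # how many descriptors the Java process actually holds right now
    ls /proc/<pid>/fd | wc -l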

You may adjust this limit as root by modifying /etc/security/limits.conf. Also make sure the limit is applied by adding ulimit -SHn 8192 (or whatever limit you wish to have) to your ~/.bashrc.
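A minimal sketch of the limits.conf entries, assuming the application runs as a user called appuser (substitute your own user name and limit):

    # /etc/security/limits.conf
    appuser  soft  nofile  8192
    appuser  hard  nofile  8192

Note that limits.conf is applied via PAM at login, so a daemon started from an init script won't pick it up; in that case put the ulimit -SHn 8192 call in the service's start script instead.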

Janne Pikkarainen