
I have a few servers where the ulimit is set to 65536, while on some it is 200000. I am not sure which parameter this depends on. I referred to the question [On Linux - set maximum open files to unlimited. Possible?]

But it is still not clear to me: what decides the maximum limit of file descriptors that we can set? Does it depend on memory, the OS version, or something else? Or can I put any number above 1024?

  • And since we can edit /proc/sys/fs/file-max as well, is there no way to find the maximum file-descriptor limit that I can set and that the system can support without crashing? – Niceha Feb 19 '17 at 19:10

1 Answer


According to the proc(5) manpage, the fs.file-max sysctl is a system-wide limit on the number of open files across all processes. Privileged processes (those with CAP_SYS_ADMIN) can override the file-max limit, though.

To set the per-process limit, use the setrlimit(2) system call with RLIMIT_NOFILE. Be aware of the difference between the soft limit and the hard limit.

Ricardo Branco
  • My question is: what exactly is the limit I can set, and what parameters does it depend on? As I interpret it, you mean that running the setrlimit command will get me the max number, is that right? Because I found no such command. – Niceha Feb 21 '17 at 14:17
  • setrlimit() is a syscall, not a command. The corresponding shell builtin is ulimit. To view the soft limits: `ulimit -a`. To view the hard limits: `ulimit -Ha`. You may raise the soft limits up to the hard limits; only the superuser can raise the hard limits. To set the limits for a user you have to edit /etc/security/limits.conf. Look for "VFS: file-max limit reached" error messages in syslog if the limits are too small. The initial values depend on the amount of available memory. Run `help ulimit` to see the flags used to set each limit. – Ricardo Branco Feb 21 '17 at 15:07
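For reference, /etc/security/limits.conf entries mentioned in the comment above follow a `domain type item value` layout. The usernames and values below are hypothetical, purely to show the syntax for the `nofile` item:

```
# /etc/security/limits.conf
# <domain>   <type>   <item>    <value>
alice        soft     nofile    65536
alice        hard     nofile    200000
*            soft     nofile    8192
```

These limits are applied by the pam_limits module at login, so they take effect for new sessions, not for processes already running.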