
I've found a number of articles describing how to increase the limits for the number of open files through /etc/security/limits.conf, but I don't understand the impact of doing so. Many times I see people updating 1024 to 2048. Ok, those file handles must cost RAM or something. Why not increase it to 100000? What resource am I eating up with open files?

A question about how to increase the limits: https://stackoverflow.com/questions/34588/how-do-i-change-the-number-of-open-files-limit-in-linux
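For context on what these numbers look like in practice, here is a minimal Python sketch (Linux-specific, standard library only, added for illustration) that prints the per-process soft/hard nofile limits that limits.conf ultimately controls, plus the system-wide file-handle usage reported by /proc/sys/fs/file-nr:

```python
import resource

# Per-process limit on open file descriptors (RLIMIT_NOFILE).
# The soft limit is what the process actually hits; the hard limit is the
# ceiling an unprivileged process may raise its soft limit to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"per-process nofile: soft={soft}, hard={hard}")

# System-wide view: /proc/sys/fs/file-nr reports
# <allocated handles> <unused-but-allocated> <system-wide maximum>
with open("/proc/sys/fs/file-nr") as f:
    allocated, unused, sys_max = f.read().split()
print(f"kernel file handles in use: {allocated} (system max: {sys_max})")
```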

Adam
  • Please note that on Debian squeeze/wheezy, /etc/security/limits.conf will not work for system startup, as explained in http://superuser.com/a/459183/31281 and https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=302079 – ZaphodB Feb 21 '14 at 23:23

2 Answers


This is the limit on the number of files that a single process can have open at a time. Sockets, pipes, and terminals count too. There is almost no software in existence that can handle more than about 20,000 files open at a time, so there's no point in setting the limit higher than that.
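To make the "sockets, pipes, and terminals count too" point concrete, here is a small Linux-specific Python sketch (an illustration, not part of any particular application) showing that all of these consume entries in the same per-process descriptor table that the open-files limit applies to:

```python
import os
import socket

# Regular files, sockets, and pipes all occupy slots in the same
# per-process file-descriptor table, so all of them count against
# the RLIMIT_NOFILE ("open files") limit.
f = open("/etc/hostname")   # a regular file (path assumed present on Linux)
s = socket.socket()         # a TCP socket
r, w = os.pipe()            # a pipe: two more descriptors

# /proc/self/fd lists every descriptor this process currently holds,
# including stdin/stdout/stderr (0, 1, 2).
print(sorted(int(fd) for fd in os.listdir("/proc/self/fd")))

s.close()
f.close()
os.close(r)
os.close(w)
```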

David Schwartz
  • Excellent, thanks. I understood that these were limits, and I guess I was wondering what they were protecting against. We have a server hosting some very active apps that is pushing the 16k limit, so I wanted to check what the impact would be if we just let it continue to expand. I'll turn my attention to why we have so many darn files then. Thanks again. – Adam Feb 21 '14 at 19:34
  • @Adam If the number is growing because they're legitimately using more network connections or files, then you can keep raising it, at least until you hit the limit of what the software can handle. But if it's a file descriptor leak, that should be investigated and fixed. – David Schwartz Feb 21 '14 at 19:51
  • I don't believe it's a leak since, after peaking, the open files count from lsof seems to drop back down to more modest numbers. Thanks for the feedback. – Adam Feb 24 '14 at 22:16
  • @Adam That suggests strongly that it's legitimate usage and thus that raising the limit is appropriate. – David Schwartz Feb 25 '14 at 00:02
  • BTW, when you say that there is "no software in existence" do you mean no modern operating systems? – Adam Mar 10 '14 at 16:41
  • The operating systems can, it's the applications that get into trouble. – David Schwartz Mar 11 '14 at 02:09
  • Ok, well for future reference for users, we increased to 32k and Jenkins is using them w/o issue during builds. I still don't fully understand what this limit is for or why the default is so low, but I haven't found the *actual* hardware limit yet. – Adam Mar 11 '14 at 17:04
  • Update from the future: Nexus 3 now has a check and complains if you have less than 64k. – Adam Sep 19 '17 at 21:16
  • Note that some heavy-IO processes like Kafka may require more than that. For example, typical nofile max values are [recommended](https://docs.confluent.io/current/kafka/deployment.html#file-descriptors-and-mmap) above 100,000. – xmar Aug 07 '18 at 16:00
  • The 20K number is seriously out of date now. :) – David Schwartz Aug 08 '18 at 00:44
  • The answer is outdated; look at modern databases, for example ClickHouse. I think it should be updated. – Sonique Jun 16 '20 at 07:26
  • ElasticSearch recommends a production nofile limit of 65,535 – James McGuigan Jun 01 '22 at 01:14

Just to add that the value of nofile is capped by /proc/sys/fs/nr_open, and that ulimit uses setrlimit() to set the resource limit.
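As a rough sketch of how these pieces fit together (Python, Linux-specific, for illustration): the hard nofile limit can never exceed the kernel ceiling in /proc/sys/fs/nr_open, and raising the soft limit goes through the same setrlimit() call that ulimit uses.

```python
import resource

# Kernel ceiling: no hard RLIMIT_NOFILE value can exceed this.
with open("/proc/sys/fs/nr_open") as f:
    nr_open = int(f.read())

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"nr_open={nr_open}, soft={soft}, hard={hard}")

# `ulimit -n` ends up calling setrlimit(). An unprivileged process may raise
# its soft limit up to (but not beyond) its hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print("soft limit now:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```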

Prashant Lakhera