
Is there a way to calculate some kind of optimal maximum number of file handles for a Linux system?

The reason I'm asking is that I'm running a bunch of dockerized Elasticsearch containers and noticed that the host server has more than 2,000,000 file handles open when running `lsof`. Is this even a problem? Or is that approach way too blunt?
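
For context, a minimal sketch of the kind of check I mean (assuming a reasonably modern Linux with procfs mounted at `/proc`; the `/proc/sys/fs/file-nr` numbers are my guess at a less blunt measure than counting `lsof` lines):

```bash
# The blunt approach: lsof prints one line per (process, descriptor/mapping)
# entry, so shared handles can be counted many times over.
lsof 2>/dev/null | wc -l

# The kernel's own bookkeeping: allocated handles, free handles, and the
# system-wide maximum (fs.file-max).
cat /proc/sys/fs/file-nr

# The limits currently in force.
sysctl fs.file-max   # system-wide ceiling on open file handles
ulimit -n            # per-process soft limit for this shell
```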

I'm guessing that certain parts of the server (the filesystem, for example) can handle more open file handles than the network adapter can?

I hope the main question is clear; I searched but found no answer. Thanks in advance.

  • Is this a software development question or a system administration question? The latter would be a better fit for [unix.se] or [ServerFault](https://serverfault.com/) – Charles Duffy Feb 27 '19 at 22:05
  • ...backing up, is there an actual *problem* you were investigating when you ran across this, or is it something closer to idle curiosity? – Charles Duffy Feb 27 '19 at 22:06
  • Also, if you're just counting lines in `lsof` output, it's very likely you're seeing each thread's copy of each handle as if it were a separate handle, and getting a dramatic overcount in response (by a factor of hundreds if your thread pools are large). – Charles Duffy Feb 27 '19 at 22:07
  • Welcome to StackOverflow! Are you aware of `ulimit` and `sysctl`? @CharlesDuffy I suppose the OP is a developer who doesn't know the basics of system administration, but I agree with you that this question is better suited for ServerFault/SuperUser/etc – Alex Yu Feb 27 '19 at 22:10
  • There are no separate limits for filesystem, network, etc. There's one big open file table in the kernel, and each process has a file descriptor table, where each descriptor is a reference to an open file. – Barmar Feb 27 '19 at 22:10
  • [here](https://www.usna.edu/Users/cs/aviv/classes/ic221/s16/lec/21/lec.html) are notes from a lecture that explains the file table. – Barmar Feb 27 '19 at 22:11
  • Thanks for the welcome and the replies. The problem was that the server appeared sluggish at times and I'm trying to find the root cause. I am aware of `ulimit` and `sysctl`; however, I'm no expert. @CharlesDuffy the information about duplicates was interesting; I always assumed that `lsof` just listed everything under /proc/. Barmar, great link, thanks! I'm updating the main question with an answer I found. – DrWily Feb 28 '19 at 06:59
  • @DrWily, you're right, the content is also reflected under proc, but if two processes are forked from each other, they can have two separate handles pointing to the same kernelspace object, so counting proc entries doesn't tell you an accurate result as to how many distinct handles there are in the kernel; instead, it tells you how many *file descriptors* processes have pointing to those handles. – Charles Duffy Feb 28 '19 at 13:46
  • @DrWily, answers don't go in questions. Use the "Add an Answer" button if you want to add an answer yourself -- that way it's subject to all the same rules and voting procedures as any other answer (and you can get credit from having it upvoted, separate and apart from the question). – Charles Duffy Feb 28 '19 at 14:10
  • (Moreover, file descriptors can be sent between processes over UNIX domain sockets, so you can have multiple processes sharing descriptors pointing to the same handle even *without* a parent/child relationship if the developer was crafty enough; this is how some daemons, like haproxy, are able to be upgraded to a new software release without dropping connections). – Charles Duffy Feb 28 '19 at 16:15
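
Following up on the comments about per-thread duplication in `lsof` output and the `ulimit`/`sysctl` pointers, a minimal sketch (assuming root access and procfs mounted at `/proc`) of counting descriptors per process straight from procfs and checking the limits that actually apply:

```bash
# Count open file descriptors per process directly from /proc; this avoids
# the per-thread duplication you get by counting raw lsof lines.
# Run as root so other users' processes (e.g. the containers) are visible.
for pid in /proc/[0-9]*; do
  n=$(ls "$pid/fd" 2>/dev/null | wc -l)
  printf '%8d %s\n' "$n" "$(cat "$pid/comm" 2>/dev/null)"
done | sort -rn | head   # the biggest descriptor consumers

ulimit -n                   # per-process soft limit for this shell
cat /proc/sys/fs/nr_open    # ceiling for any single process's limit
cat /proc/sys/fs/file-nr    # allocated handles, free handles, fs.file-max
```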
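
And to see the descriptor-vs-handle distinction from the comments in action, a small bash experiment (a sketch; it relies on the background child inheriting FD 3, which bash does by default): after the fork, the parent's and the child's descriptor 3 refer to one shared open file description, so a read in the parent moves the offset the child sees:

```bash
exec 3< /etc/hostname            # open FD 3 in this shell
sleep 30 &                       # the background child inherits FD 3
grep '^pos:' /proc/$!/fdinfo/3   # child's offset: 0
read -r -u 3 line                # read a line through the parent's FD 3
grep '^pos:' /proc/$!/fdinfo/3   # child's offset advanced too -- two
                                 # descriptors, one kernel-side handle
exec 3<&-                        # close FD 3 again
```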

0 Answers