I was running into "too many open files" problems with Tomcat 7 on Ubuntu 12, so I increased the hard and soft limits on the number of open files from 4096 and 1024, respectively, to 16384. Now I'm no longer getting errors about open files, but overall CPU% seems to have risen. Does increasing the maximum number of open files also have some cost in CPU time? If not, why not set the ulimit extremely high?
I regularly set my open file descriptor limits to a million. I have never noticed any performance impact (save, of course, that my program starts to work again). – phs Feb 06 '14 at 01:52
1 Answer
The entire reason ulimit exists is to protect the overall performance of the system by preventing a process from using up more resources than are "normal".
"Normal" can differ depending on what you are doing, but setting the limits extremely high by default would defeat the purpose of ulimit and allow any process to use up ridiculous amounts of resources. On a server without interactive users this is less critical than in a big multiuser environment, but it's still a useful safeguard against buggy or exploited processes.
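As an illustration of how the soft/hard split works, here is a minimal sketch using Python's standard `resource` module (any process can query its own limits this way; the specific numbers you see will depend on your system, not on the values from the question):

```python
import resource

# Query the current per-process open-file limits (soft, hard).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft}, hard={hard}")

# An unprivileged process may raise its own soft limit, but only up to
# the hard limit set by the administrator (e.g. in limits.conf).
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
```

This is exactly the safeguard described above: the hard limit is the administrator's ceiling, and a buggy process cannot exceed it on its own.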
Your CPU usage probably went up simply because your computer is now doing more work instead of erroring out.
PS - You also want to be sure there isn't something wrong in your Tomcat environment. It might be OK to have thousands of open files, I don't know your application, but it could also be a sign of something buggy. If it is, you have just allowed the bug's effect to become potentially much worse :( If you can explain why Tomcat needs thousands of files open, cool, but if not... yikes.
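One quick way to check how many files a process actually has open is to count the entries in `/proc/<pid>/fd` (a Linux-specific sketch; substitute Tomcat's real PID when inspecting Tomcat):

```python
import os

def count_open_fds(pid=None):
    """Count open file descriptors by listing /proc/<pid>/fd (Linux only)."""
    pid_part = "self" if pid is None else str(pid)
    return len(os.listdir(f"/proc/{pid_part}/fd"))

# Inspect the current process; pass Tomcat's PID to inspect Tomcat instead.
print(count_open_fds())
```

Running this periodically against the Tomcat process would show whether the open-file count tracks your traffic (expected) or grows without bound (a leak).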

Yes, it is expected to have thousands of open connections - we typically have 2000-3500 simultaneous users, with spikes of 4500. We do long-lived connections with 30-second keepalive times to avoid the overhead of repeated SSL handshakes. Thanks for your answer! – Jesse Barnum Feb 06 '14 at 13:51
Thousands of open sockets/concurrent users don't have to equal thousands of open files, but if they do in your application, then that would explain why the default limit of 4096 caused issues. If the problem doesn't come back until you hit 16000 users, I wouldn't worry about it. – user3109924 Feb 06 '14 at 21:05
@user3109924 My understanding is that a socket is nothing but a special **file** used for communication. When you create a socket on Unix, you get a file descriptor as the return value. So I don't understand why open sockets/connections wouldn't mean open file descriptors? Do web servers solving the 10K problem have an open file descriptor limit higher than 10K? – Mikki Oct 27 '16 at 10:05
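On Unix that understanding is correct: each socket does occupy one slot in the process's descriptor table, and exhausting the soft limit fails with `EMFILE`, the "too many open files" error from the question. A small sketch (Python, assuming a Unix system; the limit of 32 is an arbitrary low value chosen for the demonstration):

```python
import errno
import resource
import socket

# A socket is an ordinary entry in the process's file descriptor table.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print(s.fileno())  # a plain non-negative file descriptor number

# Temporarily lower the soft limit, then open sockets until it is hit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (32, hard))

socks, err = [], None
try:
    while True:
        socks.append(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
except OSError as exc:
    err = exc  # EMFILE: "Too many open files"
finally:
    for sk in socks:
        sk.close()
    s.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```

So yes, a server holding 10K concurrent sockets needs a descriptor limit above 10K (plus headroom for log files, JARs, and other ordinary files). The commenter's point above is only that socket count and concurrent-user count need not be one-to-one, e.g. with connection multiplexing.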