
Is it possible to increase the "Max open files" parameter for a running process? I mean this parameter:

cat /proc/<pid>/limits | grep files

Thanks for your advice.

wako
  • Additional info: my process is 'java'. I need to increase “Max open files” without stopping the process. – wako Sep 17 '10 at 12:00

8 Answers


Another option is to use the prlimit command (from the util-linux package). For example, if you want to set the maximum number of open files for a running process to 4096:

prlimit -n4096 -p pid_of_process
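
A slightly fuller sketch, assuming a placeholder pid of 1234; the long options are equivalent to the short one above, and a soft:hard pair can be given (raising the hard limit above its current value requires root):

grep "Max open files" /proc/1234/limits    # check the current soft and hard values
prlimit --nofile=8192:8192 --pid 1234      # set both the soft and the hard limit
prlimit --pid 1234                         # print all limits again to confirm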

rkachach
  • I'm using Linux Mint (Ubuntu-based) and I can't find it in the default installation. Normally it's part of the util-linux package, but even this package on Mint doesn't include the command. – rkachach Aug 24 '16 at 07:15
  • ➜ ~ lsb_release -d Description: Ubuntu 16.04.1 LTS ➜ ~ whereis prlimit prlimit: /usr/bin/prlimit /usr/share/man/man2/prlimit.2.gz /usr/share/man/man1/prlimit.1.g – t2d Aug 24 '16 at 11:05
  • 1
    Cant find prlimit on CentOS 6. – Arvy Jun 22 '17 at 23:59
  • Try installing the util-linux RPM or the equivalent package. – rkachach Jun 23 '17 at 07:49

As a system administrator: The /etc/security/limits.conf file controls this on most Linux installations; it allows you to set per-user limits. You'll want a line like myuser - nofile 1000.
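
In that line the - type sets both the soft and hard limit at once. A sketch with a hypothetical user and separate values (it applies at the user's next login):

# /etc/security/limits.conf
myuser    soft    nofile    4096
myuser    hard    nofile    8192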

Within a process: The getrlimit and setrlimit calls control most per-process resource allocation limits. RLIMIT_NOFILE controls the maximum number of file descriptors. You will need appropriate permissions to call it.
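
The shell's ulimit builtin is a thin wrapper around setrlimit for the shell's own process, and the limit is inherited across fork/exec, so one sketch of starting a process with a higher limit (the java command line is a placeholder):

bash -c 'ulimit -n 8192 && exec java -jar app.jar'   # 8192 may not exceed the hard limit unless run with sufficient privilege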

  • It seems that this is what I need. Can I set "setrlimit(RLIMIT_NOFILE,...)" for some outside process? – wako Sep 17 '10 at 13:24
  • I don't know of any. If there is, I suspect you'll find it buried deep in some Linux-specific programming guide, because I can't fathom where else standard POSIX would put it. –  Sep 18 '10 at 21:01
  • 3
    Sometimes `limits.conf` is not taken into account and you have to set `DefaultLimitNOFILE` in `/etc/systemd/system.conf` and `user.conf`. – Xdg Feb 19 '18 at 08:11
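
A minimal sketch of the systemd override mentioned in the comment above; the value is only an example, and the manager has to re-read its configuration (reboot or systemctl daemon-reexec) and the affected services be restarted before it applies:

# /etc/systemd/system.conf (and /etc/systemd/user.conf for user services)
[Manager]
DefaultLimitNOFILE=65536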

You could use gdb, break into the process, call the aforementioned syscalls to raise the limit you're interested in, then continue the job and exit gdb. I've edited things on the fly this way a few times.

Your app wouldn't be down, just frozen for a moment while you performed the call. If you're quick (or you script it!), it'll likely not be noticeable.
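
A sketch of such a session, under several assumptions: 1234 is a placeholder pid, RLIMIT_NOFILE is 7 on Linux, struct rlimit is two 8-byte fields on 64-bit x86, and raising the hard limit above its current value only succeeds if the target process has the privilege to do so:

$ gdb -p 1234
(gdb) set $rlim = (unsigned long *) malloc(16)   # allocate a struct rlimit inside the target
(gdb) set $rlim[0] = 8192                        # rlim_cur (soft limit)
(gdb) set $rlim[1] = 8192                        # rlim_max (hard limit)
(gdb) call (int) setrlimit(7, $rlim)             # 7 = RLIMIT_NOFILE; returns 0 on success
(gdb) detach
(gdb) quit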

lornix
  • +1 Great suggestion, [this post](http://superuser.com/a/441758/81173) and [this blog](http://gregchapple.com/updating-ulimit-on-running-linux-process/) describe how to do it. – Steve Kehlet Jul 09 '14 at 17:48
  • Yeah, this is somewhat like tweaking the Matrix to make things work... stop the world, edit a variable, continue the world... oh! DeJaVu! I am the One! (well, when I have gdb!) – lornix Jul 09 '14 at 21:15
  • 4
    On modern Linux distro, you can use `prlimit` – Mircea Vutcovici Jun 13 '16 at 21:19
Writing the new limit straight into /proc/$pid/limits works on RHEL 5.5 and RHEL 6.7 (the two numbers are the soft and hard limits):

echo -n "Max open files=20:20" > /proc/$pid/limits

Note that the "-n" is mandatory; a trailing newline will generate a complaint about invalid arguments.


This link details how to change this system-wide or per user.

Many applications, such as the Oracle database or the Apache web server, need this limit to be considerably higher. You can increase the maximum number of open files by setting a new value in the kernel variable /proc/sys/fs/file-max as follows (log in as root):

$ sysctl -w fs.file-max=100000

To make the setting persist across reboots, edit the /etc/sysctl.conf file and add the following line (matching the value set above):
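
fs.file-max = 100000

Running sysctl -p afterwards reloads the file without a reboot.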

Brian Agnew
  • Thanks for the link, but I'd found it before asking :-) The problem is to change the parameter at runtime, without stopping the process, and per process. – wako Sep 17 '10 at 11:59
  • @wako: It can't be done from outside the process (unless you're running one of the very latest development kernels, which is unlikely) – caf Sep 17 '10 at 12:17
  • @caf: Thanks. This is exactly what I wanted to hear. – wako Sep 17 '10 at 13:20
  • Look at @rkachach's comment on the answer above. Use the prlimit command. – Ziferius Aug 19 '21 at 20:59

Yes, it is possible to increase the limits in /proc/<pid>/limits at runtime. Just find the pid and run the command below:

echo -n "Max open files=20:20" > /proc/$pid/limits
SecureTech

The following commands show the resource limits currently in effect for your shell, including the maximum number of open files (soft and hard limits respectively):

ulimit -Sa
ulimit -Ha

You can use a program or a command to change these limits. Look at ulimit (man ulimit).
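
For the open-files limit specifically, a sketch with the -n flag (8192 is just an example value; the change affects the current shell and anything started from it):

ulimit -Sn          # current soft limit on open files
ulimit -Hn          # current hard limit on open files
ulimit -n 8192      # set both; exceeding the current hard limit requires root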

Ravi R

On Ubuntu 16.04, with a rethinkdb process running, none of these solutions worked.

I kept getting the error accept() failed: Too many open files.

What ultimately worked was this in my /etc/security/limits.conf file. Note the nproc entries in addition to the nofile ones. As I understand it, root needs to be specified separately.

*                soft    nofile          200000
*                hard    nofile          1048576
root             soft    nofile          200000
root             hard    nofile          1048576
*                soft    nproc           200000
*                hard    nproc           1048576
root             soft    nproc           200000
root             hard    nproc           1048576

You can see the system-wide maximum number of open files by running cat /proc/sys/fs/file-max. I just set mine to a high maximum, well within reason for the size of the server.

You can verify the max open files your process is allowed by running cat /proc/{your-pid}/limits.

Helpful post: https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a

Nick Woodhams