Is it possible to increase the "Max open files" limit for a running process? I mean this parameter:
cat /proc/<pid>/limits | grep files
Thanks for your advice.
Another option is to use the prlimit command (from the util-linux package). For example, if you want to set the maximum number of open files for a running process to 4096:
prlimit -n4096 -p pid_of_process
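To confirm the new limit took effect, you can read it back; for example (1234 stands in for your pid):
prlimit --nofile --pid 1234
grep "Max open files" /proc/1234/limits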
As a system administrator: The /etc/security/limits.conf file controls this on most Linux installations; it allows you to set per-user limits. You'll want a line like:
myuser - nofile 1000
Within a process: The getrlimit and setrlimit calls control most per-process resource limits. RLIMIT_NOFILE controls the maximum number of file descriptors. You will need appropriate permissions (CAP_SYS_RESOURCE) to raise the hard limit.
You could attach gdb to the process, call the aforementioned syscalls to raise the limit you're interested in, then detach and let it continue. I've edited things on the fly this way a few times.
Your app wouldn't be down, just frozen for a moment while you performed the call. If you're quick (or you script it!), it'll likely not be noticeable.
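A rough sketch of such a session (1234 and 4096 are placeholders; this assumes x86-64 Linux, where RLIMIT_NOFILE is 7 and struct rlimit is two unsigned longs, and that the target links libc so malloc can be called):
sudo gdb -p 1234
(gdb) set $rlim = (unsigned long *) malloc(16)   # scratch space for a struct rlimit inside the target
(gdb) call (int) getrlimit(7, $rlim)             # 7 = RLIMIT_NOFILE; fills $rlim[0]=soft, $rlim[1]=hard
(gdb) set $rlim[0] = 4096                        # raise the soft limit (keep it <= $rlim[1] unless the target has CAP_SYS_RESOURCE)
(gdb) call (int) setrlimit(7, $rlim)             # returns 0 on success
(gdb) detach
(gdb) quit
On kernels that support it, the prlimit command mentioned in another answer does the same thing without a debugger.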
echo -n "Max open files=20:20" > /proc/$pid/limits
...works in RHEL5.5 and RHEL6.7.
Note that the "-n" is mandatory; a trailing newline will generate a complaint about invalid arguments.
This link details how to change this system-wide or per user.
Many applications, such as the Oracle database or the Apache web server, need a much higher limit. You can increase the system-wide maximum number of open files by setting a new value in the kernel variable /proc/sys/fs/file-max as follows (log in as root):
$ sysctl -w fs.file-max=100000
To make the setting survive a reboot, edit /etc/sysctl.conf and add the following line:
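fs.file-max = 100000
Then apply the change without rebooting:
$ sysctl -p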
Yes, it is possible to increase the limits shown in /proc/<pid>/limits at run time. Just find the pid and run the command below:
echo -n "Max open files=20:20" > /proc/$pid/limits
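For example, to find the pid by name and check the result (the process name myserver and the value 4096 are placeholders; as another answer notes, writing to /proc/<pid>/limits only works on kernels that allow it, such as those RHEL releases):
pid=$(pgrep -o myserver)
echo -n "Max open files=4096:4096" > /proc/$pid/limits
grep "Max open files" /proc/$pid/limits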
The following commands list the current soft and hard limits, respectively, including the per-process maximum number of open files:
ulimit -Sa
ulimit -Ha
You can change these limits from a program or from the shell; look at ulimit (man ulimit).
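For the open-files limit specifically (4096 is just an example value; without extra privileges the soft limit can only be raised up to the hard limit):
ulimit -Sn        # soft limit on open files
ulimit -Hn        # hard limit on open files
ulimit -n 4096    # raise the soft limit for this shell and anything it starts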
On Ubuntu 16.04, with a rethinkdb process running, none of these solutions worked for me.
I kept getting the error: accept() failed: Too many open files.
What ultimately worked was the following in my /etc/security/limits.conf file. Note the nproc entries in addition to nofile. As I understand it, root needs to be specified separately.
* soft nofile 200000
* hard nofile 1048576
root soft nofile 200000
root hard nofile 1048576
* soft nproc 200000
* hard nproc 1048576
root soft nproc 200000
root hard nproc 1048576
You can see the system-wide maximum by running cat /proc/sys/fs/file-max. I just set mine to a high value that is still reasonable for the size of the server.
You can verify the maximum number of open files your process is allowed by running cat /proc/{your-pid}/limits.
Helpful post: https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a