I am having an issue with the open files limit per process on FreeBSD 10.1-RELEASE. To demonstrate it, I wrote two Python scripts: one generates dummy files and the other opens them all. The scripts are available at:
- Generator: https://gist.github.com/juniorh/ef9273911dee551f1048
- Loader: https://gist.github.com/juniorh/3b2fb0a80cddb8e407b3
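Roughly, the loader just keeps a reference to every file object it opens, so the descriptors stay in use. A minimal sketch of that approach (not the exact gist code; the directory layout is my assumption based on the error path below):

import os

handles = []  # keep references so the descriptors stay open
count = 0

# Walk the generated tree and open every file, holding on to the handle.
for dirpath, dirnames, filenames in os.walk('/tmp/1'):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            handles.append(open(path, 'r'))
            count += 1
        except IOError as e:
            print('failed after %d open files: %s' % (count, e))
            raise

print('opened %d files' % count)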
To generate 1M files, run:
# mkdir -p /tmp/1
# python dummyFileGenerator.py -d /tmp/1 -n 1000000 -r 1
Then load all the files with:
# python dummyFileLoader.py -d /tmp/1 -n 1000000 -r 1
The open file error shows up after 32766 files have been loaded:
retry openfile 32766
Traceback (most recent call last):
File "dummyFileLoader.py", line 74, in <module>
File "dummyFileLoader.py", line 59, in openfile
IOError: [Errno 24] Too many open files: '/tmp/1/00/00/7f/fd'
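A quicker way to check where a single process hits the wall, without generating a million files, might be to keep opening the same file until open() fails (my own quick check, not part of the gists):

handles = []
try:
    while True:
        handles.append(open('/dev/null', 'r'))
except IOError as e:
    print('open() failed after %d descriptors: %s' % (len(handles), e))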
I tried raising kern.maxfiles and kern.maxfilesperproc, but that did not solve it:
# sysctl -a|grep maxfiles
kern.maxfiles: 1000000
kern.maxfilesperproc: 1000000
# ulimit -a
cpu time (seconds, -t) unlimited
file size (512-blocks, -f) unlimited
data seg size (kbytes, -d) 33554432
stack size (kbytes, -s) 524288
core file size (512-blocks, -c) unlimited
max memory size (kbytes, -m) unlimited
locked memory (kbytes, -l) unlimited
max user processes (-u) 6670
open files (-n) 58284
virtual mem size (kbytes, -v) unlimited
swap limit (kbytes, -w) unlimited
sbsize (bytes, -b) unlimited
pseudo-terminals (-p) unlimited
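The limit can also be read from inside the Python process itself with the resource module, to rule out a per-process soft limit that differs from what the shell reports (a quick sanity check, not part of the loader script):

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print('RLIMIT_NOFILE soft=%d hard=%d' % (soft, hard))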
Any idea what is limiting the process to 32766 open files?