
I'm having an issue with the per-process open files limit on FreeBSD 10.1-RELEASE. To demonstrate it, I wrote a Python script that generates dummy files and a second script that opens them. The scripts are available at

To generate 1M files, run:

# mkdir -p /tmp/1
# python dummyFileGenerator.py -d /tmp/1 -n 1000000 -r 1
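
The actual dummyFileGenerator.py is linked above rather than pasted here, so this is only a hypothetical sketch of what it might do, assuming the files are spread across nested two-hex-digit directories (matching the path /tmp/1/00/00/7f/fd in the traceback below); generate_dummy_files is an illustrative name of my own:

import os

def generate_dummy_files(base_dir, count):
    # Spread files across nested two-hex-digit directories so that no
    # single directory has to hold all 1M entries.
    for i in range(count):
        parts = ['%02x' % ((i >> shift) & 0xff) for shift in (24, 16, 8, 0)]
        dir_path = os.path.join(base_dir, *parts[:-1])
        if not os.path.isdir(dir_path):
            os.makedirs(dir_path)
        with open(os.path.join(dir_path, parts[-1]), 'w') as f:
            f.write('dummy\n')

generate_dummy_files('/tmp/1', 1000000)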

Then, load all the files with:

# python dummyFileLoader.py -d /tmp/1 -n 1000000 -r 1
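
dummyFileLoader.py is likewise not pasted here; a minimal hypothetical sketch, assuming it simply opens every file and holds on to the handle (which is what exhausts the per-process descriptor table); load_files is an illustrative name:

import os

def load_files(base_dir, count):
    handles = []
    for i in range(count):
        parts = ['%02x' % ((i >> shift) & 0xff) for shift in (24, 16, 8, 0)]
        # Keep every handle open on purpose so descriptor usage keeps growing.
        handles.append(open(os.path.join(base_dir, *parts), 'r'))
    return handles

handles = load_files('/tmp/1', 1000000)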

The open-file error appears after loading 32766 files:

retry openfile 32766
Traceback (most recent call last):
  File "dummyFileLoader.py", line 74, in <module>
  File "dummyFileLoader.py", line 59, in openfile
IOError: [Errno 24] Too many open files: '/tmp/1/00/00/7f/fd'

I tried raising the maxfiles sysctls, but it didn't solve the problem:

# sysctl -a|grep maxfiles
kern.maxfiles: 1000000
kern.maxfilesperproc: 1000000
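
(In case it matters for reproducing this: the values above can be made persistent across reboots by putting them in /etc/sysctl.conf.)

kern.maxfiles=1000000
kern.maxfilesperproc=1000000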

# ulimit -a
cpu time               (seconds, -t)  unlimited
file size           (512-blocks, -f)  unlimited
data seg size           (kbytes, -d)  33554432
stack size              (kbytes, -s)  524288
core file size      (512-blocks, -c)  unlimited
max memory size         (kbytes, -m)  unlimited
locked memory           (kbytes, -l)  unlimited
max user processes              (-u)  6670
open files                      (-n)  58284
virtual mem size        (kbytes, -v)  unlimited
swap limit              (kbytes, -w)  unlimited
sbsize                   (bytes, -b)  unlimited
pseudo-terminals                (-p)  unlimited

Any idea what's going on?

1 Answer


Did you confirm that Python is seeing the correct limits as well?

import resource
# getrlimit returns a (soft limit, hard limit) tuple
print resource.getrlimit(resource.RLIMIT_NOFILE)
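
If the soft limit it reports is lower than expected, a minimal sketch for raising it to the hard limit from inside the script before opening any files (raising the hard limit itself requires root):

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# Lift this process's soft limit up to the hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print resource.getrlimit(resource.RLIMIT_NOFILE)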
Lucas Holt