I am seeing a lot of "too many open files" exceptions during the execution of my program. They typically occur in the following form:
```
org.jboss.netty.channel.ChannelException: Failed to create a selector.
...
Caused by: java.io.IOException: Too many open files
```
However, those are not the only exceptions: I have observed other ones (also caused by "too many open files"), but those are much less frequent.
Strangely enough, I have set the open-files limit of the screen session (from which I launch my programs) to 1M:
```
root@s11:~/fabiim-cbench# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1000000    <-- note
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
```
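One thing I am aware may be worth double-checking (a sketch, assuming a Linux `/proc` filesystem): `ulimit -a` reports the *shell's* limits, but what matters is the limit of the running Java process itself, which can be lower if the JVM was started via `su`, `sudo`, an init script, or a service wrapper subject to PAM's `limits.conf`:

```shell
# ulimit -a shows the shell's limits; the JVM carries its own copy,
# which may differ. $$ is a stand-in here -- substitute the PID of
# the Floodlight JVM (found e.g. with jps or pgrep).
PID=$$
grep 'Max open files' "/proc/$PID/limits"
```

If the "Max open files" line for the JVM shows a much smaller value (1024 is a common default), the 1M setting never reached the process.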
Moreover, as observed in the output of lsof -p, I see no more than 1111 open files (sockets, pipes, files) before the exceptions are thrown.
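To see whether descriptors climb gradually (a leak) or jump all at once when the stress test starts, the kernel's own count under `/proc` can be sampled directly (a minimal sketch; again `$$` stands in for the Floodlight JVM's PID):

```shell
# Sample the process's descriptor count once per second. lsof can lag
# behind a fast burst; /proc/<pid>/fd is the kernel's authoritative list.
PID=$$
for i in 1 2 3; do
    echo "$(date +%T) fds=$(ls "/proc/$PID/fd" | wc -l)"
    sleep 1
done
```

Running this in a loop alongside the benchmark shows exactly when the count approaches the limit, which can then be correlated with the first ChannelException in the logs.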
Question: What is wrong, and/or how can I dig deeper into this problem?
Extra: I am currently integrating Floodlight with bft-smart. In a nutshell, the Floodlight process is the one crashing with "too many open files" exceptions while executing a stress test launched by a benchmark program. The benchmark program maintains 64 TCP connections to the Floodlight process, which in turn should maintain at least 64 * 3 TCP connections to the bft-smart replicas. Both programs use Netty to manage these connections.
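For reference, a way to sanity-check the expected connection counts (64 inbound plus roughly 64 * 3 outbound) is to count the socket descriptors the process actually holds. This is a sketch assuming Linux, with `$$` again standing in for the Floodlight JVM's PID:

```shell
# Each socket appears under /proc/<pid>/fd as a symlink to "socket:[inode]",
# so counting those links counts the sockets (TCP and unix domain alike).
PID=$$
SOCKETS=$(ls -l "/proc/$PID/fd" 2>/dev/null | grep -c 'socket:')
echo "sockets held: $SOCKETS"
```

If this count is far above the expected ~256 connections, descriptors are being consumed by something beyond the connections themselves; the "Failed to create a selector" message suggests the non-socket descriptors (selectors, pipes) opened alongside each connection are worth counting too.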