You need to ensure that you `close()` the files after use. They will eventually be closed by the garbage collector (although this can differ between implementations), but if you process many files without closing them, you can run out of file descriptors before the garbage collector has any chance to run.
Another thing you can do is use a try-with-resources statement, which ensures that every resource you declare in the parenthesized group is a `Closeable` resource that will be forcibly `close()`d on exit from the `try` block, whether it exits normally or with an exception.
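A minimal sketch of that pattern (the class and helper names are made up for illustration):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TryWithResources {
    // Reads the first line of a file; the reader is closed reliably.
    static String firstLine(Path file) throws IOException {
        // The reader declared in the parentheses is close()d automatically
        // when the try block exits, even if readLine() throws.
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            return reader.readLine();
        }
        // No finally block needed: close() has already been called here.
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "hello\nworld\n");
        System.out.println(firstLine(tmp)); // prints "hello"
        Files.delete(tmp);
    }
}
```

This way each file's descriptor is released as soon as you are done with it, instead of piling up until a garbage collection happens.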
Anyway, if you want to raise the maximum number of open files per process, look at your shell's man page (most probably `bash(1)`) and search it for the `ulimit` command. There's no `ulimit` manual page, as it is a builtin command of the shell; the `ulimit` values are per process, so you cannot start a separate process to change your own process's limits.
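As a sketch, in bash the relevant invocations look like this (the value 4096 is just an example, and a non-root user cannot raise the soft limit above the hard limit):

```shell
# Show the current soft limit on open file descriptors for this shell
ulimit -S -n

# Show the hard limit, which only root can raise
ulimit -H -n

# Raise the soft limit for this shell; the change is inherited only
# by processes started from this shell afterwards
ulimit -S -n 4096
```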
Beware that the most common Linux distributions don't normally have a way to configure an enforced per-user value for this (BSD systems have a full set of options to do things like this, but Linux's `login(8)` program has not implemented the `/etc/login.conf` feature for it), and raising this value arbitrarily can be a security problem if your system runs as a multiuser system.