I'm writing a piece of server software in C++ and intend to have each instance manage 10k-20k simultaneous connections. It already works and handles plenty of remote connections per second that instantly close themselves, but if the server ever passes 1024 simultaneous connections it chokes, because the process is limited to that many open file descriptors.
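(For reference, I understand a process can raise its own soft limit up to its hard limit with setrlimit; below is a minimal sketch of that kind of call, not my actual server code. Since the hard limit my process inherits is apparently also stuck low (see below), this alone wouldn't solve the problem.)

```
// Minimal sketch: raise the soft RLIMIT_NOFILE up to the hard limit at startup.
// An unprivileged process can only raise the soft limit as far as the hard limit.
#include <sys/resource.h>
#include <cstdio>

bool raise_fd_limit() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        std::perror("getrlimit");
        return false;
    }
    rl.rlim_cur = rl.rlim_max;  // bump soft limit to the current hard limit
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        std::perror("setrlimit");
        return false;
    }
    return true;
}
```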
I have read many solutions posted around the web and am getting rather confused: I have done everything people suggest, yet none of it has worked for my application.
I call it out as my 'custom application' because almost every process under my account on the system already has a 40k file descriptor limit; the sole exception is my program, which is still stuck at the default 1024/4096 limit even though everything says it should be at 40000/40000. Specifically:
/etc/security/limits.conf contains the line * - nofile 40000.
ulimit -n prints 40000.
cat /proc/sys/fs/file-max prints 100000.
cat /proc/[application pid]/limits says the file descriptor limit is soft: 1024 / hard: 4096.
cat /proc/[application's parent pid]/limits says the file descriptor limit is soft: 40000 / hard: 40000.
This is the case regardless of how I start the program (via xterm, tty1, bash, sh, cinnamon, etc.).
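(The same numbers can also be read from inside the process with getrlimit, in case that is a more direct cross-check than /proc; a minimal sketch of such a check:)

```
// Minimal sketch: print the RLIMIT_NOFILE values this process actually sees,
// for comparison with /proc/<pid>/limits.
#include <sys/resource.h>
#include <cstdio>

int main() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        std::perror("getrlimit");
        return 1;
    }
    std::printf("soft: %llu  hard: %llu\n",
                (unsigned long long)rl.rlim_cur,
                (unsigned long long)rl.rlim_max);
    return 0;
}
```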
I have even followed the instructions here and modified the various headers that define __FD_SETSIZE, changing them all to 40000.
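(As I understand it, FD_SETSIZE only controls the size of the fd_set used by select(), not the kernel's per-process descriptor limit. To check whether those header edits are actually picked up by my build, a tiny check like this sketch would show the value the compiler sees:)

```
// Minimal sketch: print the FD_SETSIZE the compiler sees and the resulting
// size of fd_set, to confirm whether the header edits took effect.
#include <sys/select.h>
#include <cstdio>

int main() {
    std::printf("FD_SETSIZE = %d, sizeof(fd_set) = %zu bytes\n",
                (int)FD_SETSIZE, sizeof(fd_set));
    return 0;
}
```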
I've been trying to solve this issue for some time and any help would be appreciated. The distribution I'm using is Linux Mint 17.2, kernel version 3.16.0-38-generic, and my g++ is version 4.8.4.