I'm writing a concurrent TCP server that has to handle multiple connections with the 'thread per connection' approach (using a thread pool). My question is about the best way for each thread to get a different file descriptor.
I found that the following two methods are the most commonly recommended:
- A main thread that `accept()`s all the incoming connections and stores their descriptors in a data structure (e.g. a queue). Every worker thread then takes an fd from the queue (a rough sketch is below).
- `accept()` is called directly from every thread (recommended in Unix Network Programming V1).
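To make approach 1 concrete, this is roughly what I have in mind. It's only a sketch with names I made up (`queue_push`, `queue_pop`, a fixed-size ring buffer, a trivial echo handler) and most error handling left out:

```c
/* Rough sketch of approach 1 (invented names, fixed-size ring buffer,
 * error handling mostly omitted): the main thread accept()s and pushes
 * fds into a mutex/condvar protected queue, workers pop and serve them. */
#include <pthread.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define NTHREADS 4
#define QCAP     128

static int             fd_queue[QCAP];
static int             qhead, qtail, qlen;
static pthread_mutex_t qlock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

static void queue_push(int fd)
{
    pthread_mutex_lock(&qlock);
    while (qlen == QCAP)                 /* queue full: wait until a worker pops */
        pthread_cond_wait(&not_full, &qlock);
    fd_queue[qtail] = fd;
    qtail = (qtail + 1) % QCAP;
    qlen++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&qlock);
}

static int queue_pop(void)
{
    pthread_mutex_lock(&qlock);
    while (qlen == 0)                    /* nothing to serve yet */
        pthread_cond_wait(&not_empty, &qlock);
    int fd = fd_queue[qhead];
    qhead = (qhead + 1) % QCAP;
    qlen--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&qlock);
    return fd;
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        int fd = queue_pop();            /* blocks until the main thread pushes an fd */
        char buf[512];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            if (write(fd, buf, (size_t)n) < 0)   /* trivial echo, just as a placeholder */
                break;
        close(fd);
    }
    return NULL;
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(9000);
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, SOMAXCONN);

    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);

    for (;;) {                           /* only this thread ever calls accept() */
        int cfd = accept(lfd, NULL, NULL);
        if (cfd >= 0)
            queue_push(cfd);
    }
}
```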
Problems I see with each of them:
- The shared data structure that stores all the fds must be locked (e.g. with `pthread_mutex_lock`) before a thread can read from it, so if a considerable number of threads want to read at exactly the same moment, I don't know how long each of them would end up waiting for the lock.
- I've been reading that the thundering herd problem with simultaneous `accept()` calls has not been completely solved on Linux yet, so I might need an artificial workaround for it (see the sketch after this list), which could end up making the application at least as slow as approach 1.
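By "artificial workaround" for approach 2 I mean something like the sketch below: serializing the `accept()` calls with a mutex, similar in spirit to the locking trick UNP shows for pre-created workers. Same disclaimers as before, it's just an illustration with made-up names and no real error handling:

```c
/* Rough sketch of approach 2: every worker calls accept() itself on the
 * shared listening socket. The accept_lock mutex is the "artificial" part:
 * it serializes the accept() calls so at most one thread is blocked in
 * accept() at a time; dropping the lock gives the plain simultaneous-accept()
 * version. */
#include <pthread.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define NTHREADS 4

static int             listen_fd;
static pthread_mutex_t accept_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&accept_lock);    /* only one thread sleeps in accept() */
        int cfd = accept(listen_fd, NULL, NULL);
        pthread_mutex_unlock(&accept_lock);
        if (cfd < 0)
            continue;

        char buf[512];
        ssize_t n;
        while ((n = read(cfd, buf, sizeof buf)) > 0)
            if (write(cfd, buf, (size_t)n) < 0)  /* trivial echo again */
                break;
        close(cfd);
    }
    return NULL;
}

int main(void)
{
    listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(9000);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
    listen(listen_fd, SOMAXCONN);

    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    pthread_join(tid[0], NULL);              /* workers loop forever; just park main here */
    return 0;
}
```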
Sources:
- Links about approach 2: does-the-thundering-herd-problem-exist-on-linux-anymore, and an (outdated) article I found about it: linux-scalability/reports/accept.html
- An SO answer that recommends approach 1: can-i-call-accept-for-one-socket-from-several-threads-simultaneously
I'm really interested in this topic, so I'd appreciate any opinions on it :)