
I understand the thread-driven model that Apache uses: every connection spawns a thread, and when the response has been sent the thread is closed, releasing its resources for other connections.

But I don't get the event-driven design that Nginx uses. I've read some basics about event-driven design, but I don't understand how Nginx uses it to handle web requests.

Where can I read about and understand how Nginx handles connections in an event-driven way, so that I see why it's better, rather than just accepting that event-driven design beats thread-driven design?

bakkal
never_had_a_name

1 Answer


Nginx uses the Reactor pattern. Basically, it's single-threaded (but can fork several processes to utilize multiple cores). The main event loop waits for the OS to signal a readiness event - e.g. that data is available to read from a socket, at which point it is read into a buffer and processed. The single thread can very efficiently serve tens of thousands of simultaneous connections (the thread-per-connection model would fail at this because of the huge context-switching overhead, as well as the large memory consumption, as each thread needs its own stack).
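The readiness-driven loop described above can be sketched with Python's stdlib `selectors` module. This is only an illustration of the pattern (one thread, many sockets, dispatch on readiness events); nginx itself is written in C against epoll/kqueue directly, and names like `run_once` are made up for the sketch, not nginx internals:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on BSD/macOS

def accept(server):
    # The listening socket is readable: a new connection is waiting.
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    # The OS told us this socket is readable, so recv() won't block.
    data = conn.recv(4096)
    if data:
        conn.sendall(b"echo: " + data)
    else:  # peer closed the connection
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

def run_once(timeout=1.0):
    # One iteration of the event loop: block until some registered
    # socket is ready, then dispatch to the callback stored for it.
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)
```

The point is that the single thread never waits on any one connection; it waits on *all* of them at once and only does work when the kernel reports a socket as ready.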

Onestone
  • But if one thread can serve tens of thousands of users, why don't we use multiple threads to serve more? Or am I getting it wrong? – never_had_a_name Aug 09 '10 at 04:06
  • Because the reactor has to perform non-threadsafe operations like reading from a socket. Multithreading (a fixed pool of worker threads, e.g. one per CPU) is possible with the Proactor pattern, which works slightly differently - for example the OS places the read data into a buffer for you (you specify the buffer at the start of the asynchronous operation). But the proactor has its own disadvantages - it has to reserve more memory for buffers; it is also slower on Linux when only using a single CPU. – Onestone Aug 09 '10 at 10:42
  • "if one thread can serve tens of thousands of users, why don't we use multiple threads" --- threading is a kludge, once invented to cut down on costly processes at the price of increased complexity. The whole point of doing asynchronous I/O is that you can handle many clients within a single process and throw threading out the window. I'm pretty sure you won't see any performance gains worth the price of threading in the field of asynchronous I/O. – flow Jul 19 '11 at 16:03
  • What if nginx sits in front of a Python server, and the Python script takes a lot of time to process, e.g. http://site.com/long-duration-request? Because there is one thread, all other requests have to wait, causing a huge traffic jam, right? So can you really benefit from nginx if your app itself is written with threads in mind? Should you use something like nodejs then? – TjerkW Dec 19 '11 at 13:50
  • @TjerkW: Nodejs would suffer from a similar "problem" because it uses a small number of threads. But none of this is a problem, because it uses an asynchronous model: it doesn't sit and wait for the response, but instead goes off and serves other requests, then comes back and finishes the response as soon as the slow operation is ready. – OCDev Nov 08 '12 at 13:19
  • @Onestone, good explanation, thank you. I have one question. Let's suppose we have only one thread (the main thread) and 10 socket connections. How is each connection handled in the background (asynchronously) with only one thread? Is it using separate threads in the background (not visible to the programmer) for each socket connection? Thanks! – user345602 Jan 22 '14 at 22:17
  • @user345602, modern kernels allow one thread to wait for events on multiple sockets at once. See for example https://en.wikipedia.org/wiki/Epoll. – Jack O'Connor May 05 '15 at 20:27
  • @OCDev Nginx uses a single thread to read requests without blocking (epoll/kqueue), that's right. But the single-threaded looper blocks and waits for the response from the backend Python or PHP server; that's a known major drawback of Nginx. E.g. if you don't have enough page cache, nginx is very slow for static files because responses from disk are slow. Nginx+ can utilize thread pools for reading static files from your slow disks. – Daniel Jun 23 '15 at 16:04
  • @Onestone Awesome explanation. – Sarath Sadasivan Pillai Jun 24 '15 at 04:37
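
The point made in the epoll comment above - a single thread waiting on many sockets at once - can be shown with the portable `select()` call from Python's stdlib (epoll is the scalable, Linux-specific equivalent of the same idea; the socket-pair setup here is purely for illustration):

```python
import select
import socket

# Ten independent connections, each a local socket pair
# (one end we watch, one end a pretend "client" writes to).
pairs = [socket.socketpair() for _ in range(10)]
watched = [a for a, _ in pairs]

pairs[3][1].send(b"ping")  # make exactly one socket readable

# One blocking call, one thread: the kernel reports which of the
# ten sockets has data, without a dedicated thread per connection.
readable, _, _ = select.select(watched, [], [], 1.0)
```

After this call, `readable` contains only the one socket that actually has data waiting; the other nine cost the thread nothing while idle.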