
From my limited understanding of nginx, I know that nginx separates itself from Apache by using a single thread to handle all requests, whereas Apache throws threads at the problem. In theory that's faster for a bunch of small requests. But what about long-running requests?

Let's say a user is downloading a large file, or there's some long-running PHP script that's slow because something it depends on (disk IO, the database) is slow. With Apache, everything has its own thread, so while PHP is waiting for a response from the database another request can come in and be processed simultaneously. With nginx, however, wouldn't something like that lock the thread and therefore the whole server? I know you can have multiple nginx processes, but creating more processes just for file downloads seems like trying to recreate Apache.

I know I'm missing something here, since nginx clearly handles situations like this, but what? How does nginx do this with its threading model?

And before you say it, this isn't a duplicate of this question, as that one only talks about incoming connections.

TheLQ
  • possible duplicate of [How does Nginx handle HTTP requests?](http://stackoverflow.com/questions/3436808/how-does-nginx-handle-http-requests) - HTTPD serves only incoming connections. – hakre Jul 25 '11 at 18:35
  • @hakre Please read the last sentence. That question only talks about incoming data (incoming buffer is full, read and process). I'm explicitly talking about output when something nginx is relying on is slow. – TheLQ Jul 25 '11 at 18:38
  • It's called `IO` for a reason. The underlying operating system is actually sending the data to the client, not nginx, especially in the case of large files. And do actually read the other answer; it explains quite well why a single thread in NGINX is not blocked by a single IO operation. Incoming or outgoing doesn't make a difference here. – hakre Jul 25 '11 at 18:41
  • @hakre Do worker processes bypass nginx as well and send data directly to the client? And how does nginx tell the OS to send a file to a client? That doesn't seem like an operation the OS would have baked in. – TheLQ Jul 25 '11 at 18:44
  • They don't bypass NGINX; it's just that NGINX works on top of the OS. If you send data to a network interface, you don't need to wait until it has gone through. Read the other answer and understand it first, please. It will help you clarify your question as well. – hakre Jul 25 '11 at 18:51
  • I just read that wait times can occur if multiple workers need to access the same disk simultaneously. For more insight, I found this posting: http://www.remsys.com/nginx-on-1gbps – hakre Jul 25 '11 at 19:32

1 Answer


Worker processes in nginx can handle multiple incoming and outgoing requests simultaneously. The answer to the question you linked (3436808) is also applicable to this question.

  • What about large file downloads though? Is that delegated to a worker process? And see the comment above that I made to @hakre about the question: that question only talks about incoming data (buffer is full, read and process). I'm wondering about outgoing data when something nginx is relying on is slow. – TheLQ Jul 25 '11 at 18:39
  • Yes, large downloads are handled asynchronously as well. –  Jul 25 '11 at 18:52