
I have an application that is internal to my company and needs to be very fast per client, because the clients are few and the whole thing is internal.

So, a client is expected to send many concurrent requests, and what I want nginx to do is put as much parallelism into handling them as possible. Yes, that's exactly the opposite of what people normally do: wherever I searched, people were asking about limiting connections per IP, mitigating attacks, etc. I want the exact opposite.

I can't really understand what's happening in nginx in that regard. What I tried was testing its behavior with a simple server that just sleeps for 10 seconds and then returns a string, and I curled it from 5 different terminals. The times were 10 seconds, 18 seconds, 21 seconds, 25 seconds and 30 seconds. So there is some sort of queuing happening, but it isn't strictly sequential (otherwise the times would have been 10, 20, 30, 40 and 50 seconds). But it's not fully parallel either!
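
For reference, the test was essentially the shell equivalent of those 5 terminals, something like this (the host and port are placeholders for my setup):

for i in 1 2 3 4 5; do time curl -s http://localhost:8080/ & done
wait   # let all five background curls finish so each timing is printed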

I hope this clarifies what I want... please advise.

OmarOthman
  • How did you test that sleep? Nginx happily accepts any reasonable number of connections. – Alexey Ten Apr 16 '14 at 17:26
  • Well, it did accept all of them, but the responses were slow, as you see from the numbers. Basically, what I want is all of them returning after 10 seconds. – OmarOthman Apr 16 '14 at 17:37
  • I've just checked with [echo_sleep 3;](http://wiki.nginx.org/HttpEchoModule#echo_sleep) and 100 requests. All 100 returned in 3 seconds. I guess that nginx is almost never the bottleneck; it's your PHP/Java/whatever. – Alexey Ten Apr 16 '14 at 17:39
  • Could you post what you did exactly as an answer? – OmarOthman Apr 16 '14 at 17:45

1 Answer


Nginx allows any reasonable number of requests from the same IP. I just checked with this config:

server {
    listen 3333;

    default_type text/plain;

    location / {
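        # echo_sleep and echo come from the third-party echo module
        # (ngx_http_echo_module); a stock build without it rejects this config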
        echo_sleep 3;
        echo 'Hi';
    }
}

and this script:

for ((i = 0; i < 100; i++)); do time curl -s localhost:3333 & done
wait   # let all background curls finish so every timing is printed

I guess that you tested with PHP or some other server behind nginx, and it's that server that can't handle so many concurrent requests.
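
For completeness, nginx's own concurrency ceiling is controlled by just a couple of directives, and even the defaults allow far more than a hundred simultaneous connections. A sketch (the values are illustrative, not tuning advice):

# nginx.conf, top level -- illustrative values only
worker_processes  4;             # often set to the number of CPU cores

events {
    worker_connections  1024;    # max concurrent connections per worker
}

With anything like these settings, the serialization seen in the question almost certainly comes from the backend, not from nginx.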

Alexey Ten
  • Do you remember what you did in order to get that echo_sleep to work? I've downloaded OpenResty and installed it, but things are still not recognized in nginx. – OmarOthman Apr 16 '14 at 18:15
  • I use Ubuntu 12.04 and the nginx-full package. It has the echo module compiled in. – Alexey Ten Apr 16 '14 at 18:24
  • Updated with the full config. OpenResty should have this module too. – Alexey Ten Apr 16 '14 at 18:35
  • Well, I've just realized that I have to be more specific. I didn't mean only the same IP, but also the same client (basically imitating a browser, in the sense of sending an authentication cookie or the like). So your approach might end up testing how many connections `nginx` can handle simultaneously, which is not what I want. I want *from the same client*, not only *from the same IP*. – OmarOthman Apr 17 '14 at 17:28
  • But I upvoted your answer anyway, since I learned something from it. The nice thing is that `nginx` also worked fine when I made a test page that sent 100 AJAX requests to a server that sleeps for 3 seconds, just like yours (all of them returned after 3 seconds too). I should have done that from the beginning, but since I'm still a beginner at web development I didn't think of it. – OmarOthman Apr 17 '14 at 17:30
  • I'm still investigating why the `curl` test was not fully parallel, though. – OmarOthman Apr 17 '14 at 17:31
  • Browsers have built-in limits on how many requests they send simultaneously to one server ([fewer than 10](http://stackoverflow.com/a/985704/1016033)), so you should consider adding aliases for your server in order to raise that limit; a sketch of what that could look like follows below. – Alexey Ten Apr 17 '14 at 17:36
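
A sketch of such aliases on the nginx side (the hostnames are made up, and the test page would also have to spread its requests across them):

# Hypothetical aliases: browsers apply their connection limit per
# hostname, so spreading requests across aliases raises the ceiling.
server {
    listen 80;
    server_name app.internal a1.app.internal a2.app.internal;

    location / {
        proxy_pass http://127.0.0.1:3000;   # same backend for every alias
    }
}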