
Let's say I have a simple nginx config that talks to a uwsgi backend:

server {
    listen 9900 default_server;
    listen [::]:9900 default_server;
    location / {
            include         uwsgi_params;
            uwsgi_pass      unix:/tmp/service-foo.sock;
    }
}

This service has a certain subset of URLs (`/renderer/...`) that are always enormously slow, and under heavy load the entire site goes down.

What I want is to replace this with two copies of the backend, like this:

server {
    listen 9900 default_server;
    listen [::]:9900 default_server;

    location / {
            include         uwsgi_params;
            uwsgi_pass      unix:/tmp/service-foo.sock;
    }
    location ~ ^/renderer/[0-9]+/ {
            include         uwsgi_params;
            uwsgi_pass      unix:/tmp/service-renderer.sock;
            uwsgi_read_timeout      30s;
            uwsgi_send_timeout      30s;
            uwsgi_request_buffering on;
    }
}

...my naive expectation was that this would fix the problem by allowing the `/renderer/...` requests to be served slowly, one at a time, while the rest of the site remained responsive.
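To be clear about the backend side: what I have in mind is two completely independent uwsgi instances serving the same app on separate sockets, each with its own worker pool. Roughly like this (the module name and worker counts here are illustrative; my actual uwsgi configs aren't shown above):

```ini
; service-foo.ini -- main site workers
[uwsgi]
socket    = /tmp/service-foo.sock
module    = myapp.wsgi        ; hypothetical module name
processes = 4

; service-renderer.ini -- same app, but a separate worker pool
; dedicated to the slow /renderer/... requests
[uwsgi]
socket    = /tmp/service-renderer.sock
module    = myapp.wsgi
processes = 2
```

The idea being that even if every worker in the renderer pool is busy, the main pool's workers should still be free to serve everything else.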

However, it didn't work.

It seems like nginx is serving requests into both locations from the same process, and eventually the server just sits there blocking, with all the requests queued in the second uwsgi instance and the first uwsgi instance doing nothing at all.

I read up on the nginx `thread_pool` directive, which looks almost exactly like what I want (i.e. a specific reserved thread pool for the renderer location), but this doesn't seem to be supported for uwsgi; it's just for file I/O.
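For reference, as far as I can tell `thread_pool` can only be attached to asynchronous file operations via `aio threads`, not to upstream passes like `uwsgi_pass` (this snippet is just to illustrate what the directive actually does):

```nginx
# main context: define a named pool of worker threads
thread_pool renderer_pool threads=16;

server {
    location /static/ {
        # offload blocking file reads to the named pool --
        # this is the only thing thread_pool applies to
        aio         threads=renderer_pool;
        sendfile    on;
    }
}
```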

Is there any way of doing what I want at the nginx level?

Doug
  • did you restart nginx and check where requests landed? – Alexey Ten Jun 28 '17 at 08:48
  • Definitely restarted nginx; whether there's something else wrong is another question (clearly yes...) but I'm pretty sure it's dispatching correctly? If you `tail -f` the two logs you see them both firing away happily until you start hitting some slow URLs on `/renderer/...`, and then the requests on the other log just slowly dry up until nothing is happening at all until the long-running requests end (I've artificially triggered this by just putting `sleep(X)` in the renderer requests). – Doug Jun 28 '17 at 09:01

0 Answers