Let's say I have a simple nginx config that talks to a uwsgi backend:
server {
    listen 9900 default_server;
    listen [::]:9900 default_server;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/service-foo.sock;
    }
}
This service has a certain subset of URLs (/renderer/...) that are always enormously slow, and under heavy load the entire site goes down.
What I want is to replace this with two copies of the backend, like this:
server {
    listen 9900 default_server;
    listen [::]:9900 default_server;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/service-foo.sock;
    }

    location ~ ^/renderer/[0-9]+/ {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/service-renderer.sock;
        uwsgi_read_timeout 30s;
        uwsgi_send_timeout 30s;
        uwsgi_request_buffering on;
    }
}
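For context, the idea on the uwsgi side is two completely separate instances of the same app, each bound to its own socket with its own worker pool. A rough sketch of the two ini files (module name, paths, and worker counts are illustrative, not my real config):

```ini
; foo.ini -- workers for the main site
[uwsgi]
module = myapp.wsgi:application   ; illustrative module name
socket = /tmp/service-foo.sock
processes = 8
```

```ini
; renderer.ini -- a small, dedicated pool for the slow endpoints
[uwsgi]
module = myapp.wsgi:application
socket = /tmp/service-renderer.sock
processes = 2
```

The intent is that the renderer pool can saturate without starving the main pool.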
...my naive expectation was that this would fix the problem by allowing the /renderer/... requests to be served slowly, one at a time, while the rest of the site remained responsive.
However, it didn't work.
It seems like nginx is serving requests to both locations from the same process, and eventually the server just sits there blocked, with all the requests piled up in the second uwsgi instance and the first uwsgi instance doing nothing at all.
I read up on the nginx thread_pool directive, which looks almost exactly like what I want (i.e. a specific reserved thread pool for the renderer location), but it doesn't seem to be supported for uwsgi; it's just for file I/O.
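For reference, as far as I can tell thread_pool can only be attached to asynchronous file operations via aio, roughly like this (pool name and location are illustrative):

```nginx
# main context (outside http {})
thread_pool renderpool threads=16;

server {
    location /downloads/ {
        # offloads blocking disk reads to the pool -- nothing to do with upstreams
        aio threads=renderpool;
        sendfile on;
    }
}
```

So it offloads blocking disk reads from the event loop, but there's no equivalent hook for uwsgi_pass.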
Is there any way of doing what I want at the nginx level?