
I have a simple uWSGI app behind a load balancer, with the following .ini config:

[uwsgi]
socket = 0.0.0.0:5071
chdir = src/
wsgi-file = uwsgi.py
processes = 2
threads = 1
protocol = http
plugins = python
exit-on-reload = false
master = true
# Cleanup of temp files
vacuum = true

When all 2x1 threads are busy, the application keeps accepting incoming connections, queueing them until a thread frees up.

This is unwanted behavior in my case: I would like uWSGI to return a 5xx status code instead, so that I do not oversaturate the resources of a single instance.

Client testing code

Here is the test client code for the uWSGI application:

import threading

import requests

proxy = {
    'http': 'http://localhost:5071'
}

def threaded(fn):
    # Assumed implementation: run the decorated function in a daemon thread
    def wrapper(*args, **kwargs):
        threading.Thread(target=fn, args=args, kwargs=kwargs, daemon=True).start()
    return wrapper

@threaded
def f():
    print('Sending request')
    response = requests.get('http://dummy.site', proxies=proxy)
    print(str(response.status_code) + response.text)

for i in range(5):
    f()

Test (1)

Adding `listen = 2` to the .ini and firing 3 requests simultaneously just prints:

*** uWSGI listen queue of socket "0.0.0.0:5071" (fd: 3) full !!! (3/2) ***

while the third connection still seems to be accepted, queued and executed later, instead of a 5xx error being returned.
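A minimal sketch (not from the question; assumes a plain local TCP socket rather than uWSGI) of why the client sees no error: the kernel, not the application, completes the TCP handshake for connections that fit into the listen backlog, so from the client's side the connection succeeds even though no worker has accepted it yet.

```python
import socket

# Listener with a small backlog; nothing ever calls accept(),
# standing in for a uWSGI instance whose workers are all busy.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))
srv.listen(2)
port = srv.getsockname()[1]

# Both connects return without error: the kernel queues the
# completed handshakes in the accept backlog.
clients = [socket.create_connection(('127.0.0.1', port), timeout=2)
           for _ in range(2)]
print('%d connections established, none accepted by the app' % len(clients))

for c in clients:
    c.close()
srv.close()
```

This is consistent with what the tests show: shrinking `listen` only limits how many connections the kernel will queue, and an overflowing SYN is typically dropped silently (the client retries) rather than rejected with an application-level 5xx.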

Test (2)

Adding `listen = 0` to the .ini and firing 5 requests simultaneously just executes two requests at a time. The full-queue message no longer appears, yet the requests are evidently still queued somewhere and executed once threads are freed.

How can I reject incoming connections to the uWSGI application when all threads are busy?

Constantin
  • Your configuration has a different port and listen queue than the logged message. Are you running two instances and checking a different one than you meant to? Also, *almost* every use case works better with (at least some small) listen backlog - when done, do check whether your performance metrics really match your expectations. – anx Jun 14 '21 at 19:56
  • @anx just a mistake as I switched ports while writing the question. Regarding the backlogging, which option(s) do you particularly refer to? – Constantin Jun 14 '21 at 20:02
  • 1
    Is the client you are using to test this maybe **retrying** *after* uwsgi turns down the connection attempt once? Maybe your configuration worked, but your test method did not? – anx Jun 15 '21 at 03:14
  • @anx It is not retrying at all, using simple `requests.get(url,proxies)` in python – Constantin Jun 15 '21 at 11:30

1 Answer


This is a truly bizarre request, but if you really want to do this, you can try reducing the listen queue to zero, i.e. `--listen 0`. I haven't tested this and don't know whether zero is even considered a valid value. This setting is normally increased as a site gains traffic, not decreased.
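For reference, the equivalent of the `--listen 0` flag in the asker's .ini would be the following (untested, as stated above; the OS may also silently enforce a minimum backlog):

```ini
[uwsgi]
# equivalent of --listen 0 on the command line
listen = 0
```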

Michael Hampton
  • With `listen = 0` all that happens is that the output of the queue being full does not show anymore. It still seems that "somewhere" it is hanging until the threads are freed. I have attached my testing client in the question. Thank you! – Constantin Jun 15 '21 at 11:42
  • 1
    @Constantin It might be that this is just not possible. I could not find anybody else who even tried to do this. – Michael Hampton Jun 15 '21 at 11:53
  • Thanks for the feedback. This is actually weird - I am wondering why shouldn't a web server have the ability to be limited on connections amount. Especially when these applications are put behind a whole infrastructure meant to work with load balancers, auto scaling groups and calculated pre-allocated resources – Constantin Jun 15 '21 at 11:56