I've been using wsgiref to create an HTTP server, but I notice it has no timeout or multithreading support, so one client can hold up the server indefinitely. Here is an example application for demonstration:
    from wsgiref.simple_server import make_server

    def server(environ, start_response):
        start_response('200 OK', [("Content-type", "text/plain")])
        return [("Wikipedia is an online wiki").encode("utf-8")]

    with make_server('127.0.0.1', 8080, server) as httpd:
        print("Website serving on port 8080")
        httpd.serve_forever()
Now, if you go to http://localhost:8080, you will see the text "Wikipedia is an online wiki".
However, if you initiate a socket connection to localhost:8080 and never send any data over it, the server hangs indefinitely: any attempt to retrieve the page blocks until the idle client terminates its socket. I tried this both with raw Unix sockets in C and with the Linux socket command:

    socket -- 127.0.0.1 8080

Both held up the server; when I terminated the connection, everything resumed as normal.
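The hang can also be reproduced end-to-end in pure Python. Here is a self-contained demonstration (the port 8081 and the 2-second client timeout are arbitrary choices of mine):

```python
import socket
import threading
import time
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    return [b'hello']

# run the wsgiref server in the background (port 8081 is arbitrary)
httpd = make_server('127.0.0.1', 8081, app)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# an idle client: open a connection and never send anything
idle = socket.create_connection(('127.0.0.1', 8081))
time.sleep(1)  # give the server time to accept and block on it

# a well-behaved client now hangs, because the single-threaded server
# is stuck in a blocking read on the idle connection; with a 2 s
# client-side timeout the request fails instead of waiting forever
try:
    urllib.request.urlopen('http://127.0.0.1:8081', timeout=2)
    blocked = False
except OSError:  # socket.timeout and URLError both derive from OSError
    blocked = True

idle.close()  # once the idle socket is closed, the server resumes
print(blocked)
```

The second request times out even though the server is up, which matches what I see with the C client and the socket command.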
How would I make a wsgiref server which
- (optional) forks a new process for each connection, and
- implements a timeout, say 5 seconds, after which it shuts down the socket?
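For what it's worth, here is a sketch of the behavior I'm after, using socketserver's mix-ins and the request handler's timeout attribute; I'm not sure this is the intended or robust way to do it (the class names, the port 8082, and the shortened 2-second timeout are mine):

```python
import socket
import socketserver
import threading
import urllib.request
from wsgiref.simple_server import make_server, WSGIServer, WSGIRequestHandler

class ThreadingWSGIServer(socketserver.ThreadingMixIn, WSGIServer):
    # one thread per connection; socketserver.ForkingMixIn would give
    # one forked process per connection instead (Unix only)
    daemon_threads = True

class TimeoutHandler(WSGIRequestHandler):
    # socketserver.StreamRequestHandler.setup() calls
    # connection.settimeout(self.timeout), so an idle client's recv
    # eventually raises socket.timeout and the connection is dropped
    timeout = 2  # seconds; 5 in the question, shortened for the demo

def app(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/plain')])
    return [b'Wikipedia is an online wiki']

httpd = make_server('127.0.0.1', 8082, app,
                    server_class=ThreadingWSGIServer,
                    handler_class=TimeoutHandler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# an idle client no longer wedges the server...
idle = socket.create_connection(('127.0.0.1', 8082))

# ...other requests are served concurrently...
body = urllib.request.urlopen('http://127.0.0.1:8082', timeout=5).read()

# ...and after `timeout` seconds the server closes the idle socket
# (recv() returning b'' means the peer shut the connection down)
idle.settimeout(10)
closed = idle.recv(1024) == b''
print(body, closed)
```

With this, the idle socket is dropped after the timeout and concurrent requests keep working, though the timeout seems to surface as a logged traceback in the handler rather than a clean close, which is part of why I'm unsure this is the right mechanism.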
It does not seem like I can do this anywhere in the server() function, so it would likely have to happen when I create the server. One solution might be to replace the blocking recvfrom() call (what the server currently uses once a connection is initiated) with a poll() call with a timeout attached, followed by the recvfrom(), although I do not know how to do this from within Python.
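To illustrate the pattern I mean, outside of wsgiref: select.select() (a wrapper over the same readiness mechanism as poll()) accepts a timeout before any recv() happens. A self-contained sketch, using a socketpair as a stand-in for an accepted connection:

```python
import select
import socket

# a socketpair stands in for an accepted server connection plus a
# client that stays silent
server_end, client_end = socket.socketpair()

# poll with a timeout before receiving: an empty ready list means the
# peer sent nothing within the window, so the connection can be dropped
ready, _, _ = select.select([server_end], [], [], 1.0)  # 1 s window
if ready:
    data = server_end.recv(4096)
else:
    server_end.close()  # idle client: shut the socket down

print(bool(ready))
```

As far as I can tell, sock.settimeout() does the same thing internally, so the question is really how to get wsgiref to apply this to each accepted connection.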