I cannot understand why the aiohttp (and asyncio in general) server implementation does not provide a way to limit the maximum number of concurrent connections (number of accepted sockets, or number of running request handlers) (https://github.com/aio-libs/aiohttp/issues/675). Without such a limit, it is easy to run out of memory and/or file descriptors.
At the same time, the aiohttp client limits the number of concurrent connections to 100 by default (https://docs.aiohttp.org/en/stable/client_advanced.html#limiting-connection-pool-size), aiojobs limits the number of running tasks and the size of the pending-task list, nginx has a worker_connections limit, and any sync framework is limited by its number of worker threads by design.
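For comparison, this is what the client-side limit looks like; the `limit=100` shown here just makes the documented default explicit:

```python
import asyncio
import aiohttp

async def main() -> None:
    # At most 100 simultaneous connections in the pool (the default).
    connector = aiohttp.TCPConnector(limit=100)
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get("https://example.com") as resp:
            print(resp.status)

asyncio.run(main())
```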
While aiohttp can handle a lot of concurrent requests, that number is still finite. The aiojobs docs say: "The Scheduler has implied limit for amount of concurrent jobs (100 by default). ... It prevents a program over-flooding by running a billion of jobs at the same time." And yet we can happily spawn a "billion" aiohttp handlers (well, until we run out of resources).
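For contrast, a rough sketch of how aiojobs bounds concurrency (the `limit` and `pending_limit` values restate the documented defaults; construction details may differ between aiojobs versions):

```python
import asyncio
import aiojobs

async def job(i: int) -> None:
    await asyncio.sleep(1)  # stand-in for real work

async def main() -> None:
    # At most 100 jobs run concurrently; further spawns wait in a bounded queue
    # instead of flooding the loop with a billion tasks.
    scheduler = aiojobs.Scheduler(limit=100, pending_limit=10000)
    for i in range(1_000):
        await scheduler.spawn(job(i))
    await scheduler.close()

asyncio.run(main())
```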
So the question is: why is it implemented this way? Am I missing some important detail? I think we can pause request handlers with an asyncio.Semaphore (see the sketch below), but the socket is still accepted by aiohttp and a handler coroutine is still spawned, in contrast with nginx. Also, when deploying behind nginx, worker_connections and the desired aiohttp limit will certainly differ, because nginx may be serving static files as well.
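A minimal sketch of the Semaphore workaround I have in mind, written as aiohttp middleware (`MAX_CONCURRENT_HANDLERS` is a hypothetical cap, not anything aiohttp provides):

```python
import asyncio
from aiohttp import web

MAX_CONCURRENT_HANDLERS = 100  # hypothetical cap; tune to your workload
semaphore = asyncio.Semaphore(MAX_CONCURRENT_HANDLERS)

@web.middleware
async def limit_handlers(request: web.Request, handler) -> web.StreamResponse:
    # By the time we get here the socket is already accepted and a handler
    # task already spawned; we can only pause it, unlike nginx, which stops
    # accepting connections once worker_connections is reached.
    async with semaphore:
        return await handler(request)

async def hello(request: web.Request) -> web.Response:
    return web.Response(text="hello")

app = web.Application(middlewares=[limit_handlers])
app.router.add_get("/", hello)

if __name__ == "__main__":
    web.run_app(app)
```

This caps the number of handlers doing work, but the memory and file-descriptor cost of the accepted sockets and parked coroutines is still paid, which is exactly the part I don't see how to limit.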