I have a web-based resource that can handle concurrent requests. I would like to make requests to this resource asynchronously, store the returned results in a list, and then sum them. This is easy to express in pseudo-code, but difficult (for me) to implement in Python.
for request in requests:
    perform_async_request_of_resource(request, result_list, timeout)
wait_until_all_requests_return_or_timeout()
process_results()
I like this pattern because it makes the requests concurrent. The requests are I/O-bound, so I believe this pattern will let me use my CPU more efficiently instead of blocking on each request in turn.
I believe that I have a few problems I need to solve.
1) I need to figure out which library to use to make asynchronous, concurrent requests in a for-loop.
2) I need some form of synchronization to protect result_list on write.
3) All of this must be possible with timeouts.
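One approach I have been experimenting with, which seems to address all three points, is the standard-library concurrent.futures module. This is only a minimal sketch: perform_request here is a hypothetical stand-in for my real resource call, and the toy inputs and doubling are placeholders.

```python
import concurrent.futures

# Hypothetical stand-in for the real web request; replace with actual I/O.
def perform_request(request, timeout):
    return request * 2  # placeholder result

requests = [1, 2, 3, 4]
results = []

# The pool runs the I/O-bound calls concurrently; as_completed() yields
# futures as they finish, and its timeout bounds the total wait.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(perform_request, r, 5.0) for r in requests]
    try:
        for fut in concurrent.futures.as_completed(futures, timeout=10.0):
            # Appending happens only in the main thread here, so no lock
            # is needed around the list itself.
            results.append(fut.result())
    except concurrent.futures.TimeoutError:
        pass  # some requests did not finish before the overall timeout

print(sum(results))
```

If I understand correctly, this would solve problem 2 as a side effect, since only the main thread touches the list; a threading.Lock would only be needed if the worker functions wrote to shared state directly.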
A pattern I have seen used before is to spawn asynchronous threads and have each thread in turn create its own child thread to handle the request. On timeout, the parent thread aborts the child. However, I do not like this because I then have to hold 2x the number of thread execution contexts in memory.
I have considered standard-library modules such as subprocess and asyncio, but I cannot determine which is the best solution for this use case.
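For completeness, here is the rough shape I imagine an asyncio version would take, again with a hypothetical perform_request coroutine and toy inputs standing in for the real resource call:

```python
import asyncio

# Hypothetical coroutine standing in for the real web request.
async def perform_request(request):
    await asyncio.sleep(0.01)  # simulate network I/O
    return request * 2

async def main():
    tasks = [asyncio.create_task(perform_request(r)) for r in [1, 2, 3, 4]]
    # asyncio.wait() returns when all tasks finish or the timeout expires,
    # partitioning them into done and pending sets.
    done, pending = await asyncio.wait(tasks, timeout=5.0)
    for task in pending:
        task.cancel()  # abandon requests that timed out
    return [task.result() for task in done]

results = asyncio.run(main())
print(sum(results))  # 20
```

This avoids the 2x-thread-context problem entirely, since coroutines are much lighter than threads, but it would require the request itself to be awaitable.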