I'm writing an HTTP server using boost::asio. For large files, to avoid reading the whole file into memory before sending it, I read it chunk by chunk and send each chunk over the network with boost::asio::async_write.
The problem is that my producer (the function that reads the file) is much faster than the consumer (boost::asio::async_write), which leads to huge memory consumption for big files.
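A simplified sketch of the pattern (illustrative names, not my exact code; strand synchronization is omitted for brevity):

```cpp
#include <boost/asio.hpp>
#include <deque>
#include <fstream>
#include <memory>
#include <vector>

// The producer reads ahead and enqueues chunks; a single chained
// async_write drains the queue far more slowly than it fills.
class connection
{
public:
    connection(boost::asio::io_service& io, const std::string& path)
        : socket_(io), file_(path, std::ios::binary) {}

    void produce()
    {
        while (file_)   // finishes reading the whole file almost immediately
        {
            auto chunk = std::make_shared<std::vector<char>>(64 * 1024);
            file_.read(chunk->data(), static_cast<std::streamsize>(chunk->size()));
            chunk->resize(static_cast<std::size_t>(file_.gcount()));
            buffers_.push_back(chunk);   // unbounded: this is the memory problem
            if (buffers_.size() == 1)
                consume();               // start the write chain if it was idle
        }
    }

private:
    void consume()
    {
        auto chunk = buffers_.front();
        boost::asio::async_write(
            socket_, boost::asio::buffer(*chunk),
            [this, chunk](const boost::system::error_code& ec, std::size_t)
            {
                buffers_.pop_front();
                if (!ec && !buffers_.empty())
                    consume();           // one async_write in flight at a time
            });
    }

    boost::asio::ip::tcp::socket socket_;
    std::ifstream file_;
    std::deque<std::shared_ptr<std::vector<char>>> buffers_;
};
```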
I want to avoid this by limiting the size of the list of pending buffers. It looks like a simple producer/consumer problem; however, I don't want to block a thread to solve it.
I use a boost::asio::io_service with a configurable pool of n threads, and if we get many concurrent requests for large files, I don't want to end up with a server that can no longer serve any request at all.
So my questions are:

- How can I design this mechanism without blocking a thread?
- Should I check the list size and, if it is already too large, spawn a deadline timer whose handler does an io_service::post and continues reading the file (see the sketch below)?
- Is there a better way to handle this?
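Concretely, here is a rough sketch of that timer idea as a non-blocking replacement for the produce() loop above (max_pending_buffers_ and retry_timer_ are hypothetical members I would add: a configurable limit and a boost::asio::deadline_timer):

```cpp
// Rough sketch of the back-off idea (hypothetical names): when the buffer
// list is full, wait on a deadline timer instead of blocking the thread.
void connection::produce_one_chunk()
{
    if (buffers_.size() >= max_pending_buffers_)   // hypothetical configurable limit
    {
        retry_timer_.expires_from_now(boost::posix_time::milliseconds(10));
        retry_timer_.async_wait(
            [this](const boost::system::error_code& ec)
            {
                if (!ec)
                    produce_one_chunk();           // resume reading after the back-off
            });
        return;                                    // the thread stays free meanwhile
    }

    auto chunk = std::make_shared<std::vector<char>>(64 * 1024);
    file_.read(chunk->data(), static_cast<std::streamsize>(chunk->size()));
    chunk->resize(static_cast<std::size_t>(file_.gcount()));
    buffers_.push_back(chunk);
    if (buffers_.size() == 1)
        consume();                                 // restart the write chain if idle

    if (file_)                                     // not at EOF yet: keep producing
        socket_.get_io_service().post([this] { produce_one_chunk(); });
}
```

Even in this sketch the same synchronization caveats apply, which is part of why I'm asking whether there is a cleaner way.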