In actix-web, it is possible to serve a file by returning the following from a handler:

HttpResponse::Ok().streaming(file)

But here, file must implement the Stream<Item = Result<Bytes, E>> trait. The File type from the async_std crate does not implement it, so I created a wrapper that does:
use std::pin::Pin;
use std::task::{Context, Poll};

use async_std::fs::File;
use bytes::Bytes;
use futures::{AsyncReadExt, FutureExt, Stream};

struct FileStreamer {
    file: File,
}

impl Stream for FileStreamer {
    type Item = Result<Bytes, std::io::Error>;

    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        // A fixed-size stack buffer for each read.
        let mut buf = [0; 1024];
        // Poll a fresh `read` future; when it is ready, map the result:
        // 0 bytes means EOF (end the stream), otherwise copy what was read
        // into a newly allocated `Bytes` chunk.
        self.file.read(&mut buf).poll_unpin(cx).map(|r| {
            r.map(|n| {
                if n == 0 {
                    None
                } else {
                    Some(Bytes::copy_from_slice(&buf[0..n]))
                }
            })
            .transpose()
        })
    }
}
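For completeness, here is a minimal sketch of how the wrapper is meant to be used from a handler; the file path is a placeholder and the routing setup is omitted:

use actix_web::{Error, HttpResponse};
use async_std::fs::File;

// Illustrative handler only: "data.bin" is a placeholder path.
async fn download() -> Result<HttpResponse, Error> {
    let file = File::open("data.bin").await?;
    // actix-web drives the stream and writes each `Bytes` chunk
    // to the response body as it becomes ready.
    Ok(HttpResponse::Ok().streaming(FileStreamer { file }))
}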
It works, but there is a problem: for every call to read we create a new instance of Bytes, which is a dynamically allocated buffer (copy_from_slice allocates and then copies the chunk out of the stack buffer).
Is this the most efficient way to serve a file in actix-web?
It also seems to me that choosing the right buffer size becomes more critical here: a small buffer causes repeated syscalls, while an overly large buffer causes slow allocations for memory that will not even be fully used.
Am I right to consider recurring dynamic allocation as a performance issue?
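For illustration only (a sketch I have not benchmarked, with an arbitrary chunk_size parameter and a hypothetical read_chunk helper), one variant would read each chunk directly into a BytesMut and freeze it, which still allocates once per chunk but skips the extra copy out of the stack array:

use async_std::fs::File;
use bytes::{Bytes, BytesMut};
use futures::AsyncReadExt;

// Hypothetical helper, not part of my current code: reads one chunk straight
// into a heap buffer and hands it back as `Bytes` without an extra memcpy
// from a stack array. Returns Ok(None) at end of file.
async fn read_chunk(file: &mut File, chunk_size: usize) -> std::io::Result<Option<Bytes>> {
    let mut buf = BytesMut::with_capacity(chunk_size);
    buf.resize(chunk_size, 0); // zero-fill so there is an initialized slice to read into
    let n = file.read(&mut buf[..]).await?;
    if n == 0 {
        return Ok(None); // EOF
    }
    buf.truncate(n);       // keep only the bytes actually read
    Ok(Some(buf.freeze())) // convert to `Bytes` without copying
}

This still performs one allocation per chunk, so it does not by itself answer the question about recurring allocation.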
PS: The file in question is not static; it is subject to modification and deletion, which is why controlling the reading process is necessary.