TL;DR:
I am having trouble getting my head around Actix Multipart when iterating over "data chunks" and saving them to a single file, all while not messing up Rust's error handling, efficient memory management, and async processing.
Details and Background:
I know a bit of C++ and the basics of REST API theory, but have never implemented web services before. Furthermore, I am a complete newbie to Rust and want to create a simple file server using Actix as my first Rust project. This file server will run in a simple container in Kubernetes, where instances of this container can be added and removed at any time. Files are stored in a single directory which is shared between all container instances via a mounted volume. Each instance should use as little memory as possible. The goal is to provide...
- A simple HTTP GET API endpoint focused on maximum speed for single-file downloads.
- A simple HTTP PUT API endpoint focused on maximum robustness and safety for single-file uploads.
There are a few twists, like optional file compression using zstd, hashing using xxHash128, write-ahead logging (WAL, as in SQLite), and so on, which have been removed from the code snippets for simplicity.
I am also open to further suggestions for improvements that go beyond the Actix Multipart issue.
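For context, the Config type used by both handlers below is essentially just a wrapper around the storage directory. Here is a minimal sketch of it together with hypothetical app wiring (the "/data" mount path and the port are made up):

use actix_web::{web, App, HttpServer};

// Shared app state; data_path points at the directory on the mounted volume.
#[derive(Clone)]
pub struct Config {
    pub data_path: String,
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let config = Config { data_path: "/data".to_string() };
    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(config.clone()))
            .service(get_file)
            .service(put_file)
    })
    .bind(("0.0.0.0", 8080))?
    .run()
    .await
}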
HTTP GET: I am not happy with it, but it works. It reads the entire file into memory before responding, which clashes with my low-memory goal; a streaming sketch follows after the snippet.
#[get("/file/{file_id}")]
pub async fn get_file(file_id: web::Path<String>, data_path: web::Data<Config>) -> impl Responder {
let mut file_path = data_path.data_path.clone();
file_path.push('/');
file_path.push_str(&file_id);
if let Ok(mut file) = File::open(file_path) {
let mut contents = Vec::new();
if let Err(_) = file.read_to_end(&mut contents) {
return HttpResponse::InternalServerError().finish();
}
HttpResponse::Ok().body(contents)
} else {
HttpResponse::NotFound().finish()
}
}
}
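A lower-memory alternative I am considering is the actix-files crate: NamedFile streams the file in chunks instead of buffering it, handles Range requests, and open_async avoids blocking the executor. A minimal sketch, using the same hypothetical Config as above:

use std::path::PathBuf;

use actix_files::NamedFile;
use actix_web::{get, web, Error, HttpRequest, HttpResponse};

#[get("/file/{file_id}")]
pub async fn get_file(
    file_id: web::Path<String>,
    data_path: web::Data<Config>,
    request: HttpRequest,
) -> Result<HttpResponse, Error> {
    // Build the path with PathBuf instead of manual string concatenation.
    let mut path = PathBuf::from(&data_path.data_path);
    path.push(file_id.as_str());
    // open_async opens the file without blocking the executor, and the
    // resulting response streams it chunk by chunk.
    match NamedFile::open_async(path).await {
        Ok(file) => Ok(file.into_response(&request)),
        Err(_) => Ok(HttpResponse::NotFound().finish()),
    }
}

If the extra dependency is unwanted, HttpResponse::Ok().streaming(...) over a hand-rolled chunk stream would be the manual route, but NamedFile seems like less code for the same effect.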
HTTP PUT: Everything within the while loop is absolute trash. This is where I need your help.
#[put("/file/{file_id}")]
pub async fn put_file(
data_path: web::Data<Config>, mut payload: Multipart, request: HttpRequest) -> impl Responder {
// 10 MB
const MAX_FILE_SIZE: u64 = 1024 * 1024 * 10;
const MAX_FILE_COUNT: i32 = 1;
// detect malformed requests
let content_length: u64 = match request.headers().get("content-length") {
Some(header_value) => header_value.to_str().unwrap_or("0").parse().unwrap_or(0),
None => 0,
};
// reject malformed requests
match content_length {
0 => return HttpResponse::BadRequest().finish(),
length if length > MAX_FILE_SIZE => {
return HttpResponse::BadRequest()
.body(format!("The uploaded file is too large. Maximum size is {} bytes.", MAX_FILE_SIZE));
},
_ => {}
};
let file_path = data_path.data_path.clone();
let mut file_count = 0;
while let Some(mut field) = payload.try_next().await.unwrap_or(None) {
if let Some(filename) = field.content_disposition().get_filename() {
if file_count == MAX_FILE_COUNT {
return HttpResponse::BadRequest().body(format!(
"Too many files uploaded. Maximum count is {}.", MAX_FILE_COUNT
));
}
let file_path = format!("{}{}-{}", file_path, "1", sanitize_filename::sanitize(&filename));
let mut file: File = File::create(&file_path).unwrap();
while let Some(chunk) = field.try_next().await.unwrap_or(None) {
file.write_all(&chunk).map_err(|e| {
HttpResponse::InternalServerError().body(format!(
"Failed to write to file: {}", e
))
});
}
file.flush().map_err(|e| {
HttpResponse::InternalServerError().body(format!(
"Failed to flush file: {}", e
))
});
file_count += 1;
}
}
if file_count != 1 {
return HttpResponse::BadRequest().body("Exactly one file must be uploaded.");
}
HttpResponse::Ok().finish()
}
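Here is a minimal sketch of the direction I think the loop should take, assuming actix-web 4 and actix-multipart 0.6 (the Content-Length pre-check from above is elided, and I am assuming the file should be stored under the sanitized {file_id} from the URL rather than under the client-supplied filename): the handler returns Result<HttpResponse, Error> so that ? propagates errors, and every blocking file operation runs on the blocking thread pool via web::block.

use std::io::Write;

use actix_multipart::Multipart;
use actix_web::{error, put, web, Error, HttpResponse};
use futures_util::TryStreamExt;

#[put("/file/{file_id}")]
pub async fn put_file(
    file_id: web::Path<String>,
    data_path: web::Data<Config>,
    mut payload: Multipart,
) -> Result<HttpResponse, Error> {
    // Store under the sanitized {file_id} from the URL, not the multipart filename.
    let save_path = format!(
        "{}/{}",
        data_path.data_path,
        sanitize_filename::sanitize(file_id.as_str())
    );

    let mut file_count = 0;
    // `?` on try_next propagates multipart parse errors instead of hiding them.
    while let Some(mut field) = payload.try_next().await? {
        // Skip non-file form fields.
        if field.content_disposition().get_filename().is_none() {
            continue;
        }
        if file_count == 1 {
            return Ok(HttpResponse::BadRequest().body("Exactly one file must be uploaded."));
        }

        // Create the file on the blocking thread pool so the executor is never stalled.
        let path = save_path.clone();
        let mut file = web::block(move || std::fs::File::create(path))
            .await?
            .map_err(error::ErrorInternalServerError)?;

        // Stream the field chunk by chunk; each blocking write also goes through
        // web::block, with the File moved into the closure and handed back each time.
        while let Some(chunk) = field.try_next().await? {
            file = web::block(move || file.write_all(&chunk).map(|_| file))
                .await?
                .map_err(error::ErrorInternalServerError)?;
        }
        file_count += 1;
    }

    if file_count != 1 {
        return Ok(HttpResponse::BadRequest().body("Exactly one file must be uploaded."));
    }
    Ok(HttpResponse::Ok().finish())
}

To make this robust on the shared volume, I am also planning to write to a temporary name first and rename into place after the final flush, so other container instances never observe a half-written file; tokio::fs::File with AsyncWriteExt would be an alternative to the web::block pattern. Is this sketch the right direction, or am I still holding Actix Multipart wrong?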