From what I understand of chunked file uploads, chunks are stored in memory so that the upload can be resumed from the point of failure if something goes wrong. In a multi-node environment, I assume this makes "sticky sessions" necessary, so that the same client is always routed to the node that holds its chunks in memory. Apart from this, though, we have no need for sticky sessions anywhere else, so we'd prefer to avoid them.
Is there any way (using, e.g., Hazelcast or another in-memory data grid) to distribute the chunks across the nodes of a cluster so that the upload can be resumed even if the client later connects to a different node? In case it matters, we're using the latest Spring Boot.
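Conceptually, something like the following is what I have in mind; this is only a rough sketch, assuming chunks are kept in a distributed Hazelcast `IMap` keyed by upload ID and chunk index (the `ChunkStore` class, the `"upload-chunks"` map name, and the key format are all made up for illustration):

```java
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import org.springframework.stereotype.Component;

// Hypothetical sketch: keep upload chunks in a distributed IMap so that
// any node in the cluster can serve a resume request, regardless of
// which node originally received the chunks.
@Component
public class ChunkStore {

    private final IMap<String, byte[]> chunks;

    public ChunkStore(HazelcastInstance hazelcast) {
        // Spring Boot auto-configures a HazelcastInstance bean when
        // Hazelcast is on the classpath and a config file/bean is present.
        this.chunks = hazelcast.getMap("upload-chunks");
    }

    // Key each chunk by uploadId + index so the chunks of one upload can
    // be looked up and reassembled in order from any node.
    public void saveChunk(String uploadId, int chunkIndex, byte[] data) {
        chunks.put(uploadId + ":" + chunkIndex, data);
    }

    public byte[] getChunk(String uploadId, int chunkIndex) {
        return chunks.get(uploadId + ":" + chunkIndex);
    }
}
```

Would an approach along these lines work in practice, or is there a better-established pattern for resumable uploads in a cluster?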