
The context: a client is sending a file to a server, and the file has been chunked into parts.

In a single-server setup, one option is to store the chunks on the server; when new chunks arrive, they get added to the existing data.
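
In rough Python, that single-server version looks something like this (a minimal sketch; the directory layout and `upload_id` scheme are just placeholders for illustration):

```python
import os

UPLOAD_DIR = "/var/uploads"  # local disk of the single server (placeholder path)

def save_chunk(upload_id: str, chunk_index: int, data: bytes) -> None:
    """Write each incoming chunk next to the earlier ones for the same upload."""
    chunk_dir = os.path.join(UPLOAD_DIR, upload_id)
    os.makedirs(chunk_dir, exist_ok=True)
    with open(os.path.join(chunk_dir, f"{chunk_index:06d}.part"), "wb") as f:
        f.write(data)

def assemble(upload_id: str, dest_path: str) -> None:
    """Concatenate the chunks in order once they have all arrived."""
    chunk_dir = os.path.join(UPLOAD_DIR, upload_id)
    with open(dest_path, "wb") as out:
        for name in sorted(os.listdir(chunk_dir)):
            with open(os.path.join(chunk_dir, name), "rb") as part:
                out.write(part.read())
```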

With multiple servers (microservices), one request may be sent to server A and the next to server B, so when B goes to work with a new chunk, it fails to retrieve the old chunks that are sitting on A.

What are best practices for handling this pattern? What I have so far is:

A) Route all requests regarding a given file to the same server, OR
B) Store the chunks of the file on a shared service
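
To make A) concrete, what I have in mind is deterministic routing on an upload ID, so every chunk of a given file lands on the same instance (the server list below is made up):

```python
import hashlib

SERVERS = ["server-a:8000", "server-b:8000", "server-c:8000"]  # made-up pool

def server_for(upload_id: str) -> str:
    """Hash the upload ID so all chunks of one file go to the same server."""
    digest = hashlib.sha256(upload_id.encode("utf-8")).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]
```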

The problem with A) is that it starts to defeat some of the advantages of having multiple servers.
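
B) would look roughly like the single-server sketch above, except the chunks go to a store every instance can reach; here I'm using Redis purely as a stand-in for "some shared service":

```python
import redis  # assumes a Redis instance reachable by every server

r = redis.Redis(host="shared-redis", port=6379)  # hostname is a placeholder

def save_chunk(upload_id: str, chunk_index: int, data: bytes) -> None:
    """Whichever server receives the chunk writes it to the shared store."""
    r.hset(f"upload:{upload_id}", chunk_index, data)

def assemble(upload_id: str) -> bytes:
    """Any server can rebuild the file once all chunks are present."""
    chunks = r.hgetall(f"upload:{upload_id}")
    return b"".join(chunks[k] for k in sorted(chunks, key=int))
```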

The problem with B) is that it requires a lot more file transfer back and forth.

Is there a canonical / standard way to handle this?


1 Answer


Usually you don't keep your files on the same server where your microservice is deployed. In a cloud environment, you would use a dedicated service offered by the cloud provider for this kind of purpose. For example, on AWS and Azure that would be Amazon S3 and Azure Blob Storage, respectively.

Regardless of whether your file is split into chunks or not, these services have the options to deal with it, because the storage is in one place. How this is scaled and handled internally is not something you should worry about; the cloud provider takes care of it, which is one of the reasons they offer it as a service.
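
For instance, with S3 the chunked case maps directly onto its multipart upload API. A rough sketch with boto3 (bucket and key names are placeholders):

```python
import boto3

def upload_chunks(chunks, bucket="my-upload-bucket", key="uploads/big-file.bin"):
    """Upload an iterable of byte chunks as one S3 object via multipart upload.
    Every part except the last must be at least 5 MB."""
    s3 = boto3.client("s3")

    # The returned UploadId ties all parts together, no matter which of
    # your service instances uploads each part.
    upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
    upload_id = upload["UploadId"]

    parts = []
    for number, chunk in enumerate(chunks, start=1):
        resp = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=upload_id,
            PartNumber=number, Body=chunk,
        )
        parts.append({"PartNumber": number, "ETag": resp["ETag"]})

    # S3 assembles the parts server-side; no extra transfer between your services.
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": parts},
    )
```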

This is a common way of storing any kind of file. Of course there are exceptions, such as configuration files, but usually you would use these services.
