
The situation:

I have a Node.js server behind NGINX, and NGINX is configured to limit the request body size to 5MB.

I need to upload large files (~15MB) from the client running in a browser to the server.

There are 3 instances of the server running WITHOUT shared memory/file systems.

What I have done:

I used some libraries to break the files into chunks (< 5MB) and send them to the server. After the last chunk was sent successfully, the client called a server endpoint to signal completion, and the server then merged the chunks. This worked when I had a single instance of the server running. With load balancing, however, each chunk request might be handled by a different instance, so the chunks may not be merged correctly.
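Roughly, the current client-side flow looks like the sketch below (simplified; the /upload-chunk and /upload-complete endpoints and the uploadId field are placeholder names, not my real code):

```js
// Simplified sketch of the current chunked-upload flow (placeholder endpoints).
const CHUNK_SIZE = 4 * 1024 * 1024; // stay under the 5MB NGINX limit

async function uploadInChunks(file) {
  const uploadId = crypto.randomUUID();                 // identifies this upload
  const totalChunks = Math.ceil(file.size / CHUNK_SIZE);

  for (let i = 0; i < totalChunks; i++) {
    const chunk = file.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    const form = new FormData();
    form.append('uploadId', uploadId);
    form.append('index', String(i));
    form.append('chunk', chunk);
    await fetch('/upload-chunk', { method: 'POST', body: form });
  }

  // Signal completion so the server can merge the chunks. With several
  // instances and no shared storage, the instance receiving this request
  // may not hold all (or any) of the chunks.
  await fetch('/upload-complete', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ uploadId, totalChunks, name: file.name }),
  });
}
```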

Solutions I have thought of:

The ideal solution (in my opinion) would be to stream the chunks to the server in a single request, so that only one server instance handles the whole upload.

The browser Streams API is still experimental. I would like to try it, but I have not found a good example to follow. I have also read that client-side streams are different from Node.js streams, so some extra work is needed to bridge the two.
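From what I understand, a streaming upload with fetch would look roughly like the sketch below (untested; request body streams only work in some browsers, seem to require HTTP/2 end to end, and need the duplex: 'half' option; /upload-stream is a placeholder endpoint):

```js
// Rough sketch of a streaming upload using fetch with a ReadableStream body.
// Untested: request body streams are still experimental and need HTTP/2
// plus the duplex: 'half' option.
async function streamUpload(file) {
  await fetch('/upload-stream', {            // placeholder endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/octet-stream' },
    body: file.stream(),                     // ReadableStream from the File/Blob
    duplex: 'half',                          // required for streaming request bodies
  });
}
```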

I did some research on the Transfer-Encoding: chunked HTTP header. It is said to be a good way to send large files, but I have not found a good working example of how to achieve this.
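On the Node.js side, my understanding is that the built-in http server already decodes Transfer-Encoding: chunked, so the request body can simply be piped to disk as it arrives (rough, untested sketch; the route and file path are placeholders):

```js
// Rough sketch: a Node.js endpoint that streams the request body to disk.
// The http module decodes Transfer-Encoding: chunked transparently, so req
// is just a readable stream regardless of how the body was framed.
const http = require('http');
const fs = require('fs');
const path = require('path');

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/upload-stream') {  // placeholder route
    const dest = fs.createWriteStream(path.join('/tmp', `upload-${Date.now()}`));
    req.pipe(dest);
    dest.on('finish', () => {
      res.writeHead(200);
      res.end('ok');
    });
    req.on('error', () => res.destroy());
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);
```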

I also thought of using WebSocket (or Socket.IO) to establish a connection with a single server instance and send the chunks over it. However, what I have read suggests that WebSocket is not a good way to send large files.
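If I went this route, the client side would presumably look something like the sketch below (no backpressure or acknowledgement handling, which I suspect is part of why WebSocket is discouraged for large files; the URL is a placeholder):

```js
// Sketch of sending a file as binary chunks over a WebSocket (placeholder URL).
// No backpressure or per-chunk acknowledgement handling.
function sendOverWebSocket(file) {
  const ws = new WebSocket('wss://example.com/upload');
  ws.binaryType = 'arraybuffer';
  const CHUNK_SIZE = 64 * 1024;

  ws.onopen = async () => {
    for (let offset = 0; offset < file.size; offset += CHUNK_SIZE) {
      const buf = await file.slice(offset, offset + CHUNK_SIZE).arrayBuffer();
      ws.send(buf);
    }
    ws.send(JSON.stringify({ done: true, name: file.name })); // completion marker
    // Leave it to the server to close the connection once the file is persisted.
  };
}
```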

The question:

How can I send large files to a single server instance in the most efficient way?

Huy Doan