My API server has very limited disk space (500 MB) and memory (1 GB). One of the API calls it serves receives a file: the consumer calls the API and passes a URL to be downloaded.
The "goal" of my server is to upload this file to Amazon S3. Unfortunately, I can't ask the consumer to upload the file directly to S3(part of requirements).
The problem is that these are sometimes huge files (10 GB), and saving one to disk and then uploading it to S3 is not an option with the 500 MB disk space limit.
My question is: how can I "pipe" the file from the input URL to S3 using the curl command-line program on Linux?
Note: I was able to pipe it in several different ways, but either the command first tries to download the whole file and fails, or I hit a memory error and curl quits. My guess is that the download is much faster than the upload, so the buffered data keeps growing until it blows past the server's 1 GB of memory when I get a 10 GB file.
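To illustrate, one of my attempts looked roughly like this (a sketch only; the URL, bucket, and key are placeholders, and it assumes the AWS CLI is installed on the server):

    # Stream the download straight into the AWS CLI, which uploads from
    # stdin when "-" is given as the source, so nothing is written to disk.
    curl -sS "https://example.com/huge-file.bin" \
      | aws s3 cp - "s3://my-bucket/huge-file.bin"

Even a pipeline like this still hits the memory problem described above on the 10 GB files.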
Is there a way to achieve what I'm trying to do using curl and piping?
Thank you,
- Jack