
I have an API method that, when passed a comma-separated string of file keys, converts it to an array and downloads the corresponding files from S3. However, when I call this API, I get the error Error: ENOENT: no such file or directory, open <filename here> on the server.

This is my API:

reports.get('/xxx/:fileName', async (req, res) => {

  var AWS = require('aws-sdk');
  var fs = require('fs');

  var s3 = new AWS.S3();

  var str_array = req.params.fileName.split(',');

  for (var i = 0; i < str_array.length; i++) {
    var filename = str_array[i].trim();
    var localFileName = './' + filename;

    let file = fs.createWriteStream(localFileName);

    s3.getObject({
      Bucket: config.reportBucket,
      Key: filename
    })
    .on('error', function (err) {
      res.end("File download failed with error " + err.message);
    })
    .on('httpData', function (chunk) {
      file.write(chunk);
    })
    .on('httpDone', function () {
      file.end();
    })
    .send();
  }
  res.end("Files have been downloaded successfully");
});

How can I successfully download multiple files from S3 by passing my array of keys?

Lee
arcade16
  • try writing to /tmp/ + filename and see if the error msg resolves – Robert Rowntree Jan 29 '19 at 01:27
  • 1
    maybe related to this: https://stackoverflow.com/questions/12906694/fs-createwritestream-does-not-immediately-create-file need to open the file first. – hotfire42 Jan 29 '19 at 01:30
  • Same issue when writing to /tmp/. There is no folder named tmp in my project and creating one does not solve the problem. I cannot open the file first either as the files are being downloaded and do not exist on disk. – arcade16 Jan 29 '19 at 01:42

2 Answers


You need to be a little more detailed about the requirements: what is the use case, is there a common prefix shared by all the S3 files you want to download, is there any limit on size, etc.? The code below uses the Java aws-s3-sdk 1.11.136 and downloads all the objects at the given paths. Please note that it makes one call per object (not recommended).

  public void downloadMultipleS3Objects(String[] objectPaths) {
      Arrays.stream(objectPaths)
            .filter(objectPath -> amazonS3.doesObjectExist("BUCKET_NAME", objectPath))
            .forEach(objectPath -> {
                S3ObjectInputStream content =
                    amazonS3.getObject("BUCKET_NAME", objectPath).getObjectContent();
                // TODO: you may write this stream to a file
            });
  }

Note: in case all your objects share a common prefix, there are other APIs that achieve this in a better way.
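Since the question uses the Node.js SDK, here is a minimal sketch of the prefix-based approach in JavaScript (assuming aws-sdk v2; the bucket and prefix names are placeholders, and truncated result pages are not handled):

```javascript
// List every key under a common prefix with listObjectsV2, so each key
// can then be fetched individually. Takes the S3 client as a parameter.
function listKeysUnderPrefix(s3, bucket, prefix) {
  return s3
    .listObjectsV2({ Bucket: bucket, Prefix: prefix })
    .promise()
    .then(res => res.Contents.map(obj => obj.Key));
}
```

A real implementation would also loop while `IsTruncated` is set, passing `ContinuationToken` to fetch the remaining pages.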

Vipin Sharma
  • Ultimately, I need to be able to pick and choose files from a folder within a bucket. No limitation on size, no predictable commonality between files with regards to prefix outside of the bucket being a constant. – arcade16 Jan 29 '19 at 06:03
  • The code I mentioned should work. It is just checking whether the files exist on S3 and then fetching a stream from S3. Not really sure about the JavaScript SDK, if that is what you are looking for. – Vipin Sharma Jan 29 '19 at 06:09

The issue was that my AWS file keys were messing with the local file paths. AWS keys follow the folder hierarchy, so a file placed in a folder has a key of folderName/fileName. I stripped the folder name off like this:

localFileName = './temp/' + filename.substring(filename.indexOf("/") + 1);

Also, I had to do the following to have new files be created on disk as they are being downloaded:

file = fs.createWriteStream(localFileName, { flags: 'a', encoding: 'utf-8', mode: 0o666 });
file.on('error', function(e) { console.error(e); });
arcade16