
I didn't want to store uploaded files on disk or in memory as an intermediate step, so I decided to use busboy to stream file uploads directly to AWS S3. Here's my code:

function handleUpload(req, res, bucket, key) {
    let bb = new Busboy({ headers: req.headers, limits: { fileSize: 10 * 1024 * 1024, files: 1 } });
    const uploads = [];
    bb.on('file', (fieldname, stream, filename, encoding, mimeType) => {
        console.log(`Uploaded fieldname: ${fieldname}; filename: ${filename}, mimeType: ${mimeType}`);
        const params = { Bucket: bucket, Key: key, Body: stream, ContentType: mimeType };
        uploads.push({ filename, result: S3.svc.upload(params).promise().catch(err => err) });
    });
    bb.on('finish', async () => {
        const results = await Promise.all(uploads.map(async (upload) => ({ ...upload, result: await upload.result })));
        // handle success/failure
    });
    req.pipe(bb);
}

This works well, but the problem is that busboy enforces size limits not by throwing an error, but simply by truncating the uploaded file silently.

How can I short-circuit/abort an upload if the file is too big and return an error from my API?

As per my original intention, I would like to avoid storing the file on disk or in memory if possible...

markvgti

1 Answer


According to the busboy documentation, under the special events for file streams:

If a configured file size limit was reached, stream will both have a boolean property truncated (best checked at the end of the stream) and emit a 'limit' event to notify you when this happens.

So in your case you could use:

stream.on('limit', () => {
    console.log('LIMIT');
});
83n017