
Is there a way to do a multipart upload via the browser using a generated presigned URL?

premunk

3 Answers


Angular - Multipart Aws Pre-signed URL

Example

https://multipart-aws-presigned.stackblitz.io/

https://stackblitz.com/edit/multipart-aws-presigned?file=src/app/app.component.html

Download Backend: https://www.dropbox.com/s/9tm8w3ujaqbo017/serverless-multipart-aws-presigned.tar.gz?dl=0

To upload large files to an S3 bucket using pre-signed URLs, you need a multipart upload: the file is split into many parts, which also allows the parts to be uploaded in parallel.

Below is a basic example of the backend and the frontend.

Backend (Serverless TypeScript)

// Replace with your credentials; in production, load them from the
// environment or an IAM role rather than hardcoding them
const AWSData = {
  accessKeyId: 'Access Key',
  secretAccessKey: 'Secret Access Key'
};

There are 3 endpoints:

Endpoint 1: /start-upload

Ask S3 to start the multipart upload; the response contains an UploadId that must be sent with every part that will be uploaded.

import { APIGatewayProxyHandler } from 'aws-lambda';
import * as AWS from 'aws-sdk';

export const start: APIGatewayProxyHandler = async (event, _context) => {
  const params = {
    Bucket: event.queryStringParameters.bucket, /* Bucket name */
    Key: event.queryStringParameters.fileName /* File name */
  };

  const s3 = new AWS.S3(AWSData);

  const res = await s3.createMultipartUpload(params).promise();

  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify({
      data: {
        uploadId: res.UploadId
      }
    })
  };
};
Endpoint 2: /get-upload-url

Create a pre-signed URL for each part into which the file was split.

export const uploadUrl: APIGatewayProxyHandler = async (event, _context) => {
  const params = {
    Bucket: event.queryStringParameters.bucket, /* Bucket name */
    Key: event.queryStringParameters.fileName, /* File name */
    PartNumber: parseInt(event.queryStringParameters.partNumber, 10), /* Part to create the pre-signed URL for */
    UploadId: event.queryStringParameters.uploadId /* UploadId from the Endpoint 1 response */
  };

  const s3 = new AWS.S3(AWSData);

  // getSignedUrl is synchronous when called without a callback
  const res = s3.getSignedUrl('uploadPart', params);

  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify(res)
  };
};
Endpoint 3: /complete-upload

After all the parts of the file have been uploaded, S3 must be told, so that it can assemble the parts into the final object.

export const completeUpload: APIGatewayProxyHandler = async (event, _context) => {
  // Parse the POST body
  const bodyData = JSON.parse(event.body);

  const s3 = new AWS.S3(AWSData);

  const params = {
    Bucket: bodyData.bucket, /* Bucket name */
    Key: bodyData.fileName, /* File name */
    MultipartUpload: {
      Parts: bodyData.parts /* Parts uploaded: [{ ETag, PartNumber }], in ascending PartNumber order */
    },
    UploadId: bodyData.uploadId /* UploadId from the Endpoint 1 response */
  };

  const data = await s3.completeMultipartUpload(params).promise();

  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Credentials': true,
    },
    body: JSON.stringify(data)
  };
};

Frontend (Angular 9)

The flow in the frontend is:

1. The file is divided into 10 MB parts.
2. With the file in hand, Endpoint 1 is called to start the multipart upload.
3. Using the UploadId from that response, a pre-signed upload URL is requested from Endpoint 2 for each 10 MB part.
4. Each part, converted to a Blob, is uploaded with a PUT to its pre-signed URL from Endpoint 2.
5. Once every part has finished uploading, a final request is made to Endpoint 3 to complete the upload.

In the example, all of this is handled by the function uploadMultipartFile.
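The client-side flow above can be sketched as follows. The endpoint paths and response shapes come from the backend in this answer; the helper names and the injected `fetchFn` parameter are ours (pass `window.fetch` in the browser), so treat this as a sketch rather than the answer's exact Angular code.

```typescript
const PART_SIZE = 10 * 1024 * 1024; // 10 MB, as in the answer

interface PartRange { partNumber: number; start: number; end: number; }

// Pure helper: the [start, end) byte range of every part of a file.
export function partRanges(fileSize: number, partSize: number = PART_SIZE): PartRange[] {
  const ranges: PartRange[] = [];
  for (let start = 0, n = 1; start < fileSize; start += partSize, n++) {
    ranges.push({ partNumber: n, start, end: Math.min(start + partSize, fileSize) });
  }
  return ranges;
}

type FetchLike = (url: string, init?: any) => Promise<any>;
interface FileLike { name: string; size: number; slice(start: number, end: number): any; }

export async function uploadMultipartFile(
  file: FileLike, bucket: string, api: string, fetchFn: FetchLike
) {
  const name = encodeURIComponent(file.name);

  // Endpoint 1: start the upload and obtain the UploadId.
  const started = await fetchFn(`${api}/start-upload?bucket=${bucket}&fileName=${name}`);
  const uploadId: string = (await started.json()).data.uploadId;

  // Endpoint 2 + PUT: presign and upload each part, remembering its ETag.
  const parts: { ETag: string; PartNumber: number }[] = [];
  for (const { partNumber, start, end } of partRanges(file.size)) {
    const urlRes = await fetchFn(
      `${api}/get-upload-url?bucket=${bucket}&fileName=${name}` +
      `&partNumber=${partNumber}&uploadId=${uploadId}`
    );
    const url: string = await urlRes.json();
    const put = await fetchFn(url, { method: 'PUT', body: file.slice(start, end) });
    parts.push({ ETag: put.headers.get('ETag'), PartNumber: partNumber });
  }

  // Endpoint 3: ask S3 to assemble the parts into the final object.
  await fetchFn(`${api}/complete-upload`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ bucket, fileName: file.name, uploadId, parts }),
  });
  return parts;
}
```

Note that the ETag response header must be exposed to the browser (`ExposeHeaders` in the bucket's CORS configuration), or `headers.get('ETag')` will return null.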

  • Endpoint 2: /get-upload-url — can't we pass the number of parts from the client and generate pre-signed URLs for all parts, so that we don't end up making an API call for each part? – Akanksha Chaturvedi Jun 02 '21 at 09:19
  • `await s3.completeMultipartUpload(params).promise()` is a blocking request to AWS S3 that could take several minutes, per the AWS S3 docs (https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/completemultipartuploadcommand.html). In that case, does it not make sense to perform this action from the front-end itself? – rhetonik Dec 15 '22 at 10:38
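One of the comments above suggests presigning every part in a single request instead of one round trip per part. A hypothetical sketch of that variant (the `sign` callback stands in for a call like `s3.getSignedUrl('uploadPart', { ...params, PartNumber })`; this batch endpoint is not part of the original answer):

```typescript
// Hypothetical batch variant of Endpoint 2: presign all parts in one call.
// The signer callback abstracts the AWS SDK call so the loop itself is plain.
type PartSigner = (partNumber: number) => string;

export function presignAllParts(totalParts: number, sign: PartSigner): string[] {
  const urls: string[] = [];
  // S3 part numbers are 1-based and must be 1..10000.
  for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
    urls.push(sign(partNumber));
  }
  return urls;
}
```

The handler would then take a `totalParts` query parameter and return the whole array, so the client makes one API call instead of one per part.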

I managed to achieve this in a serverless architecture by creating a Canonical Request for each part upload using Signature Version 4. You will find the document here: AWS Multipart Upload Via Presign Url
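For reference, the Signature Version 4 arithmetic that answer relies on can be sketched as follows. The bucket, key, and dates are made-up illustrative values, and this follows AWS's published SigV4 algorithm rather than that author's exact code; a real presigned URL additionally carries X-Amz-Algorithm, X-Amz-Credential, X-Amz-Date, X-Amz-Expires, and X-Amz-SignedHeaders in the (signed) query string.

```typescript
import { createHash, createHmac } from 'crypto';

const sha256hex = (s: string) => createHash('sha256').update(s, 'utf8').digest('hex');
const hmac = (key: Buffer | string, s: string) =>
  createHmac('sha256', key).update(s, 'utf8').digest();

// Step 1: derive the signing key from the secret via the HMAC chain
// date -> region -> service -> "aws4_request".
export function signingKey(
  secret: string, date: string, region: string, service: string
): Buffer {
  const kDate = hmac('AWS4' + secret, date); // date is YYYYMMDD
  const kRegion = hmac(kDate, region);
  const kService = hmac(kRegion, service);
  return hmac(kService, 'aws4_request');
}

// Step 2: hash the canonical request into a "string to sign" and sign it.
export function signCanonicalRequest(
  secret: string, amzDate: string, region: string, canonicalRequest: string
): string {
  const date = amzDate.slice(0, 8);
  const scope = `${date}/${region}/s3/aws4_request`;
  const stringToSign =
    ['AWS4-HMAC-SHA256', amzDate, scope, sha256hex(canonicalRequest)].join('\n');
  return hmac(signingKey(secret, date, region, 's3'), stringToSign).toString('hex');
}

// Canonical request for presigning one UploadPart PUT (simplified: the real
// canonical query string also includes the X-Amz-* presign parameters).
export function uploadPartCanonicalRequest(
  bucket: string, key: string, partNumber: number, uploadId: string
): string {
  return [
    'PUT',
    '/' + key,                                        // URI-encoded object key
    `partNumber=${partNumber}&uploadId=${uploadId}`,  // query params, sorted by name
    `host:${bucket}.s3.amazonaws.com\n`,              // canonical headers
    'host',                                           // signed headers
    'UNSIGNED-PAYLOAD',                               // presigned URLs skip the payload hash
  ].join('\n');
}
```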


From the AWS documentation:

For request signing, multipart upload is just a series of regular requests, you initiate multipart upload, send one or more requests to upload parts, and finally complete multipart upload. You sign each request individually, there is nothing special about signing multipart upload request

So I think you have to generate a presigned URL for each part of the multipart upload :(

What is your use case? Can't you execute a script from your server, and give S3 access to that server?

Tom
  • Should or shouldn't have generated the presigned URL? – premunk Aug 05 '15 at 21:49
  • My understanding is that you have to presign each part. I recommend to use the S3 sdk directly, rather than presigned urls for this. – Tom Aug 05 '15 at 22:13
  • Aha, the issue is with storing the client secret key etc as I am making the requests via the browser. Also, Is there any documentation evidence to your understanding? – premunk Aug 05 '15 at 22:26
  • if you don't want to store the client secret key, can't you create an ec2 machine with an IAM role attached? This would require to run from the server and not from the browser, maybe with some ajax call or whatever suits you best. As an evidence, I find the documentation pretty clear when it says "You sign each request individually, there is nothing special about signing multipart upload requests", no? – Tom Aug 05 '15 at 22:39
  • hmm... I need to look into that. Another workaround I found was to chunk the video as blobs and do a multipart upload via the browser. That seems to work well, maybe better, if I get around the secret key storage. – premunk Aug 05 '15 at 22:51
  • Maybe you can also use IAM roles to get temporary credentials, so that you only store a "limited" secret key, which will expire – Tom Aug 06 '15 at 06:49
  • @Tom Seems like there is some support (https://aws.amazon.com/blogs/developer/announcing-the-amazon-s3-managed-uploader-in-the-aws-sdk-for-javascript/) unless the managed uploader doesn't allow for presigned urls. – geoboy Jun 27 '17 at 23:25
  • From where to initiate multipart upload? If upload is being initiated from browser then is it required to have access and secret key? If upload is initiated using secret key then what is the purpose of signed url? – Ashwin Nov 29 '18 at 19:54
  • To initiate from the browser then you need either a public bucket (but don't do that), or you need aws credentials (but don't hardcode credentials). if you need aws credentials on client side, then you can use Cognito to get authentication (see Cognito Users Pool) and temporary AWS credentials (see Cognito Federated Identities), but this is a whole topic in itself. otherwise handle this on server side. imho however, using signed urls on client side makes no sense (if you have aws credentials, then why using signed urls?) – Tom Dec 05 '18 at 10:47
  • @Tom I think you meant to say creating signed urls on client side makes no sense, not "signed urls on client side makes no sense". I'm sure most realize that, but reading some of the other comments about hiding amazon secrets in the browser makes me wonder...! – user2677034 Aug 01 '20 at 03:48