
Following this AWS documentation, I was able to create a new endpoint on my API Gateway that can manipulate files in an S3 bucket. The problem I'm having is file size: API Gateway has a payload limit of 10 MB.

I was wondering: without using a Lambda work-around (this link would help with that), would it be possible to upload and get files bigger than 10 MB (even as binary if needed), seeing as this uses an S3 service as a proxy - or does the limit apply regardless?

I've tried PUTting and GETting files bigger than 10 MB, and each response is the typical "message": "Timeout waiting for endpoint response".

Looks like Lambda is the only way; just wondering if anyone else has got around this using S3 as a proxy.

Thanks


2 Answers


You can create a Lambda proxy function that returns a redirect to an S3 pre-signed URL.

Example JavaScript code that generates a pre-signed S3 URL:

var AWS = require('aws-sdk');   // AWS SDK for JavaScript (v2)
var s3 = new AWS.S3();

var s3Params = {
    Bucket: 'test-bucket',                    // target bucket
    Key: file_name,                           // object key to sign for
    ContentType: 'application/octet-stream',
    Expires: 10000                            // URL validity in seconds
};

s3.getSignedUrl('putObject', s3Params, function(err, url){
   ...
});

Then your Lambda function returns a redirect response to your client, like:

{
    "statusCode": 302,
    "headers": { "Location": "url" }
}
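Putting the two pieces together: with Lambda proxy integration, API Gateway passes the `statusCode` and `headers` of the returned object straight through to the client. A minimal sketch of the redirect-building step (the URL here is a made-up placeholder, not a real signed URL):

```javascript
// Wrap a pre-signed URL (the `url` handed back by getSignedUrl) in an
// API Gateway proxy-integration response. The client follows the 302
// and uploads directly to S3, bypassing the 10 MB payload limit.
function buildRedirect(signedUrl) {
    return {
        statusCode: 302,
        headers: { Location: signedUrl }
    };
}

// Hypothetical usage with a placeholder URL:
var response = buildRedirect('https://test-bucket.s3.amazonaws.com/file?X-Amz-Signature=...');
console.log(response.statusCode); // 302
```

Note that the client's HTTP library must follow redirects for the PUT (or issue a second request to the `Location` URL itself) for this flow to work.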

You might be able to find more information you need from this documentation.

  • Although not ideal (I was hoping to use API as an Amazon S3 Proxy to achieve this) it seems there is no getting around that limitation. This answer led me to what I think I will be using - Presigned URL with AWS (http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjectPreSignedURLDotNetSDK.html), thank you. – Hexie Mar 24 '17 at 02:36
  • 4
    But with the redirect, don't you have the problem that the initial request's payload is still too big? I mean the PUT would still go through the API gateway, which would lead you to the Lambda which would redirect you to a direct S3 upload. But it will fail, because the payload is too big at the API Gateway, or am I misunderstanding something? I have actually implemented it, and it works nicely with the redirect for small files, but with large files, the problem remains (it doesn't even get through my API gateway...) – konse Jul 22 '18 at 17:49
  • @konse the client would first have to issue a GET request to retrieve the URL which is also not ideal. – Luke Becker Aug 02 '21 at 13:30

If you have large files, consider uploading them to S3 directly from your client. You can create an API endpoint that returns a signed URL for the client to use for the upload, to implement access control over your private content.

Also, you can consider using multipart uploads for even larger files to speed up the upload.
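As a rough illustration of the multipart planning a client would do (the 5 MiB minimum part size and 10,000-part maximum are S3's documented multipart-upload limits; `planParts` is a hypothetical helper, not an SDK call):

```javascript
// S3 multipart-upload limits: every part except the last must be at
// least 5 MiB, and an upload may have at most 10,000 parts.
var MIN_PART_SIZE = 5 * 1024 * 1024; // 5 MiB
var MAX_PARTS = 10000;

// Decide how to split a file of `fileSize` bytes into upload parts.
function planParts(fileSize, partSize) {
    partSize = Math.max(partSize || MIN_PART_SIZE, MIN_PART_SIZE);
    var count = Math.ceil(fileSize / partSize);
    if (count > MAX_PARTS) {
        // Grow the part size so the file fits within 10,000 parts.
        partSize = Math.ceil(fileSize / MAX_PARTS);
        count = Math.ceil(fileSize / partSize);
    }
    return { partSize: partSize, count: count };
}

// Example: a 100 MiB file splits into twenty 5 MiB parts.
var plan = planParts(100 * 1024 * 1024);
console.log(plan.count); // 20
```

Each part can then be PUT concurrently (each to its own pre-signed URL if the client is uploading directly), which is where the speed-up comes from.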

  • Thanks for the suggestion, but wouldn't you need a Lambda to run what you are suggesting (returning a signed URL)? The URL I posted in the question already shows that, and my query was to do it without Lambda. – Hexie Mar 23 '17 at 20:42
  • Yes. You need a compute service for this. There are a few other approaches. If you use AWS Cognito for authentication, or have an endpoint that calls Amazon STS from the backend to generate a temporary access token (this one is more similar to signed URLs), you can access private resources in S3 from your browser client. – Ashan Mar 25 '17 at 00:51