I know how to limit the upload size of an object using the HTTP POST policy method described here: http://doc.s3.amazonaws.com/proposals/post.html#Limiting_Uploaded_Content

But I would like to know how the same can be done while generating a pre-signed URL with the S3 SDK on the server side, as an IAM user.

The SDK's putObject call has no such option in its parameters: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property

Neither does getSignedUrl: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getSignedUrl-property

Please note: I am already aware of this answer: AWS S3 Pre-signed URL content-length, and it is NOT what I am looking for.

  • No, I ended up using S3 policies for HTTP POST instead: http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html – Koder Jul 12 '15 at 15:16
  • @Koder did you end up using a combination of pre-signed URL + HTTP POST policy? If yes, could you post that as an answer? It would be helpful for me! – Niks Jul 22 '15 at 07:59
  • @NikhilPatil - Not sure what you mean by combination. During the upload process, I return an HTTP POST policy to the browser, and the browser uploads the file using that policy. When linking the file for the end user, I generate a pre-signed URL, since the file must be protected from anonymous use (I have a URL timeout configured when generating it). But I don't use a pre-signed URL during the upload process. – Koder Jul 24 '15 at 12:26
  • @Koder Yup, you answered my question :) I was trying to use a pre-signed URL in a browser upload and wanted to specify a policy to limit the size, which wasn't possible. I too have concluded that what you ended up doing is the best possible way. Thanks! This was helpful. – Niks Jul 24 '15 at 13:07
  • I am surprised that AWS has no arrangement to limit the maximum upload size with pre-signed URLs. – Vinit Khandelwal Jun 03 '21 at 07:02

4 Answers

The V4 signing protocol offers the option to include arbitrary headers in the signature; see: http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html So, if you know the exact Content-Length in advance, you can include it in the signed URL. Based on some experiments with curl, S3 will truncate the file if you send more than is specified in the Content-Length header. Here is an example of a V4 signature with multiple headers included in the signature: http://docs.aws.amazon.com/general/latest/gr/sigv4-add-signature-to-request.html

user1055568
  • What happens if you send _less_ than specified in the header, though? Is the request accepted, and if so, is the content-length updated on S3? – julealgon Jan 29 '21 at 21:12
  • @julealgon It will return a 400 Bad Request. The Content-Length is typically determined as the request is sent, so if you manually specify a Content-Length larger than the actual file size, S3 views it as a malformed request. – Janac Meena May 22 '21 at 15:26

For any other wanderers who end up on this thread - when you send the request from your client, there are a few possibilities for the Content-Length header:

  1. The Content-Length is calculated automatically, and S3 will store up to 5GB per file.

  2. The Content-Length is manually set by your client, in which case one of three scenarios occurs:

  • The Content-Length matches your actual file size, and S3 stores the file.
  • The Content-Length is less than your actual file size, so S3 truncates your file to fit it.
  • The Content-Length is larger than your actual file size, and you receive a 400 Bad Request.

In any case, a malicious user can bypass your client and manually send an HTTP request with whatever headers they want, including a much larger Content-Length than you may be expecting. Signed URLs do not protect against this! The only way is to set up a POST policy. Official docs here: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html

More details here: https://janac.medium.com/sending-files-directly-from-client-to-amazon-s3-signed-urls-4bf2cb81ddc3?postPublishedType=initial

Alternatively, you can have a Lambda that automatically deletes files that are larger than expected.

Janac Meena
  • Why do you say signed URLs do not protect against changing header fields that are included in the signature? – user1055568 Jun 08 '21 at 20:10
  • @user1055568 Because even if you specify a header as a parameter while creating your signed URL, it will still accept requests that do not have the matching header. You can see a detailed explanation [here](https://towardsaws.com/sending-files-directly-from-client-to-amazon-s3-signed-urls-4bf2cb81ddc3). – Janac Meena Jun 08 '21 at 22:46
  • I believe you are incorrect. The V4 signing protocol allows inclusion of the Content-Length in the signature. Perhaps some libraries do not support this, which is good to alert people to. – user1055568 Jun 09 '21 at 18:52
  • @user1055568 Yes, you can include ContentLength as a parameter; I don't disagree with you about that. I'm speaking specifically about *enforcing* the ContentLength. Even though you can include a ContentLength while creating a pre-signed S3 URL, and it will return the specified ContentLength in the map of headers, there is nothing stopping a client from overriding the Content-Length header and sending the request to your pre-signed URL, which will be processed with the overridden value. I would recommend trying this with Postman to convince yourself. Good luck! :) – Janac Meena Jun 10 '21 at 02:14
  • The tool you are using to create the signature must be broken, as it is not including the ContentLength in the signature. I recommend you review the Amazon documentation on how to include headers in the signature. – user1055568 Jun 19 '21 at 16:04

You may not be able to limit content upload size ex-ante, especially considering POST and multi-part uploads. You could use AWS Lambda to create an ex-post solution: set up a Lambda function to receive notifications from the S3 bucket, have the function check the object size, and have it delete the object or take some other action.

Here's some documentation on Handling Amazon S3 Events Using AWS Lambda.
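This cleanup step can be sketched as a small handler. The 10 MiB limit is an arbitrary example, the event shape follows S3's `s3:ObjectCreated:*` notification format, and the S3 client is passed in as a parameter here (rather than constructed with the AWS SDK inside the function) purely to keep the sketch testable:

```javascript
// Sketch of an ex-post cleanup handler for S3 ObjectCreated notifications.
// MAX_BYTES is an illustrative limit; a real Lambda would construct its own
// AWS SDK S3 client instead of receiving one.
const MAX_BYTES = 10 * 1024 * 1024; // example: 10 MiB

async function deleteOversized(event, s3) {
  for (const record of event.Records) {
    const { bucket, object } = record.s3;
    // The notification already carries the stored object's size,
    // so no extra HEAD request is needed.
    if (object.size > MAX_BYTES) {
      await s3.deleteObject({ Bucket: bucket.name, Key: object.key }).promise();
    }
  }
}
```

Note the window between upload and deletion: the oversized object exists in the bucket until the function runs.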

adamkonrad
  • For anyone else who didn't understand: the term ex-ante (sometimes written ex ante or exante) is a phrase meaning "before the event". – Janac Meena May 21 '21 at 15:11
  • Lambda triggering is not needed if you are OK with what S3 lifecycle rules offer: you can target any files that are bigger than some number of bytes and automatically delete them. You can also filter by key prefix, to avoid selecting every object in the bucket that exceeds the set size. – Lukas Liesis Jan 11 '23 at 22:14

You can specify the minimum and maximum allowed sizes, in bytes, using a POST policy condition called content-length-range:

{
  "expiration": "2022-02-14T13:08:46.864Z",
  "conditions": [
    { "acl": "bucket-owner-full-control" },
    { "bucket": "my-bucket" },
    ["starts-with", "$key", "stuff/clientId"],
    ["content-length-range", 1048576, 10485760]
  ]
}
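Only S3 enforces these bounds, but a client can pre-check a file against the same condition before attempting the upload, for a friendlier error. The helper below is illustrative, not part of any SDK; the bounds are inclusive and expressed in bytes:

```javascript
// Illustrative helper: check a file size against a POST policy's
// content-length-range condition before attempting the upload.
function sizeAllowedByPolicy(policy, fileSizeBytes) {
  const range = policy.conditions.find(
    (c) => Array.isArray(c) && c[0] === 'content-length-range'
  );
  if (!range) return true; // no size condition in the policy
  const [, min, max] = range; // inclusive bounds, in bytes
  return fileSizeBytes >= min && fileSizeBytes <= max;
}

// Example policy mirroring the one above (1 MiB .. 10 MiB).
const examplePolicy = {
  expiration: '2022-02-14T13:08:46.864Z',
  conditions: [
    { bucket: 'my-bucket' },
    ['content-length-range', 1048576, 10485760],
  ],
};
```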
Oscar Chen