In the aws-sdk's S3 class, what is the difference between `upload()` and `putObject()`? They seem to do the same thing. Why might I prefer one over the other?

4 Answers
The advantages of using the AWS SDK's `upload()` over `putObject()` are as follows:
- If the reported MD5 upon upload completion does not match, it retries.
- If the payload is large enough, it uses multipart upload to upload parts in parallel.
- It retries based on the client's retry settings.
- It supports progress reporting.
- It sets the ContentType based on the file extension if you do not provide it.
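The multipart behavior in the second bullet can be sketched without the SDK. This is illustrative only: the real part sizing (minimum 5 MB per part), retries, and concurrency are handled internally by the SDK's `ManagedUpload`; `splitIntoParts` is a hypothetical helper, not an SDK function.

```javascript
// Sketch of the chunking that upload() performs internally before
// sending parts concurrently. 5 MB is S3's minimum multipart part size.
function splitIntoParts(buffer, partSize) {
  const parts = [];
  for (let offset = 0; offset < buffer.length; offset += partSize) {
    parts.push(buffer.slice(offset, offset + partSize));
  }
  return parts;
}

const payload = Buffer.alloc(12 * 1024 * 1024); // 12 MB dummy payload
const parts = splitIntoParts(payload, 5 * 1024 * 1024);
console.log(parts.length); // 3 parts: 5 MB + 5 MB + 2 MB
```

With `putObject()`, by contrast, the whole payload goes up in a single PUT request.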


- So why might someone prefer `putObject()` over `upload()`? – callum Jul 18 '16 at 17:52
- putObject is mostly used in policy for AWS S3. – Piyush Patil Jul 18 '16 at 18:00
- Your first bullet point is wrong according to the docs: "Uploads an arbitrarily sized buffer, blob, or stream" – callum Jul 18 '16 at 19:38
- The docs for the [upload()](http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#upload-property) function. – callum Jul 18 '16 at 20:06
- Yes, I second what @error2007s says: which one are you referring to in the bullet list? – Philip Kirkbride Dec 15 '16 at 12:55
- Can you please elaborate? Are those points about `upload()` or about `putObject()`? When you say "progress reporting", which function do you mean: `upload()` or `putObject()`? – huskygrad Oct 21 '20 at 23:43
- @Sanket `upload()` – NubPro Nov 17 '20 at 10:26
- Any suggestion on what the difference is between `s3.write` vs `s3.upload`, and which should I use? – Rahul Ahire Feb 14 '21 at 16:06
`upload()` allows you to control how your object is uploaded. For example, you can define concurrency and part size.

From their docs: "Uploads an arbitrarily sized buffer, blob, or stream, using intelligent concurrent handling of parts if the payload is large enough."

One specific benefit I've discovered is that `upload()` will accept a stream without a content length defined, whereas `putObject()` does not.

This was useful as I had an API endpoint that allowed users to upload a file. The framework delivered the file to my controller in the form of a readable stream without a content length. Instead of having to measure the file size, all I had to do was pass it straight through to the `upload()` call.

- Would it be safe to assume, when in doubt, to prefer the S3 SDK's upload over the putObject method then? – Dev1ce Nov 11 '19 at 10:30
- @AniruddhaRaje I think it depends on the use case. By default I'd personally use putObject because it follows the default convention *Object (getObject, headObject, etc.). – rlay3 Nov 20 '19 at 23:32
When looking for the same information, I came across: https://aws.amazon.com/blogs/developer/uploading-files-to-amazon-s3/

This source is a little dated (referencing `upload_file()` and `put()` instead -- or maybe it is the Ruby SDK?), but it looks like `putObject()` is intended for smaller objects than `upload()`.

It recommends `upload()` and specifies why: "This is the recommended method of using the SDK to upload files to a bucket. Using this approach has the following benefits:"
- Manages multipart uploads for objects larger than 15MB.
- Correctly opens files in binary mode to avoid encoding issues.
- Uses multiple threads for uploading parts of large objects in parallel.

Then it covers the `putObject()` operation: "For smaller objects, you may choose to use `#put` instead."
EDIT: I was having problems with the `.abort()` operation on my `.upload()` and found this helpful: abort/stop amazon aws s3 upload, aws sdk javascript

Now my various other events from https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Request.html are firing as well! With `.upload()` I only had 'httpUploadProgress'.

This question was asked almost six years ago, and I stumbled across it while searching for information on the latest AWS Node.js SDK (V3). While V2 of the SDK supports the `upload()` and `putObject()` functions, the V3 SDK only supports Put Object functionality as `PutObjectCommand`. The ability to upload in parts is supported as `UploadPartCommand` and `UploadPartCopyCommand`, but the standalone `upload()` function available in V2 is not, and there is no `UploadCommand`.

So if you migrate to the V3 SDK, you will need to migrate to `PutObjectCommand`. Get Object is also different in V3: a Buffer is no longer returned; instead you get a readable stream or a Blob. So if you previously got the data through `Body.toString()`, you now have to implement a stream reader or handle Blobs.

EDIT: the upload functionality can be found in the AWS Node.js SDK (V3) under `@aws-sdk/lib-storage`. Here is a direct link: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/modules/_aws_sdk_lib_storage.html
