Does anyone know if there is a limit to the number of objects I can put in an S3 bucket? can I put a million, 10 million etc.. all in a single bucket?
-
Why not drop a million, or 10 million in and find out? – PurplePilot Oct 20 '10 at 18:25
-
10,000 requests for $0.01 could get expensive for finding the outer limits. Thanks for the quote below – Quotient Oct 20 '10 at 20:28
-
It's 20,000 for $0.01 now – Petah Feb 10 '15 at 07:46
7 Answers
According to Amazon:
Write, read, and delete objects containing from 0 bytes to 5 terabytes of data each. The number of objects you can store is unlimited.
Source: http://aws.amazon.com/s3/details/ as of Sep 3, 2015.

-
Note: 5 GB is the maximum for a single PUT. If you want to upload a 5 TB object, you'll need to use [multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html). – whiteshooz Jul 05 '18 at 20:18
-
While 5 TB is the maximum file size, you can also store objects with a **size of 0 bytes**. Source: [Q: How much data can I store in Amazon S3?](https://aws.amazon.com/s3/faqs/) – Norbert Jun 19 '20 at 06:47
-
It looks like the limit has changed. A single object can now be up to 5 TB:
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
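If you use one of the AWS SDKs, the multipart details are usually handled for you. As a rough sketch with boto3 (the bucket name, key, and file name below are just placeholders), setting a multipart threshold makes `upload_file` split large uploads into parts automatically:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Uploads above this threshold are split into parts automatically,
# so a single upload_file() call can move objects larger than the
# 5 GB single-PUT limit. Bucket and key names here are hypothetical.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024)  # 100 MB
s3.upload_file("big-backup.tar", "my-bucket", "backups/big-backup.tar", Config=config)
```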

- There is no limit on objects per bucket.
- There is a limit of 100 buckets per account (you need to put in a request to Amazon if you need more).
- There is no performance drop even if you store millions of objects in a single bucket.
From the docs:
There is no limit to the number of objects that can be stored in a bucket and no difference in performance whether you use many buckets or just a few. You can store all of your objects in a single bucket, or you can organize them across several buckets.
as of Aug 2016

-
The organization/key prefix of objects in the bucket can make a difference when you're working with millions of objects. See https://aws.amazon.com/blogs/aws/amazon-s3-performance-tips-tricks-seattle-hiring-event/ – Trenton Aug 27 '18 at 16:50
-
https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html says "You no longer have to randomize prefix naming for performance." But it's unclear from the documentation how S3 does indexing (hashing? b-trees?) and whether it can efficiently list objects matching a prefix. The following outdated documentation offers some hints: https://aws.amazon.com/blogs/aws/amazon-s3-performance-tips-tricks-seattle-hiring-event/ – Don Smith Jun 11 '19 at 18:52
While you can store an unlimited number of files/objects in a single bucket, listing a "directory" (prefix) in a bucket only returns up to 1,000 files/objects per request by default. To enumerate everything in a large "directory" like this, you need to make multiple paginated calls to the API.
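For example, with boto3 a paginator handles the continuation tokens for you (the bucket name and prefix below are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# list_objects_v2 returns at most 1,000 keys per request, so iterate
# over pages rather than relying on a single call.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket", Prefix="some/dir/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```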

There is no limit to the number of objects you can store in an S3 bucket; AWS advertises unlimited storage. However, there are some limitations:
- By default, customers can provision up to 100 buckets per AWS account. However, you can increase your Amazon S3 bucket limit by visiting AWS Service Limits.
- An object can be 0 bytes to 5TB.
- The largest object that can be uploaded in a single PUT is 5 gigabytes.
- For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
That being said, if you really have a lot of objects to store in an S3 bucket, consider randomizing your object key prefix to improve performance.
When your workload is a mix of request types, introduce some randomness to key names by adding a hash string as a prefix to the key name. By introducing randomness to your key names the I/O load will be distributed across multiple index partitions. For example, you can compute an MD5 hash of the character sequence that you plan to assign as the key and add 3 or 4 characters from the hash as a prefix to the key name.
More details - https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/
-- As of June 2018
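To illustrate the hash-prefix idea from the quoted guidance, here's a minimal sketch (the helper name, the example key, and the 4-character prefix length are just illustrative choices):

```python
import hashlib

def randomized_key(original_key: str, prefix_len: int = 4) -> str:
    """Prepend a few characters of the key's MD5 hash so keys spread
    across index partitions instead of all sharing one prefix."""
    digest = hashlib.md5(original_key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{original_key}"

# e.g. '2018/06/app-0001.log' -> '<4 hex chars>/2018/06/app-0001.log'
print(randomized_key("2018/06/app-0001.log"))
```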

"You can store as many objects as you want within a bucket, and write, read, and delete objects in your bucket. Objects can be up to 5 terabytes in size."
from http://aws.amazon.com/s3/details/ (as of Mar 4th 2015)

@Acyra – performance of object delivery from a single bucket would depend greatly on the names of the objects in it.
If the file names are distanced by random characters then their physical locations are spread further across the AWS hardware, but if you name everything 'common-x.jpg', 'common-y.jpg' then those objects will be stored together.
This may slow delivery of the files if you request them simultaneously, but not by enough to worry about; the greater risk is from data loss or an outage, since objects that are stored together will be lost or become unavailable together.

-
Do you have any reference for this, or is it an educated guess? I could guess that S3 objects are sharded/hashed by filename, or that something more randomising like an SHA-1/MD5 hash is used... but without source material I don't actually _know_. – fazy Sep 06 '16 at 16:25