I am working on a project where the number of read/write requests grows with the size of the data. While testing with 50 GB of data, we make a very high number of read/write requests to S3, and S3 starts returning a "Please reduce your request rate" error. Reducing the number of requests is not an option for us, so is there a smarter way to use S3 that avoids this problem? Any help will be appreciated.
- What would you say is your current rate of `GET` and `PUT` (and any other) requests per second? – Michael - sqlbot Jan 25 '19 at 17:26
- This is a possible duplicate of https://stackoverflow.com/questions/52443839/s3-what-exactly-is-a-prefix-and-what-ratelimits-apply/52445252. – ingomueller.net Oct 30 '19 at 09:52
1 Answer
You need to distribute the load across many S3 prefixes.
Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket. It is simple to increase your read or write performance exponentially. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second.
Check the Amazon S3 performance guidelines for details.
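
To illustrate the prefix idea, here is a minimal boto3 sketch (not from the original answer; the bucket name, shard count, and key scheme are assumptions) that hashes each object key into one of several prefixes so the request load spreads across them:

```python
# Minimal sketch, assuming a hypothetical bucket and a fixed shard count.
# Each prefix gets its own per-prefix request budget, so spreading keys
# across prefixes multiplies the achievable request rate.
import hashlib

import boto3

s3 = boto3.client("s3")

BUCKET = "my-data-bucket"   # hypothetical bucket name
NUM_PREFIXES = 16           # each prefix: ~3,500 writes / 5,500 reads per second

def prefixed_key(key: str) -> str:
    """Derive a stable prefix from the key so reads can recompute it later."""
    shard = int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_PREFIXES
    return f"shard-{shard:02d}/{key}"

def put_object(key: str, body: bytes) -> None:
    s3.put_object(Bucket=BUCKET, Key=prefixed_key(key), Body=body)

def get_object(key: str) -> bytes:
    return s3.get_object(Bucket=BUCKET, Key=prefixed_key(key))["Body"].read()
```

Because the shard is derived deterministically from the key, readers and writers agree on the prefix without any lookup table; with 16 prefixes the nominal ceiling becomes roughly 16 × 3,500 writes and 16 × 5,500 reads per second, per the figures quoted above.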

Igor K.
- Tried it, but we are still facing the same issue. Our current implementation is already designed the way you describe. – hashed_name Jan 28 '19 at 10:19
- You may need to split your data across more prefixes, and make sure the prefixes are truly random. It would help if you shared more detail about the request rate at which you are being throttled while processing the 50 GB of data, and how that 50 GB is split across prefixes. – Igor K. Jan 28 '19 at 11:16
- Random prefixes have not been required since 2018. See https://aws.amazon.com/about-aws/whats-new/2018/07/amazon-s3-announces-increased-request-rate-performance/ – kreuzerkrieg Oct 17 '19 at 12:55