It's very unlikely the 503 is because S3 is down; it's almost never 'down'. More likely your account is being throttled because you're making too many requests in too short a period.
You should either slow down your requests, if you control the rate, or I would recommend picking better keys, i.e. keys that don't all start with the same prefix; a wide spread of key names lets S3 distribute the workload across partitions.
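If you go the slow-down route, the usual approach is to retry with exponential backoff. Here's a minimal sketch in Python, assuming boto3 and that the throttling 503s carry S3's 'SlowDown' error code; the bucket, key, and function name are placeholders (note boto3 also retries a few times internally, this just makes the backoff explicit):

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def put_with_backoff(bucket, key, body, max_attempts=5):
    """PUT an object, backing off exponentially when S3 throttles us."""
    for attempt in range(max_attempts):
        try:
            return s3.put_object(Bucket=bucket, Key=key, Body=body)
        except ClientError as err:
            # S3 reports throttling as a 503 with error code 'SlowDown'.
            if err.response["Error"]["Code"] != "SlowDown":
                raise
            if attempt == max_attempts - 1:
                raise
            # Sleep 1s, 2s, 4s, ... plus jitter so parallel clients
            # don't all retry in lockstep.
            time.sleep(2 ** attempt + random.random())
```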
From Jeff Barr's blog post:
Further, keys in S3 are partitioned by prefix.
As we said, S3 has automation that continually looks for areas of the
keyspace that need splitting. Partitions are split either due to
sustained high request rates, or because they contain a large number
of keys (which would slow down lookups within the partition). There is
overhead in moving keys into newly created partitions, but with
request rates low and no special tricks, we can keep performance
reasonably high even during partition split operations. This split
operation happens dozens of times a day all over S3 and simply goes
unnoticed from a user performance perspective. However, when request
rates significantly increase on a single partition, partition splits
become detrimental to request performance. How, then, do these heavier
workloads work over time? Smart naming of the keys themselves!
We frequently see new workloads introduced to S3 where content is
organized by user ID, or game ID, or other similar semi-meaningless
identifier. Often these identifiers are incrementally increasing
numbers, or date-time constructs of various types. The unfortunate
part of this naming choice where S3 scaling is concerned is two-fold:
First, all new content will necessarily end up being owned by a single
partition (remember the request rates from above…). Second, all the
partitions holding slightly older (and generally less ‘hot’) content
get cold much faster than other naming conventions, effectively
wasting the available operations per second that each partition can
support by making all the old ones cold over time.
The simplest trick that makes these schemes work well in S3 at nearly
any request rate is to simply reverse the order of the digits in this
identifier (use seconds of precision for date or time-based
identifiers). These identifiers then effectively start with a random
number – and a few of them at that – which then fans out the
transactions across many potential child partitions. Each of those
child partitions scales close enough to linearly (even with some
content being hotter or colder) that no meaningful operations per
second budget is wasted either. In fact, S3 even has an algorithm to
detect this parallel type of write pattern and will automatically
create multiple child partitions from the same parent simultaneously –
increasing the system’s operations per second budget as request heat
is detected.
https://aws.amazon.com/blogs/aws/amazon-s3-performance-tips-tricks-seattle-hiring-event/
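To make the digit-reversal trick from the quote concrete, here's a short sketch; the key layout and function name are my own invention for illustration, not anything S3-specific:

```python
def spread_key(sequential_id, filename):
    # Reverse the digits of an incrementing ID so consecutive uploads
    # start with different characters and fan out across partitions.
    return f"{str(sequential_id)[::-1]}/{filename}"

# Sequential IDs 100310, 100311, 100312 all share the prefix '1003...',
# so they'd pile onto one partition. Reversed, they spread immediately:
print(spread_key(100310, "photo.jpg"))  # 013001/photo.jpg
print(spread_key(100311, "photo.jpg"))  # 113001/photo.jpg
print(spread_key(100312, "photo.jpg"))  # 213001/photo.jpg
```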