
I have more than 500,000 objects in S3. I am trying to get the size of each object. I am using the following Python code for that:

import boto3

bucket = 'bucket'
prefix = 'prefix'

contents = boto3.client('s3').list_objects_v2(Bucket=bucket, MaxKeys=1000, Prefix=prefix)["Contents"]

for c in contents:
    print(c["Size"])

But it just gave me the sizes of the first 1000 objects. Based on the documentation, a single call can't return more than 1000 keys. Is there any way I can get more than that?

MatthewMartin
tahir siddiqui
    Alternatively, you could use [Amazon S3 Inventory](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-inventory.html) to obtain a daily listing of the bucket. – John Rotenstein Jan 22 '19 at 19:40
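
A minimal sketch of enabling such a daily inventory with boto3, assuming hypothetical bucket names and a made-up configuration Id (only the `put_bucket_inventory_configuration` call itself is the real API):

import boto3

s3 = boto3.client('s3')

# 'bucket' is the source bucket; the destination ARN and Id are placeholders.
s3.put_bucket_inventory_configuration(
    Bucket='bucket',
    Id='daily-size-inventory',
    InventoryConfiguration={
        'Id': 'daily-size-inventory',
        'IsEnabled': True,
        'IncludedObjectVersions': 'Current',
        'Destination': {
            'S3BucketDestination': {
                'Bucket': 'arn:aws:s3:::inventory-destination-bucket',
                'Format': 'CSV',
            }
        },
        'Schedule': {'Frequency': 'Daily'},
        'OptionalFields': ['Size'],  # include each object's size in the report
    },
)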

3 Answers


The built-in boto3 Paginator class is the easiest way to work around the 1000-key limit of list_objects_v2. It can be used as follows:

import boto3

s3 = boto3.client('s3')

paginator = s3.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket='bucket', Prefix='prefix')

for page in pages:
    # An empty listing has no 'Contents' key, so fall back to an empty list.
    for obj in page.get('Contents', []):
        print(obj['Size'])

For more details: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Paginator.ListObjectsV2
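
If only the sizes are needed, the page iterator also supports JMESPath filtering via its search() method; a minimal sketch, reusing the same placeholder bucket and prefix:

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket='bucket', Prefix='prefix')

# Pages without a 'Contents' key yield None from search(), so filter those out.
total_size = sum(size for size in pages.search('Contents[].Size') if size is not None)
print(total_size)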

J Tasker

  • This was exactly what I needed to eval the current list of S3 buckets I have access to. I was wondering why they all had 1000 in them haha. – james-see Jan 05 '21 at 21:30
  • This is the ANSWER! – dvallejo May 20 '21 at 21:04
  • Is there an upper limit? Since `pages` is held in memory, what happens if there is a very large number of objects? – ddttdd Jul 02 '21 at 19:00
  • I am curious about the possible difference between using `list_objects_v2` and `list_objects` when using the pagination wrapper. Does `_v2` offer benefits in this case? – Shan Dou Aug 04 '21 at 22:33
  • @ddttdd `pages` is an iterator and retrieves only one page at a time by following the continuation tokens (essentially a linked list), stopping when a page is retrieved that has no next continuation token. – Josh Bode Jul 04 '22 at 11:27
  • I want to retrieve all objects that were modified in the last 24 hours; how can I add this filter to the paginated result? There might be more than 10,000 objects in my bucket modified in the last 24 hours. – Kush Patel Jul 13 '22 at 11:20
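
On the last comment's question: each listing entry already carries a LastModified timestamp, so a 24-hour filter can be applied client-side while paginating. A minimal sketch with placeholder bucket and prefix names:

from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client('s3')
cutoff = datetime.now(timezone.utc) - timedelta(hours=24)

paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='bucket', Prefix='prefix'):
    for obj in page.get('Contents', []):
        # LastModified is a timezone-aware datetime in each listing entry.
        if obj['LastModified'] >= cutoff:
            print(obj['Key'], obj['Size'])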

Pass the NextContinuationToken returned in each response as the ContinuationToken parameter of the subsequent call, until the IsTruncated value returned in the response is false.

This can be factored into a neat generator function:

def get_all_s3_objects(s3, **base_kwargs):
    continuation_token = None
    while True:
        list_kwargs = dict(MaxKeys=1000, **base_kwargs)
        if continuation_token:
            list_kwargs['ContinuationToken'] = continuation_token
        response = s3.list_objects_v2(**list_kwargs)
        yield from response.get('Contents', [])
        if not response.get('IsTruncated'):  # At the end of the list?
            break
        continuation_token = response.get('NextContinuationToken')

for file in get_all_s3_objects(boto3.client('s3'), Bucket=bucket, Prefix=prefix):
    print(file['Size'])
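
Because the generator yields one object at a time, aggregates over very large buckets stay memory-cheap; for example, the total size under the prefix (reusing the bucket and prefix variables from the question):

total_bytes = sum(
    obj['Size']
    for obj in get_all_s3_objects(boto3.client('s3'), Bucket=bucket, Prefix=prefix)
)
print(total_bytes)
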
ac24
AKX

If you don't NEED to use boto3.client, you can use boto3.resource to get a complete list of your files:

import boto3

s3r = boto3.resource('s3')
bucket = s3r.Bucket('bucket_name')
files_in_bucket = list(bucket.objects.all())

Then, to get the sizes:

sizes = [f.size for f in files_in_bucket]

Depending on the size of your bucket this might take a minute.
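
If the bucket is very large, materialising the whole list can be avoided: the collection is lazy and can also be restricted with a prefix filter, as the comments below note. A minimal sketch with placeholder names:

import boto3

s3r = boto3.resource('s3')
bucket = s3r.Bucket('bucket_name')

# The collection pages through the API lazily, one page at a time.
total = sum(obj.size for obj in bucket.objects.filter(Prefix='prefix'))
print(total)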

seeiespi
  • Is there an advantage to using resource? – crypdick Mar 14 '20 at 00:43
  • There are some methods that can be found in resource and not in client, and vice versa. However, in my experience they share a lot of the same functionality. You might be able to get the size of a bucket using client, but I didn't find another way that was similar to this. – seeiespi Mar 17 '20 at 16:52
  • I am listing the objects in my path like this: `s3_resource = boto3.resource('s3'); source_bucket_obj = s3_resource.Bucket(source_bucket); source_objects = source_bucket_obj.objects.filter(Prefix=source_key)`. Are you saying that this will list all the files, even if there are more than 1000? – Melissa Guo Mar 19 '20 at 02:24
  • @MelissaGuo in my experience the `list(bucket.objects.all())` method returns all the files, even if it's more than 1,000. Has that not been your experience? – seeiespi Apr 02 '20 at 15:54
  • Resources aren't thread safe, so if you're multi-threading you want to make sure to instantiate the resource individually: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/resources.html – Connor Dibble Jun 05 '20 at 21:55
  • After about 3000 objects listed, this fails. – saviour123 May 14 '21 at 15:14
  • I used this with `objects.filter` to restrict the objects to a particular Prefix. – Rocco Sep 09 '22 at 15:56
  • @saviour123 Really? Isn't that a local limitation perhaps? – jtlz2 Feb 27 '23 at 11:17