
An answer here (Size of raw response in bytes) says:

Just take the len() of the content of the response:

>>> response = requests.get('https://github.com/')
>>> len(response.content)
51671

However, doing that does not give an accurate content length. For example, check out this Python code:

import sys
import requests

def processUrl(url):
    try:
        r = requests.get(url)
        print("Correct Content Length: "+r.headers['Content-Length'])
        print("bytes of r.text       : "+str(sys.getsizeof(r.text)))
        print("bytes of r.content    : "+str(sys.getsizeof(r.content)))
        print("len r.text            : "+str(len(r.text)))
        print("len r.content         : "+str(len(r.content)))
    except Exception as e:
        print(str(e))

# this URL returns a Content-Length header; we will use it to check whether the content length we calculate matches.
processUrl("https://stackoverflow.com")

If we manually calculate the content length and compare it to the value in the header, the result we get is much larger:

Correct Content Length: 51504
bytes of r.text       : 515142
bytes of r.content    : 257623
len r.text            : 257552
len r.content         : 257606

Why does len(r.content) not return the correct content length? And how can we manually calculate it accurately if the header is missing?

Jonathan Laliberte
  • `sys.getsizeof` does **not produce the length of the data**. Instead, it gives you the memory footprint of the internal Python data structures, which is related but far from the same thing. Do not use `sys.getsizeof` in this context. See [What is the difference between len() and sys.getsizeof() methods in python?](//stackoverflow.com/q/17574076) – Martijn Pieters Jun 12 '18 at 21:01

1 Answer


The Content-Length header reflects the length of the response body as it was sent over the network. That's not the same as the length of the text or content attributes, because the response could be compressed; requests decompresses the response for you.
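As an aside: if you only need the Content-Length header and the received body to agree, you can ask the server not to compress at all by overriding the Accept-Encoding header. This is a sketch that assumes the server honors identity (most do, but it is not guaranteed):

import requests

# requests normally advertises gzip/deflate support; overriding the
# Accept-Encoding header asks the server to send the body uncompressed
r = requests.get('https://stackoverflow.com',
                 headers={'Accept-Encoding': 'identity'})
print(r.headers.get('Content-Encoding'))  # None or 'identity'
# with no compression applied, these line up (when the header is present)
print(r.headers.get('Content-Length'), len(r.content))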

You'd have to bypass a lot of internal plumbing to get the original, compressed, raw content, and then access some more internals if you want the response object to still work correctly. The 'easiest' method is to enable streaming, then read from the raw socket:

import requests
from io import BytesIO

url = 'https://stackoverflow.com'
r = requests.get(url, stream=True)
# read directly from the raw urllib3 connection; this yields the
# still-compressed bytes exactly as they came off the wire
raw_content = r.raw.read()
content_length = len(raw_content)
# replace the internal file object so the response can serve the data again
r.raw._fp = BytesIO(raw_content)

Demo:

>>> import requests
>>> from io import BytesIO
>>> url = "https://stackoverflow.com"
>>> r = requests.get(url, stream=True)
>>> r.headers['Content-Encoding'] # a compressed response
'gzip'
>>> r.headers['Content-Length']   # the raw response contains 52055 bytes of compressed data
'52055'
>>> r.headers['Content-Type']     # we are served UTF-8 HTML data
'text/html; charset=utf-8'
>>> raw_content = r.raw.read()
>>> len(raw_content)              # the raw content body length
52055
>>> r.raw._fp = BytesIO(raw_content)
>>> len(r.content)    # the decompressed binary content, byte count
258719
>>> len(r.text)       # the Unicode content decoded from UTF-8, character count
258658

This reads the full response into memory, so don't use this if you expect large responses! In that case, you could instead use shutil.copyfileobj() to copy the data from the r.raw file object to a spooled temporary file (which switches to an on-disk file once a certain size is reached), take the size of that file, then stuff that file onto r.raw._fp.

A function that adds a Content-Length header to any response that is missing that header would look like this:

import requests
import shutil
import tempfile

def ensure_content_length(
    url, *args, method='GET', session=None, max_size=2**20,  # 1 MiB
    **kwargs
):
    kwargs['stream'] = True
    session = session or requests.Session()
    r = session.request(method, url, *args, **kwargs)
    if 'Content-Length' not in r.headers:
        # stream content into a temporary file so we can get the real size
        spool = tempfile.SpooledTemporaryFile(max_size)
        shutil.copyfileobj(r.raw, spool)
        r.headers['Content-Length'] = str(spool.tell())
        spool.seek(0)
        # replace the original socket with our temporary file
        r.raw._fp.close()
        r.raw._fp = spool
    return r

This accepts an existing session, and lets you specify the request method too. Adjust max_size as needed for your memory constraints. Demo on https://github.com, which lacks a Content-Length header:

>>> r = ensure_content_length('https://github.com/')
>>> r
<Response [200]>
>>> r.headers['Content-Length']
'14490'
>>> len(r.content)
54814

Note that if there is no Content-Encoding header present, or the value of that header is set to identity, and Content-Length is available, then you can rely on Content-Length being the full size of the response, because then no compression was applied.
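Expressed as a small helper (the function name here is my own, purely illustrative):

def content_length_is_exact(response):
    # illustrative helper, not part of requests: the Content-Length
    # header equals len(response.content) only when no compression
    # was applied to the body
    encoding = response.headers.get('Content-Encoding', 'identity')
    return 'Content-Length' in response.headers and encoding == 'identity'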

As a side note: you should not use sys.getsizeof() if what you are after is the length of a bytes or str object (the number of bytes or characters in that object). sys.getsizeof() gives you the internal memory footprint of a Python object, which covers more than just the number of bytes or characters in that object. See What is the difference between len() and sys.getsizeof() methods in python?
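A quick illustration of the difference (the exact sys.getsizeof() figures vary by Python version and platform):

import sys

data = b'hello'
print(len(data))            # 5, the number of bytes in the object
print(sys.getsizeof(data))  # larger, includes the CPython object header overhead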

Martijn Pieters
  • Ahh, that makes a lot more sense man, thank you. So len of `r.raw.read()` was what I needed. I tried raw earlier but didn't think to use `read()` with it. The link on the difference between `len` and `sys.getsizeof` is also very helpful. Cheers. – Jonathan Laliberte Jun 12 '18 at 21:12
  • @JonathanLaliberte: the internals of `requests` are such that there is no option to disable the decompression there. – Martijn Pieters Jun 12 '18 at 21:55