Following up on my comment:
You are chunking the upload (POST or GET) request and limiting it as you go on the server side. I would do the limiting on the upload side (the side sending the data), not the response side, because that is the traffic you are trying to control. So I would suggest this is a Python question, since your upload handler is in Python.
Notice:
- When a file is selected in the upload form, read the payload size. You grab that size first.
- Then you limit based on that initial size.
- You read the payload in chunks, recording where you left off.
- Add a sleep between chunks to cap the transfer speed.
- Then call the function back with the last chunk position and read position to continue.
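The steps above can be sketched as an async generator that paces its output with a sleep; aiohttp can stream such a generator as the body of a POST by passing it as `data=`. This is a minimal sketch, not aiohttp API: the function name, the rate, and the demo file are my own choices for illustration.

```python
import asyncio
import os
import tempfile
import time

async def throttled_file_chunks(path, rate_bytes_per_sec, chunk_size=8192):
    """Yield chunks of `path` no faster than rate_bytes_per_sec.

    aiohttp can stream an async generator passed as `data=` in a POST request.
    """
    sent = 0
    start = time.monotonic()
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)  # the file handle remembers where we left off
            if not chunk:
                break
            sent += len(chunk)
            target = sent / rate_bytes_per_sec  # seconds this many bytes should have taken
            elapsed = time.monotonic() - start
            if target > elapsed:  # ahead of schedule, so sleep to hold the rate
                await asyncio.sleep(target - elapsed)
            yield chunk

# Hypothetical usage against a real server:
#   async with aiohttp.ClientSession() as session:
#       await session.post(url, data=throttled_file_chunks('big.bin', 512 * 1024))

# Demo with a throwaway 20 KB file and a 200 KB/s cap (roughly 0.1 s):
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'a' * 20_000)

async def demo():
    total = 0
    async for chunk in throttled_file_chunks(tmp.name, rate_bytes_per_sec=200_000):
        total += len(chunk)
    return total

total = asyncio.run(demo())
os.unlink(tmp.name)
print(total)  # 20000
```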
The example below limits the download (GET) side; the same approach can be applied to POST.
aiohttp Python example (adapted from https://github.com/aio-libs/aiohttp/issues/2638):
```python
import asyncio

import aiohttp


async def read(self, n: int = -1) -> bytes:
    """Read up to `n` bytes of the response payload.

    If `n` is -1 (default), read the entire payload.
    Called as `read(response, n)`, so `self` is the ClientResponse.
    """
    if self._body is None:
        try:
            if n == -1:
                self._body = await self.content.read()
            else:
                chunks = []
                i = 0
                while i < n:
                    chunk = await self.content.read(n=n - i)
                    if not chunk:
                        break
                    chunks.append(chunk)
                    i += len(chunk)
                self._body = b''.join(chunks)
            for trace in self._traces:
                await trace.send_response_chunk_received(self._body)
        except BaseException:
            self.close()
            raise
    elif self._released:
        raise aiohttp.ClientConnectionError('Connection closed')
    return self._body


async def f(url, n=-1):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            content = await read(response, n=n)
            print(len(content))


URL = 'https://upload.wikimedia.org/wikipedia/commons/thumb/7/71/2010-kodiak-bear-1.jpg/320px-2010-kodiak-bear-1.jpg'
asyncio.run(f(URL, 10_000_000))  # cap at 10 MB
asyncio.run(f(URL, 100))         # cap at 100 bytes
asyncio.run(f(URL))              # read the whole payload
```
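Note that the `read` above only caps the total size; it does not pace the transfer. To limit speed, insert a sleep in the same chunk loop. A minimal sketch of that idea, where `FakeStream` and `read_limited` are my own names (the fake stands in for `response.content`, which has the same async `read(n)` shape):

```python
import asyncio
import time

class FakeStream:
    """Stand-in for aiohttp's response.content, for an offline demo."""
    def __init__(self, data):
        self._data = data
        self._pos = 0

    async def read(self, n):
        chunk = self._data[self._pos:self._pos + n]
        self._pos += len(chunk)
        return chunk

async def read_limited(stream, n, rate, chunk_size=1024):
    """Read up to `n` bytes from `stream`, pacing reads to at most `rate` bytes/sec."""
    chunks, total = [], 0
    start = time.monotonic()
    while total < n:
        chunk = await stream.read(min(chunk_size, n - total))
        if not chunk:
            break
        chunks.append(chunk)
        total += len(chunk)
        target = total / rate  # seconds this many bytes should have taken
        elapsed = time.monotonic() - start
        if target > elapsed:   # ahead of schedule, so sleep to hold the rate
            await asyncio.sleep(target - elapsed)
    return b''.join(chunks)

# 4 KB at 8 KB/s takes roughly half a second
body = asyncio.run(read_limited(FakeStream(b'x' * 4096), n=4096, rate=8192))
print(len(body))  # 4096
```

Swapping `FakeStream` for a real `response.content` inside `f` above gives you a size cap and a speed cap at once.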