I am working on an analytics API for a small ML project. I have created an endpoint that uses Flask's stream_with_context
function, like in the example below:
def post():
    # some logic
    [...]
    try:
        res = get_data_from_elastic()

        def generate():
            for hit in res:
                resp_dict = {
                    "timestamp": hit.timestamp,
                    "user_id": hit.user_id,
                    "node_id": hit.node_id,
                    "loc": hit.loc,
                    "is_target": hit.is_target
                }
                yield json.dumps(resp_dict) + '\n'

        return Response(stream_with_context(generate()), status=200, mimetype="application/json")
    # except:
    #     some exception handling
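In case a runnable reproduction helps, here is a minimal self-contained sketch of the same pattern; get_data_from_elastic is stubbed out with dummy dicts and the route name is made up, but the streaming part works the same way:

import json
from flask import Flask, Response, stream_with_context

app = Flask(__name__)

def get_data_from_elastic():
    # Stand-in for the real Elasticsearch query: just a big list of dicts
    return [{"timestamp": i, "user_id": i, "node_id": i,
             "loc": "somewhere", "is_target": False} for i in range(100000)]

@app.route("/data", methods=["POST"])
def post():
    res = get_data_from_elastic()

    def generate():
        # Stream one JSON object per line instead of building the whole body in memory
        for hit in res:
            yield json.dumps(hit) + "\n"

    return Response(stream_with_context(generate()), status=200, mimetype="application/json")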
None of this is an exact extract from my code, but the generator works the same way. I use Python requests to connect to the API, with the following code:
response = requests.post(analytic_api_url,
                         headers={'Authorization': token},
                         data={'since': since, 'till': till})
When I connect to the API and download small amounts of data at once, everything works fine. Unfortunately, when I try to download a bigger amount of data at once, I get the following error:
ChunkedEncodingError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read))
As the error is about chunked encoding, I've checked things like setting the Transfer-Encoding: chunked header in the server's response, and I've also tried the stream=True parameter of the requests library (roughly as in the sketch below) - none of these solutions worked.
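For reference, this is approximately what I tried on the client side with stream=True; analytic_api_url, token, since and till are the same placeholders as above, and iter_lines is just how I consumed the response:

import json
import requests

response = requests.post(analytic_api_url,
                         headers={'Authorization': token},
                         data={'since': since, 'till': till},
                         stream=True)

for line in response.iter_lines():
    if line:  # skip empty keep-alive lines
        record = json.loads(line)
        # process record ...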
How should I deal with this problem? Should I explicitly set some other Transfer-Encoding header, or create another generator for my API?
Thank you for your help!