I am trying to serve a prediction using Google Cloud ML Engine. I generated my model using fast-style-transfer and saved it in the Models section of Google Cloud ML Engine. The model takes float32 input, so I had to convert my image to that format:
image = tf.image.convert_image_dtype(im, tf.float32)
with tf.Session():
    matrix_test = image.eval()
Then I generated my request.json file:
with open("request.json", "w") as f:
    f.write(json.dumps({"image": matrix_test.tolist()}))
Then I sent the request with the following command:
gcloud ml-engine predict --model {model-name} --json-instances request.json
The following error is returned:
ERROR: (gcloud.ml-engine.predict) HTTP request failed. Response: {
"error": {
"code": 400,
"message": "Request payload size exceeds the limit: 1572864 bytes.",
"status": "INVALID_ARGUMENT"
}
}
I would like to know if I can increase this limit and, if not, whether there is a workaround. Thanks in advance!
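One idea I have tried to reason about: most of the payload size comes from serializing every float32 pixel as JSON text. If the serving graph could instead accept the image as base64-encoded bytes (ML Engine's JSON format uses a `{"b64": ...}` object for binary data), the payload would shrink by roughly an order of magnitude. This is only a sketch of the size comparison, assuming a hypothetical 256x256 RGB image; the graph would still need an input op that decodes the bytes server-side:

```python
import base64
import json

import numpy as np

# Hypothetical 256x256 RGB image with random pixel values.
im = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# Current approach: float32 values serialized as a JSON list of numbers.
float_payload = json.dumps({"image": (im / 255.0).tolist()})

# Alternative: base64-encode the raw bytes; ML Engine's JSON encoding
# marks binary data with a {"b64": ...} wrapper. The serving graph would
# have to decode this back into a float tensor itself.
b64_payload = json.dumps(
    {"image": {"b64": base64.b64encode(im.tobytes()).decode("ascii")}}
)

print(len(float_payload), len(b64_payload))
```

Am I right that switching to an encoded input like this is the usual way around the payload limit, or is there a way to raise the limit itself?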