I have successfully run the prediction using the gcloud command line. Now I am trying to run the same prediction from a Python script, but I am getting the following error:
Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="assertion failed: [Unable to decode bytes as JPEG, PNG, GIF, or BMP] [[Node: map/while/decode_image/cond_jpeg/cond_png/cond_gif/Assert_1/Assert = Assert[T=[DT_STRING], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](map/while/decode_image/cond_jpeg/cond_png/cond_gif/is_bmp, map/while/decode_image/cond_jpeg/cond_png/cond_gif/Assert_1/Assert/data_0)]]")
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors
import base64
import json

PROJECTID = 'ai-assignment-185606'
projectID = 'projects/{}'.format(PROJECTID)
modelName = 'food_model'
modelID = '{}/models/{}/versions/{}'.format(projectID, modelName, 'v3')
scopes = ['https://www.googleapis.com/auth/cloud-platform']

credentials = GoogleCredentials.get_application_default()
ml = discovery.build('ml', 'v1', credentials=credentials)

with open('1.jpg', 'rb') as f:
    b64_x = f.read()

# Read and base64-encode the image that should be sent for prediction
name = "7_5790100434_e2c3dbfdba.jpg"
with open("images/" + name, "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read()).decode('utf-8')

# Build the request body and call the online prediction API
row = json.dumps({'inputs': {'b64': encoded_string}})
request_body = {"instances": row}
request = ml.projects().predict(name=modelID, body=request_body)

try:
    response = request.execute()
except errors.HttpError as err:
    print(err._get_reason())

if 'error' in response:
    raise RuntimeError(response['error'])

print(response)
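From my reading of the Cloud ML Engine online prediction docs, instances is supposed to be a plain Python list of instance objects (the discovery client serializes the body itself), with the b64 value nested under whatever input key the serving signature defines. Below is a minimal sketch of that reading, reusing ml, modelID and encoded_string from the script above and assuming the signature key is 'inputs' as in my export; I have not confirmed that this is what fixes the error:

# Sketch: instances as a list of dicts instead of a pre-serialized JSON string
instance = {'inputs': {'b64': encoded_string}}
request_body = {"instances": [instance]}
request = ml.projects().predict(name=modelID, body=request_body)
response = request.execute()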
This answer suggests that the TensorFlow version used to export the model must match the runtime version of the deployed model. I have checked both, and they are 1.4 and 1.4.1.
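For completeness, this is roughly how I checked the two versions. It is only a sketch: it reuses the ml client built above and assumes the deployed version resource reports a runtimeVersion field.

import tensorflow as tf
print(tf.__version__)  # TensorFlow used locally to export the model: 1.4.1

# Runtime version of the deployed model version (expected to print 1.4)
version_info = ml.projects().models().versions().get(name=modelID).execute()
print(version_info.get('runtimeVersion'))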