2

I have successfully run the prediction using the gcloud command line. Now I am trying to run the prediction from a Python script, but I am facing this error:

Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="assertion failed: [Unable to decode bytes as JPEG, PNG, GIF, or BMP] [[Node: map/while/decode_image/cond_jpeg/cond_png/cond_gif/Assert_1/Assert = Assert[T=[DT_STRING], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](map/while/decode_image/cond_jpeg/cond_png/cond_gif/is_bmp, map/while/decode_image/cond_jpeg/cond_png/cond_gif/Assert_1/Assert/data_0)]]")

import base64
import json

from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
from googleapiclient import errors

PROJECTID = 'ai-assignment-185606'
projectID = 'projects/{}'.format(PROJECTID)
modelName = 'food_model'
modelID = '{}/models/{}/versions/{}'.format(projectID, modelName, 'v3')

scopes = ['https://www.googleapis.com/auth/cloud-platform']
credentials = GoogleCredentials.get_application_default()
ml = discovery.build('ml', 'v1', credentials=credentials)

name = "7_5790100434_e2c3dbfdba.jpg"
with open("images/" + name, "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
    row = json.dumps({'inputs': {'b64': encoded_string}})

request_body = {"instances": row}

request = ml.projects().predict(name=modelID, body=request_body)
try:
    response = request.execute()
except errors.HttpError as err:
    print(err._get_reason())

if 'error' in response:
    raise RuntimeError(response['error'])

print(response)

This answer suggests that the TensorFlow versions must be the same. I have checked the versions, which are 1.4 and 1.4.1.

Sam
  • Can you post the full command line, output, and mark the line in your code that corresponds to where the exception is raised because there is no way for us to know. – Oliver Apr 02 '18 at 12:41
  • Full command line means gcloud command? I run this python script without arguments. This error is actually the response returned after the line `response = request.execute()` – Sam Apr 03 '18 at 09:08

2 Answers

3

According to https://cloud.google.com/ml-engine/docs/v1/predict-request, the request body's `instances` field must be a list. Each element can be a single value, a JSON object, or a (possibly nested) list:

{
  "instances": [
    <value>|<simple/nested list>|<object>,
    ...
  ]
}
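As a quick illustration of that schema (all values here are made up), each of the three allowed element forms serializes like this:

```python
import json

# Illustrative only: each element of "instances" may be a plain value,
# a (possibly nested) list, or a JSON object -- these values are made up.
body = {
    "instances": [
        3.5,                  # a single value
        [1, 2, 3],            # a simple/nested list
        {"b64": "aGVsbG8="},  # an object
    ]
}
print(json.dumps(body))
```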

Instead, your `row` is a text string representing JSON (i.e. the recipient would have to call `json.loads(row)` to recover the object). Try this instead:

instances = []
with open("images/"+name, "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
    instances.append({'b64': encoded_string})

request_body = {"instances": instances}
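To see why the original request failed, note that `json.dumps` returns a `str`, while the API client expects a Python list that it will serialize itself (the base64 string here is just a placeholder):

```python
import json

# json.dumps produces a string, not a Python object
row = json.dumps({'inputs': {'b64': 'aGVsbG8='}})
print(type(row).__name__)        # -> str

# the predict body expects a list of instances instead
instances = [{'b64': 'aGVsbG8='}]
print(type(instances).__name__)  # -> list
```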
Oliver
  • RuntimeError: Invalid request. The service expects the request to be a valid JSON object with a list-valued attribute called `instances`, i.e. `{"instances": [...]}`. The received request was: "{\"instances\": {\"inputs\": {\"b64\": \".....\"}}}" – Sam Apr 10 '18 at 13:32
  • @samtew i updated answer based on your comment, let me know if still doesn't work – Oliver Apr 12 '18 at 17:48
1

As per the documentation here, it looks like the format should be the following:
{"instances": [{"b64": "X5ad6u"}, {"b64": "IA9j4nx"}]}
But with that I got the following error:
RuntimeError: Prediction failed: unknown error.

I had to add image_bytes to make it work, as per this post. Here is how it looks:
{"instances": [{"image_bytes": {"b64": encoded_string}, "key": "0"}]}

Code snippet below:

import base64

from flask import Flask, request, jsonify
from googleapiclient import discovery

app = Flask(__name__)
# `credentials` and save_image() are defined elsewhere in the application


@app.route('/predict', methods=['POST'])
def predict():
    if 'images' in request.files:
        file = request.files['images']
        image_path = save_image(file)
        # Convert the image to a base64 string
        with open(image_path, mode="rb") as image_file:
            encoded_string = base64.b64encode(image_file.read()).decode('utf-8')

        service = discovery.build('ml', 'v1', credentials=credentials)
        name = 'projects/{}/models/{}'.format('my-project-name', 'my-model-name')
        name += '/versions/{}'.format('v1')

        response = service.projects().predict(
            name=name,
            body={"instances": [{"image_bytes": {"b64": encoded_string}, "key": "0"}]}
        ).execute()

        if 'error' in response:
            raise RuntimeError(response['error'])

        print(response['predictions'])
        return jsonify({'result': response['predictions']})
    else:
        return jsonify({'result': 'Since Image tag is empty, cannot predict'})
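The request body above can be sanity-checked locally before calling the service; this sketch uses a made-up byte string in place of a real image file and only verifies that the payload serializes and round-trips as expected:

```python
import base64
import json

# made-up bytes standing in for a real JPEG file
fake_image = b'\xff\xd8\xff\xe0 not a real jpeg'
encoded_string = base64.b64encode(fake_image).decode('utf-8')

body = {"instances": [{"image_bytes": {"b64": encoded_string}, "key": "0"}]}

# the b64 field must decode back to the original bytes
decoded = base64.b64decode(body["instances"][0]["image_bytes"]["b64"])
print(decoded == fake_image)  # -> True
print(json.dumps(body)[:14])
```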
RC_02