I trained an object detection model on ML Engine and exported it by invoking:
python object_detection/export_inference_graph.py \
--input_type encoded_image_string_tensor ....
I then successfully tested prediction locally by invoking:
gcloud ml-engine local predict --model-dir ../saved_model --json-instances=inputs.json --runtime-version=1.2
where inputs.json contains:
{"b64": "base64 encoded png image"}
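For context, the instances file was built roughly like this (a sketch; the helper name and the fake PNG bytes are illustrative, not part of the actual pipeline):

```python
import base64
import json

def make_instance(image_bytes):
    """Wrap raw image bytes in the {"b64": ...} instance format that
    input_type=encoded_image_string_tensor expects."""
    return {"b64": base64.b64encode(image_bytes).decode("utf-8")}

# Illustrative payload; in practice this would be the contents of a real PNG.
fake_png = b"\x89PNG illustrative bytes"
line = json.dumps(make_instance(fake_png))
# --json-instances takes one JSON instance per line of the file.
```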
When I try to create a model version on ML Engine using the following command:
gcloud ml-engine versions create ${YOUR_VERSION} --model ${YOUR_MODEL} --origin=${YOUR_GCS_BUCKET}/saved_model --runtime-version=1.2
it fails with the following message:
ERROR: (gcloud.ml-engine.versions.create) Bad model detected with error: "Error loading the model: Could not load model. "
Does ML Engine not support model versions exported with input_type=encoded_image_string_tensor,
and how can I obtain more details about the error?
Note that creating a model version on ML Engine from a model exported with input_type=image_tensor works fine.