I am trying to run a SavedModel using the TensorFlow C API.
Whenever I call TF_SessionRun, it fails on various input nodes with the same kind of error:
TF_SessionRun status: 3:Input to reshape is a tensor with 6 values, but the requested shape has 36
TF_SessionRun status: 3:Input to reshape is a tensor with 19 values, but the requested shape has 361
TF_SessionRun status: 3:Input to reshape is a tensor with 3111 values, but the requested shape has 9678321
...
As can be seen, the requested shape is always the square of the number of values in the tensor being fed, which is quite odd.
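For reference, the call sequence is roughly the following, reduced to a minimal sketch: the path, the error handling, and the run_model wrapper are placeholders rather than my literal code, but the SavedModel loading and the TF_SessionRun invocation match what I'm doing.

#include <stdio.h>
#include <tensorflow/c/c_api.h>

/* Minimal sketch: load the SavedModel, look up the signature's input/output
   nodes by name, and run the session. input_values is built elsewhere
   (see the tensor sketch further down). */
void run_model(TF_Tensor* input_values[5]) {
  TF_Status* status = TF_NewStatus();
  TF_Graph* graph = TF_NewGraph();
  TF_SessionOptions* opts = TF_NewSessionOptions();
  const char* tags[] = {"serve"};
  TF_Session* session = TF_LoadSessionFromSavedModel(
      opts, NULL, "/path/to/saved_model_dir",   /* placeholder path */
      tags, 1, graph, NULL, status);

  /* One TF_Output per signature input: node name without ":0", output index 0. */
  const char* input_names[5] = {"f1", "f2", "f3", "f4", "f5"};
  TF_Output inputs[5];
  for (int i = 0; i < 5; ++i) {
    inputs[i].oper = TF_GraphOperationByName(graph, input_names[i]);
    inputs[i].index = 0;
  }

  const char* output_names[3] = {"output_probs", "output_labels", "output_class"};
  TF_Output outputs[3];
  for (int i = 0; i < 3; ++i) {
    outputs[i].oper = TF_GraphOperationByName(graph, output_names[i]);
    outputs[i].index = 0;
  }

  TF_Tensor* output_values[3] = {NULL, NULL, NULL};
  TF_SessionRun(session, NULL,
                inputs, input_values, 5,
                outputs, output_values, 3,
                NULL, 0, NULL, status);

  if (TF_GetCode(status) != TF_OK)
    fprintf(stderr, "TF_SessionRun status: %d:%s\n",
            TF_GetCode(status), TF_Message(status));
}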
The model runs fine with the saved_model_cli command. The inputs are all either scalar DT_STRING or DT_FLOAT values; I'm not doing image recognition. Here's the signature reported by saved_model_cli show:
signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['f1'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: f1:0
    inputs['f2'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: f2:0
    inputs['f3'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: f3:0
    inputs['f4'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1)
        name: f4:0
    inputs['f5'] tensor_info:
        dtype: DT_STRING
        shape: (-1)
        name: f5:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['o1_probs'] tensor_info:
        dtype: DT_DOUBLE
        shape: (-1, 2)
        name: output_probs:0
    outputs['o1_values'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 2)
        name: output_labels:0
    outputs['predicted_o1'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: output_class:0
  Method name is: tensorflow/serving/predict
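The input tensors themselves are built roughly like this (again a simplified sketch: the helper names and values are placeholders, and this assumes the TF 1.x-style C API string encoding, i.e. an 8-byte offset table followed by TF_StringEncode'd bytes):

#include <stdint.h>
#include <string.h>
#include <tensorflow/c/c_api.h>

/* Single-element DT_STRING tensor of shape (1,) for one of the string inputs.
   TF 1.x string tensors store one 8-byte offset per element, followed by the
   TF_StringEncode'd (length-prefixed) bytes. */
TF_Tensor* make_string_tensor(const char* value, TF_Status* status) {
  size_t value_len = strlen(value);
  size_t encoded_len = TF_StringEncodedSize(value_len);
  int64_t dims[1] = {1};                      /* one element along the -1 dim */
  TF_Tensor* t = TF_AllocateTensor(TF_STRING, dims, 1, 8 + encoded_len);
  char* data = (char*)TF_TensorData(t);
  memset(data, 0, 8);                         /* offset of element 0 is 0 */
  TF_StringEncode(value, value_len, data + 8, encoded_len, status);
  return t;
}

/* Single-element DT_FLOAT tensor of shape (1,). */
TF_Tensor* make_float_tensor(float value) {
  int64_t dims[1] = {1};
  TF_Tensor* t = TF_AllocateTensor(TF_FLOAT, dims, 1, sizeof(float));
  memcpy(TF_TensorData(t), &value, sizeof(float));
  return t;
}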
Any clues about what's going on are much appreciated. The saved_model.pb file comes from AutoML; my code merely queries that model and doesn't change the graph.