We have a model exported from CustomVision.ai (it supports exporting to TensorFlow .pb, TensorFlow Lite .tflite, and SavedModel formats).
We would like to integrate this model into an existing app that uses the TFLite Object Detection API, which expects the following input and output arrays:
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
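For context, the app reads those four postprocessed tensors with the TFLite Interpreter, roughly like this (a sketch; "ssd_detect.tflite" is just a stand-in name for the model the app currently ships with):

import numpy as np
import tensorflow as tf

# Load whatever detection model the app currently uses.
interpreter = tf.lite.Interpreter(model_path="ssd_detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one dummy frame just to show the I/O contract.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

# The four TFLite_Detection_PostProcess outputs, in order:
boxes   = interpreter.get_tensor(output_details[0]["index"])  # [1, N, 4] ymin, xmin, ymax, xmax
classes = interpreter.get_tensor(output_details[1]["index"])  # [1, N] class indices
scores  = interpreter.get_tensor(output_details[2]["index"])  # [1, N] confidences
count   = interpreter.get_tensor(output_details[3]["index"])  # [1] number of valid detections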
The model seems to have very different inputs and outputs:
Inputs array: Placeholder
name: Placeholder
type: float32[1,416,416,3]
quantization: 0 ≤ q ≤ 255
location: 0
Outputs array: model_outputs
name: model_outputs
type: float32[1,13,13,45]
location: 44
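Dumping the node names straight from the frozen graph confirms this (a minimal sketch, assuming a TF 1.x-style frozen GraphDef):

import tensorflow as tf

# Parse the frozen graph and list every node, to see what the input
# and output tensors are actually called.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.op, node.name)
# The only Placeholder op is named "Placeholder", and the graph ends at
# "model_outputs" -- nothing named normalized_input_image_tensor.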
If I run the "tflite_convert" command
tflite_convert \
--graph_def_file=model.pb \
--output_file=detect.tflite \
--input_shapes=1,416,416,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=FLOAT \
--allow_custom_ops
I get:
ValueError: Invalid tensors 'normalized_input_image_tensor' were found.
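My guess is the error just means those tensors don't exist in this graph. Converting with the graph's own names should at least get past the ValueError (untested sketch, using the Python converter API instead of the CLI), but then I'd presumably end up with the raw float32[1,13,13,45] output rather than the four postprocessed arrays the app expects:

import tensorflow as tf

# Untested: use the names the frozen graph actually contains
# ("Placeholder" and "model_outputs", per the inspection above).
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="model.pb",
    input_arrays=["Placeholder"],
    output_arrays=["model_outputs"],
    input_shapes={"Placeholder": [1, 416, 416, 3]},
)
tflite_model = converter.convert()
with open("detect.tflite", "wb") as f:
    f.write(tflite_model)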
Any idea how to get this to work? I've been digging around all day and have come up dry... any help would be appreciated! Thanks!