I have trained an EfficientDet-D0 model with the TF2 Object Detection API to detect custom objects, and this works just fine. I have saved the checkpoints, pipeline.config, and saved_model.pb files and can reload the model from them. The issue is that I have not been able to convert this model to the TFLite format in order to use it on a Raspberry Pi. I attempted the conversion following the TF documentation (https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python) in a Google Colab notebook: https://colab.research.google.com/drive/1cnJF85aPz5VMyEJ0gzsdB3zjvXaRCG_r?usp=sharing
The conversion itself appears to succeed, but something is wrong when I instantiate the interpreter: all quantization values are 0 and the input shape is [1 1 1 3]:
import tensorflow as tf

interpreter = tf.lite.Interpreter(TFLITE_FILE_PATH)
print(interpreter.get_input_details())
[{'name': 'serving_default_input_tensor:0', 'index': 0, 'shape': array([1, 1, 1, 3], dtype=int32), 'shape_signature': array([ 1, -1, -1, 3], dtype=int32), 'dtype': <class 'numpy.uint8'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
print(input_shape)
[1 1 1 3]
When I then try to set a tensor, I get the following error:
import numpy as np

input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
ValueError: Cannot set tensor: Got value of type FLOAT32 but expected type UINT8 for input 0, name: serving_default_input_tensor:0
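The ValueError itself seems to be a plain dtype mismatch: get_input_details() reports 'dtype': <class 'numpy.uint8'>, so the dummy input presumably has to be built as uint8 rather than float32. A minimal sketch (the 512x512 resolution is an assumption; in practice I would take the shape from input_details):

```python
import numpy as np

input_shape = [1, 512, 512, 3]  # assumed; normally input_details[0]['shape']
input_dtype = np.uint8          # matches input_details[0]['dtype']

# Random test image in the tensor's own dtype instead of float32
input_data = np.random.randint(0, 256, size=input_shape, dtype=input_dtype)
# interpreter.set_tensor(input_details[0]['index'], input_data)
```

Even with that change, though, I am unsure whether the uint8 input and the [1 1 1 3] shape indicate the conversion itself went wrong.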
Does anyone know how I can correctly convert the model, or what I am doing wrong? Many thanks!