
Right now I'm trying to convert a SavedModel to TFLite for use on a Raspberry Pi. The model is a MobileNet object detection model trained on a custom dataset. The SavedModel works perfectly and keeps its input shape of (1, 150, 150, 3). However, when I convert it to a TFLite model using this code:

import tensorflow as tf

saved_model_dir = input("Model dir: ")

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir) # path to the SavedModel directory
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
  f.write(tflite_model)

And then run this code to check the interpreter's input shape:

import numpy as np
import tensorflow as tf
from PIL import Image

from os import listdir
from os.path import isfile, join

from random import choice, random

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()


input_shape = input_details[0]['shape']
print(f"Required input shape: {input_shape}")

I get an input shape of [1 1 1 3], so I can't use a 150x150 image as input.

I'm using TensorFlow 2.4 with Python 3.7.10 on Windows 10.

How would I fix this?

Zium
2 Answers


You can rely on the TFLite converter V1 API to set input shapes. Please check out the input_shapes argument at https://www.tensorflow.org/api_docs/python/tf/compat/v1/lite/TFLiteConverter.
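For example, a minimal sketch of that approach (the input array name "input_tensor" below is a placeholder; inspect your SavedModel's signature, e.g. with saved_model_cli, to find the real input name):

import tensorflow as tf

saved_model_dir = "path/to/saved_model"

# The V1 converter accepts explicit input shapes.
# Replace "input_tensor" with your model's actual input name.
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(
    saved_model_dir,
    input_arrays=["input_tensor"],
    input_shapes={"input_tensor": [1, 150, 150, 3]},
)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)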

Jae sung Chung

How about calling resize_tensor_input() before calling allocate_tensors()?

interpreter.resize_tensor_input(0, [1, 150, 150, 3], strict=True)
interpreter.allocate_tensors()
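For instance, building on that suggestion, here is a rough end-to-end sketch (the image path "test.jpg" and the divide-by-255 float preprocessing are assumptions; use whatever preprocessing your model was trained with):

import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="model.tflite")

# Resize the input tensor to the expected 150x150 shape before allocating.
interpreter.resize_tensor_input(0, [1, 150, 150, 3], strict=True)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Load and preprocess an example image (normalization here is an assumption).
img = Image.open("test.jpg").convert("RGB").resize((150, 150))
input_data = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))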
Terry Heo