I am quantizing a model. The model takes 224x224 input.
I preprocess the data with the built-in function preprocess_input(),
which centers the pixel values by subtracting the channel means.
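For illustration, mean-centering of this kind can be sketched in plain NumPy; the actual constants are internal to preprocess_input, so the values below are only placeholders:

```python
import numpy as np

# Hypothetical per-channel means; the real values are baked into the
# model's preprocess_input and may differ (these are just placeholders).
MEANS = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def center(img):
    """Subtract per-channel means from a float32 image batch (N, H, W, 3)."""
    return img - MEANS  # broadcasts over the channel axis

batch = np.full((1, 224, 224, 3), 128.0, dtype=np.float32)
out = center(batch)
print(out.shape)  # (1, 224, 224, 3)
```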
When I use a single image with this preprocessing function in representative_dataset_gen(),
everything works fine:
import cv2
import numpy as np

def representative_dataset_gen():
    pfad = './000001.jpg'
    img = cv2.imread(pfad)
    img = np.expand_dims(img, 0).astype(np.float32)
    img = preprocess_input(img)
    yield [img]
But when I use a generator to feed more than one image:
def prepare(img):
    img = np.expand_dims(img, 0).astype(np.float32)
    img = preprocess_input(img)
    return img

repDatagen = tf.keras.preprocessing.image.ImageDataGenerator(preprocessing_function=prepare)
datagen = repDatagen.flow_from_directory(folderpath, target_size=size, batch_size=1)
def representative_dataset_gen():
    for _ in range(10):
        img = datagen.next()
        yield [img]
I get the following error:
ValueError: Failed to convert value into readable tensor.
My guess: this is due to ImageDataGenerator(preprocessing_function=prepare). The TensorFlow documentation says:
function that will be applied on each input. The function will run after the image is resized and augmented. The function should take one argument: one image (Numpy tensor with rank 3), and should output a Numpy tensor with the same shape.
I tried to adjust the shape of the img output of the prepare function, with and without np.squeeze().
This results in either (1, 224, 224, 3) or (224, 224, 3), but I still get the error. I also tried tf.convert_to_tensor(),
with the same error.
def prepare(img):
    img = np.expand_dims(img, 0).astype(np.float32)
    img = preprocess_input(img, version=2)
    img = np.squeeze(img)
    arg = tf.convert_to_tensor(img, dtype=tf.float32)
    return arg
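One thing I noticed while poking at this: datagen.next() seems to return an (images, labels) tuple rather than a bare array, so I may be yielding the whole tuple. A minimal NumPy-only sketch with a dummy batch (shapes are just my model's, the labels are placeholders) of unpacking it first:

```python
import numpy as np

# flow_from_directory batches come back as (images, labels) tuples;
# simulate one such batch with dummy data (shapes are placeholders).
batch = (np.zeros((1, 224, 224, 3), dtype=np.float32),   # images
         np.zeros((1, 2), dtype=np.float32))             # one-hot labels

def representative_dataset_gen():
    for _ in range(10):
        img, _labels = batch        # keep only the image array
        yield [img]

sample = next(representative_dataset_gen())
print(type(sample[0]), sample[0].shape)  # <class 'numpy.ndarray'> (1, 224, 224, 3)
```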
Does anyone know how I have to prepare the output to get the correct tensor?
Thanks