
I am trying to compile an image segmentation (U-Net) model to run on the Edge TPU (Coral Dev Board). I have converted and quantized the model to .tflite, and I get the following error when compiling with the edgetpu_compiler: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
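
For reference, dynamic axes can also be checked programmatically: in the tensor details reported by the TFLite interpreter, a -1 in shape_signature marks a dynamic dimension (a minimal sketch; the model filename is a placeholder for my converted model):

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_quant.tflite")
for detail in interpreter.get_input_details() + interpreter.get_output_details():
    print(detail["name"], detail["shape"], detail.get("shape_signature"))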

Here is the link to tflite model: https://drive.google.com/file/d/1a62QtUfbRgNFwhOoO48PDY9kDq0RyHZY/view?usp=sharing

I have inspected the converted tflite model with Netron (see the Netron model image) and found no dynamic-sized tensors or control-flow ops. Could you kindly check the Netron image and see whether I missed anything? Is there anything else I can do to compile the model successfully?

The model is based on the Keras oxford_pets image segmentation example: https://keras.io/examples/vision/oxford_pets_image_segmentation/

Kindly help me resolve the error. Thank you in advance.

Code:

import os

import matplotlib.pyplot as plt
import numpy as np
from numpy import asarray
import tensorflow as tf
import tensorflow_datasets as tfds
from IPython.display import display
from PIL import Image, ImageOps
from tensorflow import keras
from tensorflow.keras.preprocessing.image import load_img

# get all images and display the names
# (assumption: each subdirectory of ./jobs is one job, as in the conversion code below)
jobs = sorted(os.listdir("jobs"))
input_img_paths = []
target_img_paths = []
for job in jobs:
    input_dir = "jobs/"+job+"/trainLaser"
    #print(input_dir)
    target_dir = "jobs/"+job+"/odg"
    img_size = (750, 750)
    num_classes = 2
    batch_size = 20

    input_img = sorted(
        [
            os.path.join(input_dir, fname)
            for fname in os.listdir(input_dir)
            if fname.endswith(".png") and not fname.startswith(".")
        ]
    )

    target_img = sorted(
        [
            os.path.join(target_dir, fname)
            for fname in os.listdir(target_dir)
            if fname.endswith(".png") and not fname.startswith(".")
        ]
    )
    input_img_paths += input_img
    target_img_paths += target_img

print("Number of samples input:", len(input_img_paths))
print("Number of samples target:", len(target_img_paths))
# remove stray notebook checkpoint entries, if any slipped through
try:
    input_img_paths.remove('.ipynb_checkpoints')
except ValueError:
    pass
try:
    target_img_paths.remove('.ipynb_checkpoints')
except ValueError:
    pass
for input_path, target_path in zip(input_img_paths[:10], target_img_paths[:10]):
    print(input_path, "|", target_path)


# prepare Sequence class to load and vectorise batches of data

# Sequence class that yields training/validation batches and maps labels to 0 & 1
class OxfordPets(keras.utils.Sequence):
    """Helper to iterate over the data (as Numpy arrays)."""

    def __init__(self, batch_size, img_size, input_img_paths, target_img_paths, class_weights, n_classes=2):
        self.batch_size = batch_size
        self.img_size = img_size
        self.input_img_paths = input_img_paths
        self.target_img_paths = target_img_paths
        self.n_classes = n_classes
        self.class_weights = class_weights

    def __len__(self):
        return len(self.target_img_paths) // self.batch_size

    def __getitem__(self, idx):
        """Returns tuple (input, target) correspond to batch #idx."""
        i = idx * self.batch_size
        batch_input_img_paths = self.input_img_paths[i : i + self.batch_size]
        batch_target_img_paths = self.target_img_paths[i : i + self.batch_size]
        x = np.zeros((self.batch_size,) + self.img_size + (1,), dtype="uint8")
        for j, path in enumerate(batch_input_img_paths):
            img = load_img(path, target_size=self.img_size, color_mode="grayscale")
            x[j] = np.expand_dims(img, 2)
            #x[j] = img
        y = np.zeros((self.batch_size,) + self.img_size + (1,), dtype="uint8")
        for j, path in enumerate(batch_target_img_paths):
            img = load_img(path, target_size=self.img_size, color_mode="grayscale")
            y[j] = np.expand_dims(img, 2)
            # Ground truth labels are 1, 2. Subtract one to make them 0, 1:
            y[j] -= 1
        sample_weights = np.take(np.array(self.class_weights), np.round(y[:, :, :, 0]).astype('int'))
        return x, y, sample_weights
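
# Illustration (not part of the training script): np.take above maps each pixel
# label to its class weight, e.g.
#   np.take(np.array([1, 2]), np.array([[0, 1], [1, 0]]))  # -> [[1, 2], [2, 1]]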


# prepare U-Net (Xception-style) model
from tensorflow.keras import layers
mirrored_strategy = tf.distribute.MirroredStrategy()

def get_model(img_size, num_classes):
    with mirrored_strategy.scope():

        inputs = keras.Input(shape=img_size + (1,))
        x = layers.ZeroPadding2D(padding=1)(inputs)
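        # padding 750 -> 752 lets the four stride-2 stages divide evenly
        # (752 -> 376 -> 188 -> 94 -> 47); upsampling restores 752, cropped back to 750 below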
        ### [First half of the network: downsampling inputs] ###

        # Entry block
        x = layers.Conv2D(32, 3, strides=2, padding="same")(x)

        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)

        previous_block_activation = x  # Set aside residual

        # Blocks 1, 2, 3 are identical apart from the feature depth.
        for filters in [64, 128, 256]:
            x = layers.Activation("relu")(x)
            x = layers.SeparableConv2D(filters, 3, padding="same")(x)
            x = layers.BatchNormalization()(x)

            x = layers.Activation("relu")(x)
            x = layers.SeparableConv2D(filters, 3, padding="same")(x)
            x = layers.BatchNormalization()(x)

            x = layers.MaxPooling2D(3, strides=2, padding="same")(x)

            # Project residual
            residual = layers.Conv2D(filters, 1, strides=2, padding="same")(
                previous_block_activation
            )
            x = layers.add([x, residual])  # Add back residual
            previous_block_activation = x  # Set aside next residual

        ### [Second half of the network: upsampling inputs] ###

        for filters in [256, 128, 64, 32]:
            x = layers.Activation("relu")(x)
            x = layers.Conv2DTranspose(filters, 3, padding="same")(x)
            x = layers.BatchNormalization()(x)

            x = layers.Activation("relu")(x)
            x = layers.Conv2DTranspose(filters, 3, padding="same")(x)
            x = layers.BatchNormalization()(x)

            x = layers.UpSampling2D(2)(x)

            # Project residual
            residual = layers.UpSampling2D(2)(previous_block_activation)
            residual = layers.Conv2D(filters, 1, padding="same")(residual)
            x = layers.add([x, residual])  # Add back residual
            previous_block_activation = x  # Set aside next residual

        # Add a per-pixel classification layer
        # crop 752 -> 750 to undo the initial ZeroPadding2D
        x = layers.Cropping2D(cropping=1)(x)
        outputs = layers.Conv2D(num_classes, 3, activation="softmax", padding="same")(x)

        # Define the model

        model = keras.Model(inputs, outputs)
        return model


# Free up RAM in case the model definition cells were run multiple times
keras.backend.clear_session()

# Build model
model = get_model(img_size, num_classes)
model.summary()
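
# sanity check: model.summary() should end with an output of shape
# (None, 750, 750, 2) -- one softmax score per class per pixel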


#creating train and validation split
from sklearn.model_selection import train_test_split 
train_input_img_paths, val_input_img_paths, train_target_img_paths, val_target_img_paths = train_test_split(
    input_img_paths, target_img_paths, train_size=0.8, random_state=42)
print(len(train_input_img_paths))
print(len(val_input_img_paths))

# Instantiate data Sequences for each split
train_gen = OxfordPets(batch_size, img_size, train_input_img_paths, train_target_img_paths, class_weights=[1, 2])
val_gen = OxfordPets(batch_size, img_size, val_input_img_paths, val_target_img_paths, class_weights=[1, 2])
print(len(train_gen))


class UpdatedMeanIoU(tf.keras.metrics.MeanIoU):
    def __init__(self,
               y_true=None,
               y_pred=None,
               num_classes=None,
               name=None,
               dtype=None):
        super(UpdatedMeanIoU, self).__init__(num_classes = num_classes,name=name, dtype=dtype)

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_pred = tf.math.argmax(y_pred, axis=-1)
        return super().update_state(y_true, y_pred, sample_weight)
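
# The argmax converts the softmax output (batch, H, W, num_classes) into class
# indices so MeanIoU can compare it with the integer masks; the metric is
# registered at compile time below under the name 'IOU', which plot_metrics expects.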


def add_sample_weights(image, label):
    # The weights for each class, with the constraint that:
    #     sum(class_weights) == 1.0
    class_weights = tf.constant([1.0, 2.0])
    class_weights = class_weights/tf.reduce_sum(class_weights)

    # Create an image of `sample_weights` by using the label at each pixel as an 
    # index into the `class weights` .
    sample_weights = tf.gather(class_weights, indices=tf.cast(label, tf.int32))
    return image, label, sample_weights
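
# Illustration (not part of the pipeline): tf.gather maps each pixel label to its
# normalized class weight, e.g. labels [[0, 1]] with weights [1/3, 2/3] give
# sample weights [[1/3, 2/3]].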


def plot_metrics(history):
    colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
    metrics = ['loss', 'accuracy', 'IOU']
    for n, metric in enumerate(metrics):
        name = metric.replace("_", " ").capitalize()
        plt.subplot(2, 2, n + 1)
        plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
        plt.plot(history.epoch, history.history['val_' + metric],
                 color=colors[0], linestyle="--", label='Val')
        plt.xlabel('Epoch')
        plt.ylabel(name)
        if metric == 'loss':
            plt.ylim([0, plt.ylim()[1]])
        else:
            plt.ylim([0, 1])

        plt.legend()


# Configure the model for training.
# We use the "sparse" version of categorical_crossentropy
# because our target data is integers.
with mirrored_strategy.scope():
    model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy",
                  sample_weight_mode="temporal",
                  metrics=["accuracy", UpdatedMeanIoU(num_classes=num_classes, name="IOU")])

    callbacks = [
        keras.callbacks.ModelCheckpoint(
            filepath="models/4rakete_schnecke_giveaways_weighted",
            save_weights_only=True, monitor="val_accuracy",
            mode="max", save_best_only=True),
        #keras.callbacks.TensorBoard(log_dir=logdir, histogram_freq=1,write_graph=True, write_images=True)
    ]

    # Train the model, doing validation at the end of each epoch.
    epochs = 10
    history = model.fit(train_gen, epochs=epochs, validation_data=val_gen, callbacks=callbacks)

model.save('models/4rakete_schnecke_giveaways_weighted')
plot_metrics(history)
keras.backend.clear_session()

TFLite conversion code:

import random

# random.shuffle shuffles in place and returns None, so keep the list first
input_imgs = os.listdir('jobs')
random.shuffle(input_imgs)
img_size = (750, 750)
batch_size = 20

# NB: uses the modified OxfordPets Sequence defined further below (class weights removed)
train_gen = OxfordPets(batch_size, img_size, input_imgs, [])

#-> representative dataset generator function
def representative_data_gen():
    for input_val in tf.data.Dataset.from_tensor_slices(train_gen).batch(1).take(100):
        yield [tf.cast(input_val, tf.float32)]

#-> set converter features
converter = tf.lite.TFLiteConverter.from_saved_model('models/4rakete_schnecke_giveaways_weighted/')
#converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen

#-> Ensure that if any operations can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.target_spec.supported_types = [tf.int8]
# Set the input and output tensors to uint8
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

#-> convert the model
tflite_model_quant = converter.convert()
print('conversion successful')
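
# For completeness, a sketch of how the model is written out and compiled
# (the output filename is an assumption):
with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_model_quant)
# then, in a shell:
#   edgetpu_compiler -s model_quant.tflite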

# modified Sequence class used during conversion (class weights removed); labels are 0 & 1
class OxfordPets(keras.utils.Sequence):
    """Helper to iterate over the data (as Numpy arrays)."""

    def __init__(self, batch_size, img_size, input_img_paths, target_img_paths, n_classes=2):
        self.batch_size = batch_size
        self.img_size = img_size
        self.input_img_paths = input_img_paths
        self.target_img_paths = target_img_paths
        self.n_classes = n_classes
        #self.class_weights=class_weights

    def __len__(self):
        return len(self.target_img_paths) // self.batch_size

    def __getitem__(self, idx):
        """Returns tuple (input, target) correspond to batch #idx."""
        i = idx * self.batch_size
        batch_input_img_paths = self.input_img_paths[i : i + self.batch_size]
        batch_target_img_paths = self.target_img_paths[i : i + self.batch_size]
        x = np.zeros((self.batch_size,) + self.img_size + (1,), dtype="uint8")
        for j, path in enumerate(batch_input_img_paths):
            img = load_img(path, target_size=self.img_size, color_mode="grayscale")
            x[j] = np.expand_dims(img, 2)
            #x[j] = img
        y = np.zeros((self.batch_size,) + self.img_size + (1,), dtype="uint8")
        for j, path in enumerate(batch_target_img_paths):
            img = load_img(path, target_size=self.img_size, color_mode="grayscale")
            y[j] = np.expand_dims(img, 2)
            # Ground truth labels are 1, 2. Subtract one to make them 0, 1:
            y[j] -= 1
        #sample_weights = np.take(np.array(self.class_weights), np.round(y[:, :, :, 0]).astype('int'))
        return x, y
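
Would feeding the representative dataset directly from the image paths, bypassing the Sequence, make a difference? A sketch of what I have in mind (reusing val_input_img_paths and img_size from the training script), yielding one preprocessed image at a time with a fully static shape:

def representative_data_gen():
    for path in val_input_img_paths[:100]:
        img = load_img(path, target_size=img_size, color_mode="grayscale")
        arr = np.array(img, dtype=np.float32).reshape((1,) + img_size + (1,))
        yield [arr]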
  • Hello and welcome to Stack Overflow! I don't have the opportunity to check your Drive files, so could you please share an image of the input node of the TFLite model from Netron, along with your TFLite conversion code? – Alex K. Sep 29 '22 at 08:34
  • Hello Alex, thanks for replying. The node image can be found in the Netron image link. The TFLite conversion code is pasted in the post above. – Rokngreat Spy Sep 29 '22 at 14:37
  • I have reused the oxford_pets class to generate the training dataset, out of which the representative dataset is taken. The class code is taken from the original oxford_pets model https://keras.io/examples/vision/oxford_pets_image_segmentation/ – Rokngreat Spy Sep 29 '22 at 14:45
  • Am I right that you have already tried batch_size = 1 during quantization? I have scrolled through the Netron shot, and it looks pretty normal. – Alex K. Sep 29 '22 at 14:50
  • The batch size of the input dataset is kept the same as the batch size for the model (20), but the batch size taken in by the representative datagen function is 1, as seen in the code snippet above under 'TFLite conversion code'. – Rokngreat Spy Sep 30 '22 at 08:40
  • I looked through your GitHub issue and found one similar to yours: check [this reply](https://github.com/google-coral/edgetpu/issues/453#issuecomment-939091903) – Alex K. Sep 30 '22 at 09:00
  • Have you checked this issue with 2.11 or the nightly version lately? Please share a Colab gist for further assistance. – Jan 12 '23 at 15:29

0 Answers