
I built a custom model (.h5) from Matterport's MaskRCNN implementation. I managed to save the full model, not just the weights, using model.keras_model.save(), and I assume that worked correctly.
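
For context, the save step looked roughly like this (a minimal sketch; the config object and the weights file name are placeholders, and model is the usual modellib.MaskRCNN instance):

# Sketch only: `config` and the weights file name are placeholders.
import mrcnn.model as modellib

model = modellib.MaskRCNN(mode="inference", config=config, model_dir="./logs")
model.load_weights("mask_rcnn_custom.h5", by_name=True)

# Save the full Keras model (architecture + weights), not just the weights.
model.keras_model.save("model.h5")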

I need to convert this model to ONNX for inference in Unity Barracuda, and I have been hitting several errors along the way. Here is what I tried:

T1. .h5 to ONNX using this tutorial and the keras2onnx package; I hit an error at:

model = load_model('model.h5')

Error:
ValueError: Unknown layer: BatchNorm
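
For reference, the rest of the keras2onnx route looks roughly like this (a sketch of the package's usual API, not the tutorial's exact code; it never gets past load_model()):

# Sketch of the keras2onnx flow; load_model() is where it fails for me.
import keras2onnx
from tensorflow.keras.models import load_model

model = load_model('model.h5')  # ValueError: Unknown layer: BatchNorm
onnx_model = keras2onnx.convert_keras(model, model.name)
keras2onnx.save_model(onnx_model, 'model.onnx')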

T2. Defining custom layers using this GitHub code:

model = keras.models.load_model(r'model.h5', custom_objects={
    'BatchNorm': BatchNorm,
    'tf': tf,
    'ProposalLayer': ProposalLayer,
    'PyramidROIAlign1': PyramidROIAlign1,
    'PyramidROIAlign2': PyramidROIAlign2,
    'DetectionLayer': DetectionLayer,
}, compile=False)

Errors:
ValueError: No model found in config file.
ValueError: Unknown layer: PyramidROIAlign
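
For reference, the stock layer classes can also be imported from mrcnn/model.py instead of being redefined; a sketch, assuming the unmodified Matterport layer names (the second error suggests the saved config still references PyramidROIAlign rather than the 1/2 variants):

# Sketch: import the stock layer classes instead of redefining them.
# Assumes the unmodified Matterport names (PyramidROIAlign, not the 1/2 variants).
import tensorflow as tf
from tensorflow import keras
from mrcnn.model import BatchNorm, ProposalLayer, PyramidROIAlign, DetectionLayer

model = keras.models.load_model(
    'model.h5',
    custom_objects={
        'BatchNorm': BatchNorm,
        'tf': tf,
        'ProposalLayer': ProposalLayer,
        'PyramidROIAlign': PyramidROIAlign,
        'DetectionLayer': DetectionLayer,
    },
    compile=False,
)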

T3. .h5 to .pb (frozen graph) and .pbtxt, and then .pb to ONNX using tf2onnx after finding the input and output nodes (there seems to be only one of each?):

assert d in name_to_node, "%s is not in graph" % d
AssertionError: output0 is not in graph
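
Since the AssertionError says the requested output node is not in the graph, the node names to pass to tf2onnx's --inputs/--outputs can be listed from the frozen graph first; a small sketch using plain TensorFlow (the file name is a placeholder):

# Sketch: print the node names in a frozen graph so the real
# input/output names (with ':0' appended) can be passed to tf2onnx.
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.name, node.op)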

T4. .h5 to SavedModel using tf-serving code from here and then python -m tf2onnx.convert --saved-model exported_models\coco_mrcnn\3 --opset 15 --output "model.onnx" to convert to ONNX:

ValueError: make_sure failure: variable mrcnn_detection/map/while/Enter already exists as state variable.

TL;DR: Is there any way, direct or indirect, to convert my .h5 model to ONNX? I have been stuck on this for days!

Thanks in advance.

Edit 1: It seems that keras.models.load_model() is what throws the first two errors. Is there a way to work with the .pb/.pbtxt model instead, a workaround that avoids load_model(), or a fix for the load_model() issue itself?

Edit 2:

Code for T1: custom dataset modified from Matterport's MaskRCNN implementation

Code for T4

– Caife

2 Answers


Try converting it to the SavedModel format and then to ONNX.

import numpy as np
import tensorflow as tf
from tensorflow import keras


def get_model():
    # Create a simple model.
    inputs = keras.Input(shape=(32,))
    outputs = keras.layers.Dense(1)(inputs)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model

model = get_model()
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)

# Calling `save('my_h5_model.h5')` creates an h5 file `my_h5_model.h5`.
model.save("my_h5_model.h5")

# It can be used to reconstruct the model identically.
model = keras.models.load_model("my_h5_model.h5")
tf.saved_model.save(model, "tmp_model")

Then convert it using tf2onnx.

python3 -m tf2onnx.convert --saved-model tmp_model --output "model.onnx"
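
To sanity-check the exported file, you can load it with the onnx package (a quick sketch; model.onnx is the file produced by the command above):

# Sketch: verify the converted model parses and passes the ONNX checker.
import onnx

onnx_model = onnx.load("model.onnx")
onnx.checker.check_model(onnx_model)
print([i.name for i in onnx_model.graph.input])
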
– harry
  • Hi! keras.models.load_model() seems to be the function to throw errors in both (1) and (2) – Caife Sep 01 '21 at 03:06
  • It could be a problem with your custom model, I've updated my answer with a sample model. – harry Sep 01 '21 at 03:10
  • Thanks! I tried [another way](https://github.com/moganesyan/tensorflow_model_deployment) to convert to savedModel using your suggestion. While converting that to onnx using tf2onnx.convert, I get this error - ValueError: make_sure failure: variable mrcnn_detection/map/while/Enter already exists as state variable. – Caife Sep 01 '21 at 04:58
  • could you provide a link to your full code? – harry Sep 01 '21 at 05:05
  • Sure - I have edited the question to include the code and the SavedModel attempt – Caife Sep 01 '21 at 06:22
  • i'm getting No module named 'samples' – harry Sep 01 '21 at 07:31
  • I also followed [this tutorial](https://thebinarynotes.com/how-to-train-mask-r-cnn-on-the-custom-dataset/) to help with the Matterport implementation - so my code is inside the directory ...Mask_RCNN/samples/custom. You may want to edit the import statement to reflect your directory structure, pointing to CustomConfig() of custom.py - hope that clarifies – Caife Sep 01 '21 at 07:48

This works for me.

Via the Anaconda PowerShell console (run as administrator):

pip install tf2onnx

pip install onnxmltools

Then, in a notebook (for example):

from tensorflow.python.keras.models import load_model
import os

# Tell onnxmltools/keras2onnx to treat the model as tf.keras rather than standalone Keras.
os.environ['TF_KERAS'] = '1'
import onnxmltools

# Load the Keras model and convert it straight to ONNX.
model = load_model('[h5 path]')
onnx_model = onnxmltools.convert_keras(model)

onnxmltools.utils.save_model(onnx_model, '[onnx path]')
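
To confirm the exported model actually loads, a quick check with onnxruntime can help (a sketch; onnxruntime is an extra dependency, not needed for the conversion itself):

# Sketch: open the exported model with onnxruntime and list its inputs.
import onnxruntime as ort

session = ort.InferenceSession('[onnx path]')
print([(i.name, i.shape) for i in session.get_inputs()])
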
– Nando