I'm in the process of converting the MBART model from HuggingFace Transformers to the OpenVINO IR format, and I've "successfully" broken down the original PyTorch model graph into 3 separate ONNX models: essentially two encoders and one decoder. I then used mo.py to convert each ONNX model to the IR format so I can run it with the OpenVINO Inference Engine on a "MYRIAD" device (Neural Compute Stick 2). I'm trying to test the first encoder model to see whether a simple Inference Engine load works.
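
For context, the export and conversion steps look roughly like this (the checkpoint name, the fixed sequence length of 92, and the mo.py flags are illustrative placeholders rather than my exact settings):

# Export the MBART encoder to ONNX (sketch; checkpoint and max_length are placeholders)
import torch
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50")
model.config.return_dict = False  # export plain tuples instead of ModelOutput objects
model.eval()

encoder = model.get_encoder()
dummy = tokenizer("hello world", return_tensors="pt",
                  padding="max_length", max_length=92)

torch.onnx.export(
    encoder,
    (dummy["input_ids"], dummy["attention_mask"]),
    "encoder.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    opset_version=11,
)

# Then convert to IR with Model Optimizer (MYRIAD wants FP16 weights):
#   python mo.py --input_model encoder.onnx --data_type FP16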

I'm getting the following error:

[ INFO ] Loading Inference Engine
[ INFO ] Loading network:
[ INFO ]     c:\protoc\models\translator\encoder\model.xml
[ INFO ] Device info:
[ INFO ]         MYRIAD
[ INFO ]         MKLDNNPlugin version ......... 2.1
[ INFO ]         Build ........... 2021.3.0-2774-d6ebaa2cd8e-refs/pull/4731/head
[ INFO ] Inputs number: 2
[ INFO ]     Input name: attention_mask
[ INFO ]     Input shape: [1, 92]
....

RuntimeError: Failed to compile layer "Div_25": Unsupported combination of indices in layer "Div_25". Only across channel and full batch supported.
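
For reference, the load itself is just the standard 2021.x Inference Engine Python API calls (a minimal sketch; the path mirrors the log above), and the exception is raised while the MYRIAD plugin compiles the graph:

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model=r"c:\protoc\models\translator\encoder\model.xml")
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # RuntimeError about Div_25 is raised here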

I checked the Div_25 layer, and it looks like this:


<layer id="30" name="Div_25" type="MVN" version="opset6">
            <data eps="9.999999747378752e-06" eps_mode="inside_sqrt" normalize_variance="true"/>
            <input>
                <port id="0">
                    <dim>1</dim>
                    <dim>92</dim>
                    <dim>1024</dim>
                </port>
                <port id="1">
                    <dim>1</dim>
                </port>
            </input>
            <output>
                <port id="2" precision="FP32" names="231">
                    <dim>1</dim>
                    <dim>92</dim>
                    <dim>1024</dim>
                </port>
            </output>
        </layer>

I read the MVN documentation and tried putting various values for the port id="1" dim, with no luck. Unfortunately, I don't fully understand what the error is warning me about, so I don't know how to fix it.
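
For what it's worth, my current reading (which may be off) is that port id="1" carries the axes input of MVN-6, and the layer computes something like the NumPy sketch below; axes = [-1] is a guess for a LayerNorm-style normalization over the 1024 hidden dimension:

# Rough picture of what I believe this MVN layer computes (not OpenVINO code)
import numpy as np

x = np.random.randn(1, 92, 1024).astype(np.float32)
axes = (-1,)                          # my guess at the value fed in through port id="1"
eps = 9.999999747378752e-06

mean = x.mean(axis=axes, keepdims=True)
var = x.var(axis=axes, keepdims=True)
y = (x - mean) / np.sqrt(var + eps)   # eps_mode="inside_sqrt", normalize_variance="true"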

Hovanessb

2 Answers

It seems that the configuration of the Div_25 layer in the ONNX model you are using is not yet supported by OpenVINO. You may refer to the Supported Framework Layers documentation.

Rommel_Intel

Please try using the MVN-1 operation, as it supports the across_channels flag. We also suggest using OpenVINO 2021.4, which was released recently.

Rommel_Intel