
After successfully converting my Detectron2 model to ONNX format, I can't make predictions.

I am getting the following error:

failed: Fatal error: AliasWithName is not a registered function/op

My code:

import onnx
import onnxruntime as ort
import cv2

# Load the exported model and validate its structure.
onnx_model = onnx.load("test.onnx")
onnx.checker.check_model(onnx_model)

# Read a test image; OpenCV returns an (H, W, C) uint8 BGR array.
im = cv2.imread('img.png')
print(im.shape)

# Creating the session is where onnxruntime raises the AliasWithName error.
ort_sess = ort.InferenceSession('test.onnx', providers=['CPUExecutionProvider'])
outputs = ort_sess.run(None, {'input': im})
print(outputs)
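
Update: to see exactly what onnxruntime is complaining about, the onnx package still loads the file even though session creation fails, so the graph's declared inputs and any ops without a registered schema can be listed. A minimal diagnostic sketch (it assumes nothing beyond the test.onnx file above):

import onnx
import onnx.defs

model = onnx.load("test.onnx")

# Print the graph's declared inputs and their shapes, so the feed-dict
# key ('input' above) and the expected layout can be verified.
for inp in model.graph.input:
    dims = [d.dim_value or d.dim_param for d in inp.type.tensor_type.shape.dim]
    print("input:", inp.name, dims)

# Collect every op type with no schema registered for its domain --
# these are the custom (caffe2) ops, such as AliasWithName, that
# onnxruntime cannot resolve.
custom_ops = {node.op_type for node in model.graph.node
              if not onnx.defs.has(node.op_type, node.domain)}
print("unregistered ops:", sorted(custom_ops))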

Am I doing something wrong? The documentation (https://detectron2.readthedocs.io/en/latest/modules/export.html#detectron2.export.Caffe2Tracer.export_onnx) says: "Export the model to ONNX format. Note that the exported model contains custom ops only available in caffe2, therefore it cannot be directly executed by another runtime (such as onnxruntime or TensorRT). Post-processing or transformation passes may be applied on the model to accommodate different runtimes, but we currently do not provide support for them."

What are the "post-processing or transformation passes" that I should apply?
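
Update (a sketch, not a confirmed solution): the GitHub issue linked in the comments below points away from Caffe2Tracer toward a tracing-based export that uses only standard ONNX ops, via detectron2's TracingAdapter. A minimal sketch, where `model` (an eval-mode Detectron2 model) and `image` (an already resized/normalized (C, H, W) float32 tensor) are assumptions, and whether every op converts cleanly depends on the Detectron2/PyTorch versions:

import torch
from detectron2.export import TracingAdapter

# `model` and `image` are assumptions: the eval-mode Detectron2 model
# and a (C, H, W) float32 tensor preprocessed as the model expects.
inputs = [{"image": image}]
adapter = TracingAdapter(model, inputs)

# TracingAdapter flattens Detectron2's dict-based inputs/outputs into
# plain tensors so torch.onnx.export can trace the model with standard
# ONNX ops instead of caffe2 custom ops.
torch.onnx.export(
    adapter,
    adapter.flattened_inputs,
    "model_standard.onnx",
    opset_version=16,  # assumption; adjust to your environment
)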

Vitor Bento
  • I guess you need to define your own custom op, a bit similar to what is documented in https://support.huawei.com/enterprise/en/doc/EDOC1100191776/546085bf/exporting-a-custom-operator or https://github.com/microsoft/onnxruntime-extensions/blob/main/tutorials/tf2onnx_custom_ops_tutorial.ipynb, but I don't have a concrete example for a Detectron2 conversion. – Greg7000 Sep 28 '22 at 18:51
  • A few weeks ago I contacted Facebook Research and Microsoft Research for help with this, and they created an issue about it. More information here: https://github.com/facebookresearch/detectron2/issues/4414#issuecomment-1238377946. Unfortunately it is not a completely solved problem yet. – Vitor Bento Sep 28 '22 at 19:33
  • Hi @VitorBento, any chance of a resolution? – Naga kiran Nov 04 '22 at 15:13
  • Well, it is still an open problem. The best you can do is try the branch from this link: https://github.com/facebookresearch/detectron2/issues/4414#issuecomment-1238377946 and use this Docker image: docker pull thiagocrepaldi/dlfs:vitor1. It may be possible to run that branch with a normal PyTorch Docker image today, but at the time I tried this, the Microsoft team had to build that image for me. Then open the folder detectron2/tests/onnx, file test_pytorch_onnx_onnxruntime.py, and try to understand and adapt the function test_coco_detection_faster_rcnn_R_50_FPN_3x to your case. – Vitor Bento Nov 04 '22 at 21:34
  • Even after the model is converted to ONNX, the output is raw. You need to do the post-processing that Detectron2's tools normally do, and I don't know how to do it with the ONNX model. At this stage I haven't found any help [see the sketch after these comments]. @Nagakiran – Vitor Bento Nov 04 '22 at 21:36
  • Thanks @VitorBento, that saved me a lot of time. I will post here if I find anything; I am trying to work around this with TorchScript inference. – Naga kiran Nov 06 '22 at 17:47
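
Regarding the raw outputs mentioned in the comments: a minimal, hypothetical post-processing sketch, continuing from the `outputs` list in the code above. It assumes the exported graph emits pred_boxes (N, 4), scores (N,) and pred_classes (N,) in that order; verify the real names and order with ort_sess.get_outputs() before relying on it:

# Hypothetical output layout; check ort_sess.get_outputs() first.
boxes, scores, classes = outputs[0], outputs[1], outputs[2]

# Keep only confident detections; 0.5 is an arbitrary threshold.
keep = scores > 0.5
boxes, scores, classes = boxes[keep], scores[keep], classes[keep]

# If the image was resized before inference, boxes are (x1, y1, x2, y2)
# in resized-image pixel coordinates and must be scaled back to the
# original image. The factors below are placeholders.
scale_x, scale_y = 1.0, 1.0
boxes[:, [0, 2]] *= scale_x
boxes[:, [1, 3]] *= scale_y

for box, score, cls in zip(boxes, scores, classes):
    print(f"class={int(cls)} score={score:.2f} box={box.tolist()}")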

0 Answers