
I am looking for a feature in ONNX/ONNX Runtime similar to a Keras custom layer. As I understand it, the way to do this is to implement a custom operator for experimentation. The documentation seems to point to implementing it in C++ as a shared library and then using it from Python: https://onnxruntime.ai/docs/reference/operators/add-custom-op.html

Is there a way to define a custom op in Python for ONNX, just for experimental purposes, and use it for inference? I tried following this guide, but it gives the error 'PyOp is not a registered function/op': https://onnxruntime.ai/docs/reference/operators/custom-python-operator.html

Python Code:

import numpy as np
import onnx
import onnxruntime as ort

A = onnx.helper.make_tensor_value_info('A', onnx.TensorProto.FLOAT, [4])
B = onnx.helper.make_tensor_value_info('B', onnx.TensorProto.FLOAT, [4])
C = onnx.helper.make_tensor_value_info('C', onnx.TensorProto.FLOAT, [4])
D = onnx.helper.make_tensor_value_info('D', onnx.TensorProto.FLOAT, [4])
F = onnx.helper.make_tensor_value_info('F', onnx.TensorProto.FLOAT, [4])

ad1_node = onnx.helper.make_node('Add', ['A', 'B'], ['S'])

mul_node = onnx.helper.make_node('Mul', ['C','D'], ['P'])

ad2_node = onnx.helper.make_node('Add', ['S', 'P'], ['H'])

py1_node = onnx.helper.make_node(op_type = 'PyOp', #required, must be 'PyOp'
                            inputs = ['H'], #required
                            outputs = ['F'], #required
                            domain = 'pyopadd_2', #required, must be unique
                            input_types = [onnx.TensorProto.FLOAT], #required
                            output_types = [onnx.TensorProto.FLOAT], #required
                            module = 'mymodule', #required
                            class_name = 'Add_2', #required
                            compute = 'compute') #optional, 'compute' by default

graph = onnx.helper.make_graph([ad1_node,mul_node,ad2_node, py1_node], 'multi_pyop_graph', [A,B,C,D], [F])
model = onnx.helper.make_model(graph,
                               opset_imports=[onnx.helper.make_opsetid('', 13),  # default domain for Add/Mul
                                              onnx.helper.make_opsetid('pyopadd_2', 1)],
                               producer_name='pyop_model')
onnx.save(model, './modeltemp.onnx')

ort_session = ort.InferenceSession('./modeltemp.onnx')
ort_output = ort_session.run(['F'], {'A': np.array([1, 2, 3, 4], dtype=np.float32),
                                     'B': np.array([1, 1, 1, 1], dtype=np.float32),
                                     'C': np.array([2, 2, 2, 2], dtype=np.float32),
                                     'D': np.array([3, 3, 3, 3], dtype=np.float32)})
print(ort_output)

mymodule.py

class Add_2:
    def compute(self, S):
        # elementwise: add 2 to the incoming tensor
        return S + 2
– hmedu

2 Answers


This is not how you use PyOp. First, you need to implement the operator you are trying to use in Python. Second, you need to register that operator with the ONNX Runtime session. Third, you run inference on the model that contains the custom op.

One example can be found here: https://github.com/microsoft/onnxruntime-extensions/blob/main/tutorials/tf2onnx_custom_ops_tutorial.ipynb. Look at the section called "Implementing the op in python". Warning: what you call PyOp is named PyCustomOpDef in this notebook. If you start directly from an ONNX model, you don't need to worry about anything else in the notebook, except for two things: include "ai.onnx.contrib" in the model's opset imports, and set the same "ai.onnx.contrib" domain on the custom nodes.

# for the model
DOMAIN = "ai.onnx.contrib"
VERSION = 1  # try 2 or 3; I had some issues with the versioning
new_opset = onnx.helper.make_opsetid(DOMAIN, VERSION)
loaded_model.opset_import.append(new_opset)

# for the node, like in your code
domain = 'ai.onnx.contrib',  # required, must be unique
– petre Trusca

As the custom-python-operator page clearly states, you must build onnxruntime yourself with:

--config Release --enable_language_interop_ops --build_wheel

This functionality is not included in the prebuilt versions of onnxruntime.
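For completeness, a rough sketch of what that build looks like on Linux (the clone location, branch, and wheel path are assumptions; adjust for your platform):

```shell
# Hedged sketch: build onnxruntime from source with Python interop ops enabled.
git clone --recursive https://github.com/microsoft/onnxruntime.git
cd onnxruntime
./build.sh --config Release --enable_language_interop_ops --build_wheel
# The resulting wheel is typically placed under build/Linux/Release/dist/;
# install it over any prebuilt onnxruntime package.
pip install build/Linux/Release/dist/onnxruntime-*.whl
```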

– fefe