
I would like to know if it is possible to use OpenVINO in a .NET application?

I have converted a YOLO network to an ONNX network for use with ML.NET. What I would like to do next is implement OpenVINO to see if it speeds things up. So far I have converted my ONNX model with the OpenVINO Model Optimizer, but I have not found any way to use OpenVINO in a .NET app.

Thank you

sharkyenergy

2 Answers


As of this moment, there is no official support for integrating OpenVINO with a .NET application. Instead, OpenVINO provides its own Inference Engine API, which supports both C++ and Python. You may refer here for more info.

Performance-wise, since you mention speeding things up, you could try the OpenVINO Post-Training Optimization Tool to accelerate the inference of deep learning models.

Also, make sure to choose the right precision for the deep learning model according to the hardware you are going to use for inference.

Rommel_Intel
  • Thank you. It is my understanding that if I optimize it with the optimization tool, I also have to do inference with its own inference engine. Is that correct? – sharkyenergy Mar 02 '22 at 09:02
  • @sharkyenergy Yes, you need to use OV's Inference Engine API. A very good Python example is here: https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/002-openvino-api/002-openvino-api.ipynb The Python API is aligned with C++, which means you would do something very similar if you're planning to create a C++ application. – tomdol Mar 02 '22 at 22:18

As of now there is ONNX Runtime, which lets you run inference in C# after training the model in Python and converting it into an ONNX model (model.onnx). Link for reference: https://onnxruntime.ai/docs/get-started/with-csharp.html

rohin