Questions tagged [onnxruntime]

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

See the onnxruntime GitHub project.

292 questions
0
votes
1 answer

ONNX object from PyTorch model without exporting

Is it possible to convert a PyTorch model to ONNX without exporting it to a file, and then use it as an ONNX object directly in the script?
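For the question above: torch.onnx.export accepts a file-like object as its target, so a model can be serialized into an in-memory buffer and used as an ONNX object (or an onnxruntime session) without writing a file. A minimal sketch, with a placeholder model, shapes, and names:

```python
# Minimal sketch: export to an in-memory buffer instead of a file. The tiny
# model and input shape are placeholders, not the asker's model.
import io

import onnx
import onnxruntime as ort
import torch

model = torch.nn.Linear(4, 2)          # placeholder PyTorch model
dummy_input = torch.randn(1, 4)

buffer = io.BytesIO()
torch.onnx.export(model, dummy_input, buffer)            # serialize into memory

onnx_bytes = buffer.getvalue()
onnx_model = onnx.load_model_from_string(onnx_bytes)     # ONNX object in the script
session = ort.InferenceSession(onnx_bytes, providers=["CPUExecutionProvider"])
print([o.name for o in session.get_outputs()])
```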
0
votes
1 answer

How can I translate Python onnxruntime code to Rust tract_onnx?

I'm trying to translate a Python script using onnxruntime to Rust using tract_onnx. The specific POC I'm trying to implement is the rothe_vgg.py script from the ONNX Model Zoo. This script uses three models: ultraface face detection…
user655321
  • 1,572
  • 2
  • 16
  • 33
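For reference while translating: the rothe_vgg.py script follows the usual onnxruntime pattern of one InferenceSession per model, run in sequence. A rough Python sketch of that pattern is below; the file name and the 1x3x240x320 ultraface input shape are assumptions based on the Model Zoo models the question mentions.

```python
# Rough sketch of the Python-side structure being ported to tract_onnx: one
# session per model, chained. The file name and input shape are assumptions.
import numpy as np
import onnxruntime as ort

face_det = ort.InferenceSession("ultraface-RFB-320.onnx",
                                providers=["CPUExecutionProvider"])

img = np.random.rand(1, 3, 240, 320).astype(np.float32)   # assumed ultraface input
input_name = face_det.get_inputs()[0].name
outputs = face_det.run(None, {input_name: img})            # scores and boxes

# ...detected face crops would then be fed to the age and gender models
# through their own InferenceSession objects in the same way.
```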
0
votes
1 answer

ONNX model not running on Ubuntu 18.04

I am trying to implement a custom vision solution using C#, Azure Custom Vision, and an ONNX model. My API code runs perfectly on Windows, but when I try to run the same code on Ubuntu 18.04, I get the error below. I have downloaded the trained ONNX…
0
votes
1 answer

onnxruntime built with OpenVINO fails with a "Failed to load library" error

My environment is Windows, and I want to run inference from Python using onnxruntime with OpenVINO. After installing OpenVINO, I built onnxruntime with OpenVINO; my build command is .\build.bat --update --build --build_shared_lib --build_wheel --config…
wwbnjs
  • 21
  • 5
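For build problems like this, a quick sanity check is to confirm that the custom wheel actually exposes the OpenVINO execution provider before creating a session with it. A small sketch, with a placeholder model path:

```python
# Sanity-check sketch: does the freshly built wheel expose the OpenVINO EP, and
# does a session actually pick it up? "model.onnx" is only a placeholder path.
import onnxruntime as ort

print(ort.__version__)
print(ort.get_available_providers())   # should include "OpenVINOExecutionProvider"

session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())         # providers actually loaded for this session
```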
0
votes
0 answers

Can Python statsmodels saved model file be converted to ONNX Runtime format?

I was trying to save a SARIMA model from Python statsmodels as a pickle file. I wanted to convert the saved time series model into ONNX Runtime format for real-time edge inference, but I did not find any way to do so. Is it possible to convert the saved…
Aditya Bhattacharya
  • 914
  • 2
  • 9
  • 22
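A sketch of the starting point the question describes (fitting a SARIMA model and pickling the results) is below; the series and orders are illustrative, and the pickle-to-ONNX conversion itself is exactly the open question, so it is not shown.

```python
# Sketch of the described setup only: fit a SARIMA model in statsmodels and
# persist it with pickle. Data and (p, d, q) orders are illustrative; the
# conversion of this artifact to ONNX is the unanswered part.
import pickle

import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

series = pd.Series([112.0, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118])
results = SARIMAX(series, order=(1, 1, 1), seasonal_order=(0, 0, 0, 0)).fit(disp=False)

with open("sarima.pkl", "wb") as f:
    pickle.dump(results, f)
```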
0
votes
2 answers

Getting error while importing onnxruntime ImportError: cannot import name 'get_all_providers' (Windows 10)

I have installed the onnxruntime-gpu library in my environment with pip install onnxruntime-gpu==1.2.0. nvcc --version reports Cuda compilation tools, release 10.1, V10.1.105. >>> import…
Abhishek Gangwar
  • 1,697
  • 3
  • 17
  • 29
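When this import error appears, a reasonable first step is to confirm which onnxruntime build is actually installed and what it can see (having onnxruntime and onnxruntime-gpu installed side by side is a frequent cause of broken imports). A diagnostic sketch, assuming the import itself succeeds:

```python
# Diagnostic sketch: report the installed build and its execution providers.
import onnxruntime

print(onnxruntime.__version__)
print(onnxruntime.get_device())               # e.g. "GPU" for the gpu package
print(onnxruntime.get_available_providers())  # e.g. ["CUDAExecutionProvider", ...]
```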
0
votes
1 answer

How to most efficiently feed one ONNX model's output into another in java?

I have two models built with ONNX, model A and model B. I use the ONNX Runtime Java API to load these models and make inferences with them. The workflow is that I need to compute a prediction with model A and then feed the result from model A into…
Joe
  • 418
  • 4
  • 12
0
votes
1 answer

WinML to any Onnxruntime EP

Is it possible to call any ONNX Runtime Execution Provider through the WinML API? I'm able to run custom DML through WinML, but I'm unable to find steps for going from WinML to an onnxruntime EP.
ms2579
  • 11
  • 2
0
votes
1 answer

Can an ONNX network be incompatible with onnxruntime?

I am having trouble running inference on an ONNX model, either by making (tiny) adjustments to this Windows ML tutorial, or by implementing my own ONNX Runtime code following their MNIST Tutorial. As I understand it, Windows ML makes use of ONNX…
omatai
  • 3,448
  • 5
  • 47
  • 74
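A model can be structurally valid ONNX and still target an opset or operator that a particular onnxruntime build does not implement, so checking the file and printing its opset imports is a useful first step. A hedged sketch, with a placeholder path:

```python
# Sketch: validate the model and list the opsets it imports; creating a session
# then surfaces any unsupported-operator error explicitly. Placeholder path.
import onnx
import onnxruntime as ort

model = onnx.load("model.onnx")
onnx.checker.check_model(model)                       # structural validity
print([(imp.domain, imp.version) for imp in model.opset_import])

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
print(session.get_modelmeta().graph_name)
```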
0
votes
1 answer

Yolov4 onnxruntime C++

I need to deploy a yolov4 inference model, and I want to use onnxruntime with the TensorRT backend. I don't know how to post-process the yolov4 detection results in C++. I have a sample written in Python but I cannot find a C++…
Kylo_Entro
  • 46
  • 5
0
votes
1 answer

Memory corruption when using OnnxRuntime with OpenVINO on the Intel MyriadX and Raspberry Pi 4B

I'm trying to run inference on the Intel Compute Stick 2 (MyriadX chip) connected to a Raspberry Pi 4B using OnnxRuntime and OpenVINO. I have everything set up: the OpenVINO provider is recognized by onnxruntime and I can see the Myriad in the…
perivesta
  • 3,417
  • 1
  • 10
  • 25
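For context, with the OpenVINO execution provider the target device is normally chosen through provider options; a sketch selecting the MyriadX is below. The exact "MYRIAD_FP16" device_type string is version-dependent and assumed here, and the model path is a placeholder.

```python
# Sketch: pick the MyriadX via the OpenVINO EP's provider options. The
# device_type string is version-dependent (assumed); the path is a placeholder.
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=[
        ("OpenVINOExecutionProvider", {"device_type": "MYRIAD_FP16"}),
        "CPUExecutionProvider",
    ],
)
print(session.get_providers())
```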
0
votes
1 answer

ONNX runtime is throwing TypeError when loading an onnx model

I have converted a SavedModel to an ONNX model, but when loading it via onnxruntime (import onnxruntime as rt; sess = rt.InferenceSession('model.onnx')) it throws the error below: onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph:…
Nikhil
  • 65
  • 1
  • 8
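One common way to debug an InvalidGraph error after a TensorFlow conversion is to re-convert with an explicit opset and then look at which ops the graph actually contains, which usually points at the node onnxruntime rejects. A sketch, with illustrative paths and opset:

```python
# Sketch: re-convert with an explicit opset (CLI below; paths and the opset
# number are illustrative), then list the ops the graph uses.
#
#   python -m tf2onnx.convert --saved-model ./saved_model --opset 13 --output model.onnx
#
import onnx

model = onnx.load("model.onnx")
print(sorted({node.op_type for node in model.graph.node}))
```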
0
votes
1 answer

How to get correct GPU device id for Microsoft.ML.OnnxRuntime.DirectML (.net core 3.1)?

I am using the Microsoft.ML.OnnxRuntime.DirectML NuGet package for image classification like this: var options = new SessionOptions(); options.AppendExecutionProvider_DML( 1 ); // deviceId goes here var session = new InferenceSession(…
Omni
  • 31
  • 4
0
votes
1 answer

onnxruntime Package installation in Python Plugin from Azure Data Explorer Fails

I want to install the onnxruntime package using the python plugin from the Azure Data Explorer. I followed the instructions from this site…
Torb
  • 259
  • 2
  • 12
0
votes
1 answer

What does the Onnx runtime error "Classes different from first n integers are not supported in SVC converter" mean?

I am trying to convert / store a sklearn SVC model as a .onnx file and I am getting a runtime error I do not understand. I have been able to use this same code effectively without error with a sklearn random forest classifier and a sklearn k-NN…
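The error text suggests the SVC converter expects the classifier's classes to be the consecutive integers 0..n-1. Under that assumption, re-encoding the targets with LabelEncoder before fitting is one workaround; the sketch below uses illustrative data, shapes, and file names.

```python
# Hedged sketch: assume the converter wants class labels 0..n-1, so re-encode
# the targets before fitting. Data, shapes and the output path are illustrative.
import numpy as np
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC

X = np.random.rand(40, 4).astype(np.float32)
y_raw = np.random.choice([3, 7, 9], size=40)      # labels that are NOT 0..n-1
y = LabelEncoder().fit_transform(y_raw)           # now 0, 1, 2

clf = SVC(probability=True).fit(X, y)
onx = convert_sklearn(clf, initial_types=[("input", FloatTensorType([None, 4]))])
with open("svc.onnx", "wb") as f:
    f.write(onx.SerializeToString())
```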