Questions tagged [onnxruntime]

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

See the onnxruntime GitHub project.

292 questions
2 votes · 1 answer

ONNX model inference produces different results for the same input

I'm testing an ONNX model with the same input across multiple inference calls, but it produces different results every time. For details, please refer to the Colab below…
Hank · 21 · 2
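A common cause of run-to-run differences like this (the linked Colab is not reproduced here, so this is a hedged guess) is exporting the model while layers such as Dropout are still in training mode. A minimal PyTorch sketch with a hypothetical stand-in model:

```python
import torch

# Hypothetical stand-in model: Dropout is the usual source of nondeterminism.
model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Dropout(p=0.5))
x = torch.ones(1, 4)

model.train()                 # training mode: dropout masks activations randomly
train_a = model(x)
train_b = model(x)            # very likely differs from train_a

model.eval()                  # eval mode: dropout becomes a no-op
eval_a = model(x)
eval_b = model(x)
assert torch.allclose(eval_a, eval_b)  # deterministic now

# Export AFTER model.eval() so the ONNX graph contains no random dropout:
# torch.onnx.export(model, x, "model.onnx")
```

If the export already happened in eval mode, the next suspects are non-deterministic ops or threading, but dropout left active is by far the most frequent culprit.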
2 votes · 1 answer

Error in loading ONNX model with ONNXRuntime

I'm converting a custom PyTorch model to ONNX. However, when loading it with ONNXRuntime, I encountered the following error: onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception…
nguyendhn · 423 · 1 · 6 · 19
2 votes · 0 answers

Inference and face-comparison time in real-time face recognition applications

I can do face recognition in real time using the Python insightface package and pre-trained ONNX models (https://github.com/deepinsight/insightface/tree/master/python-package). I'm facing a lot of questions and challenges, and would appreciate your help. I…
2 votes · 1 answer

ONNX C#: How do I read this object and extract the probability value?

I've saved an ONNX-converted pretrained RFC model and I'm trying to use it in my API. I'm able to call the saved model and make a prediction; however, I can't extract the predicted value. The response is very complicated and I can't seem to figure it…
confusedstudent · 353 · 3 · 11
2 votes · 1 answer

Optimizing the conversion from an OpenCV Mat/Array to an OnnxRuntime Tensor

I am using ONNXRuntime to run inference on a UNet model, and as part of preprocessing I have to convert an EMGU OpenCV matrix to an OnnxRuntime.Tensor. I achieved this with two nested for loops, which is unfortunately quite slow: var data = new…
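The per-pixel nested loop can usually be replaced by one dense, vectorized conversion. The question is about EMGU/C# (where the analogous trick is to copy the Mat's flat buffer into a DenseTensor in a single pass), but the idea is easiest to sketch in Python with numpy, assuming an HWC uint8 image like OpenCV produces:

```python
import numpy as np

# Stand-in for a 200x200 3-channel OpenCV image (HWC layout, uint8).
img = np.random.randint(0, 256, size=(200, 200, 3), dtype=np.uint8)

# One vectorized pass: HWC -> CHW, cast to float32, scale, add batch dim.
chw = img.transpose(2, 0, 1).astype(np.float32) / 255.0
tensor = np.ascontiguousarray(chw[np.newaxis, ...])  # shape (1, 3, 200, 200)
print(tensor.shape)
```

The key point carries over to C#: do one bulk copy and index arithmetic over a flat buffer rather than calling an element accessor inside two loops.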
2 votes · 2 answers

Onnxruntime vs PyTorch

I have trained YOLO-v3 tiny on my custom dataset using PyTorch. To compare inference times, I tried onnxruntime on CPU along with PyTorch on GPU and PyTorch on CPU. The average running times are around: onnxruntime cpu: 110 ms - CPU usage:…
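Comparisons like this are sensitive to methodology: the first call often includes one-time allocation (and, on GPU, asynchronous launch effects), so warm-up runs and averaging matter. A minimal timing-harness sketch, with a numpy matmul standing in for sess.run or the PyTorch forward pass:

```python
import time
import numpy as np

def infer(x, w):
    # Stand-in for sess.run(...) or model(x).
    return x @ w

x = np.random.randn(1, 512).astype(np.float32)
w = np.random.randn(512, 512).astype(np.float32)

for _ in range(10):          # warm-up: exclude one-time setup cost
    infer(x, w)

n = 100
start = time.perf_counter()
for _ in range(n):
    infer(x, w)
avg_ms = (time.perf_counter() - start) / n * 1000.0
print(f"average: {avg_ms:.3f} ms")
```

For GPU timings specifically, a synchronization call (e.g. torch.cuda.synchronize) is needed before reading the clock, or the measured time reflects only kernel launch.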
2 votes · 0 answers

How to perform batch inference with a quantized RoBERTa ONNX model?

I have converted a RoBERTa PyTorch model to ONNX and quantized it. I am able to get scores from the ONNX model for a single input data point (one sentence at a time). I want to understand how to get batch predictions using an ONNX Runtime inference session by…
2 votes · 2 answers

Can I combine two ONNX graphs together, passing the output from one as input to another?

I have a model exported from PyTorch that I'll call main_model.onnx. It has an input node I'll call main_input that expects a list of integers. I can load this in onnxruntime, send a list of ints, and it works great. I made another ONNX model I'll…
maccam912 · 792 · 1 · 7 · 22
2 votes · 2 answers

How to run inference on multiple inputs with ONNX (onnxruntime), similar to sklearn

I want to infer outputs for many inputs from an ONNX model using onnxruntime in Python. One way is to use a for loop, but that seems like a naive and slow method. Is there a way to do it the same way as sklearn? Single prediction on…
Sarthak Agrawal · 321 · 4 · 17
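sklearn's predict(X) is just one vectorized call over all rows; the onnxruntime equivalent is exporting with a dynamic batch axis and passing the whole (n_samples, n_features) array to a single session.run. The numerical idea, shown with a plain numpy linear model standing in for the session:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 2)).astype(np.float32)   # stand-in "model"
X = rng.standard_normal((10, 5)).astype(np.float32)  # 10 samples, sklearn-style

# Slow path: one call per sample (the for loop the question wants to avoid).
looped = np.stack([x @ W for x in X])

# Fast path: one call for the whole batch, as predict(X) does internally.
batched = X @ W
assert np.allclose(looped, batched)
print(batched.shape)
```

With an InferenceSession whose first input dimension is symbolic, the fast path is simply sess.run(None, {input_name: X}) over the full 2-D array.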
2 votes · 1 answer

Why can't I use ONNX Runtime training with PyTorch?

When I run from onnxruntime.capi.ort_trainer import ORTTrainer as stated at https://github.com/microsoft/onnxruntime/#training-start, I get this error: ModuleNotFoundError: No module named 'onnxruntime.capi.ort_trainer' What can I do to fix this? I…
pgmcr · 79 · 7
2 votes · 3 answers

Inference of an ONNX model (opset 11) in Windows 10 C++?

In order to use my custom TF model through WinML, I converted it to ONNX using the tf2onnx converter. The conversion finally worked using opset 11. Unfortunately, I cannot load the model in the WinRT C++ library, so I am confused about the…
2 votes · 1 answer

Onnx-to-keras and Keras2onnx alter ONNX model input layers to Nx1x200x200 instead of the original 1x1x200x200

Currently, I am trying to import an ONNX model to Keras in order to run training on datasets of grayscale images of size 1x1x200x200. However, when I convert my onnx model to Keras using onnx-to-keras() the model's input layer is changed to…
ApluUAlberta · 105 · 1 · 9
2 votes · 2 answers

How do you run a half float ONNX model using ONNXRuntime C API?

Since the C language doesn't have a half float implementation, how do you send data to the ONNXRuntime C API?
katrasnikj · 3,151 · 3 · 16 · 27
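Since IEEE half floats are just 16-bit patterns, the usual approach with the C API is to fill a uint16_t buffer with the encoded values and create the tensor with element type ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16. The encoding itself is easy to see from Python, where numpy's float16 produces exactly those bit patterns — a sketch:

```python
import numpy as np

x32 = np.array([0.1, -2.5, 3.140625], dtype=np.float32)

# Encode to half precision: each element becomes one 16-bit pattern.
x16 = x32.astype(np.float16)
bits = x16.view(np.uint16)   # the raw uint16 values a C buffer would hold
assert bits.itemsize == 2

# Viewing the same bits as float16 recovers the (precision-reduced) values.
back = bits.view(np.float16).astype(np.float32)
assert np.allclose(back, x32, atol=1e-2)
print(bits.dtype, back)
```

In C, the analogous steps are a float32-to-float16 bit conversion (or a library such as half.hpp on the C++ side) into a uint16_t array, then CreateTensorWithDataAsOrtValue over that buffer with the FLOAT16 element type.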
2 votes · 0 answers

How to convert a PyTorch tensor to an ONNX tensor in a custom layer?

I'm trying to create an ONNX realization of a PyTorch block. I made a custom block with a custom forward function. class MyConvBlockFunction(Function): @staticmethod def symbolic(g, input, conv1): from torch.onnx.symbolic_opset9 import…
Vasilyev Eugene · 117 · 1 · 7
2 votes · 0 answers

ML.NET: ONNX model with multiple outputs - bad inference time

I want to run inference on an ONNX model that has one input tensor and multiple output tensors (with different dimensions) using ML.NET and onnxruntime. I used .GetColumn to get the desired output. In order to get all outputs, I tried two different…