Questions tagged [onnxruntime]
ONNX Runtime is a cross-platform inference and training machine-learning accelerator.
292 questions
0 votes, 0 answers
Issue with ONNX Runtime dynamic axes for output shape
I'm working on a small project in which I trained a Neural Network for binary image classification. I got the training done and now I just want to create a GUI for it. Since I'm working with PyTorch and I don't want to install it as all I need is to…

Chema
- 11
- 7
0 votes, 0 answers
How To Extract Elements from A Tensor While Using ONNX Runtime C++
While I use Python onnxruntime to run a model, I get the result and extract what I need from it, like this:
y = session.run(None, inputs)[0]  # The shape of y is [1, m, n, 2]
scores1 = y[0, :, :, 0]
scores2 = y[0, :, :, 1]
Note the output shape is…

Augustus Chen
- 11
- 1
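On the C++ side, ONNX Runtime exposes the output as a flat row-major buffer (via `Ort::Value::GetTensorData<float>()`), so the slices `y[0, :, :, k]` correspond to flat index `(i * n + j) * 2 + k`. The numpy sketch below verifies that mapping (assuming row-major layout, which ONNX Runtime uses):

```python
import numpy as np

# ONNX Runtime tensors are stored row-major, so the raw C++ pointer can be
# indexed as flat[((0 * m + i) * n + j) * 2 + k] to reproduce the Python
# slices y[0, :, :, 0] and y[0, :, :, 1].
m, n = 4, 5
y = np.random.randn(1, m, n, 2).astype(np.float32)
flat = y.ravel()  # what the C++ side sees through the raw pointer

scores1 = np.array([[flat[(i * n + j) * 2 + 0] for j in range(n)]
                    for i in range(m)])
scores2 = np.array([[flat[(i * n + j) * 2 + 1] for j in range(n)]
                    for i in range(m)])

# The flat-index arithmetic reproduces the numpy slices exactly.
assert np.array_equal(scores1, y[0, :, :, 0])
assert np.array_equal(scores2, y[0, :, :, 1])
```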
0 votes, 0 answers
Loading Onnx runtime optimized model in Triton - Error Unrecognized attribute: mask_filter_value for operator Attention
I converted my model to ONNX and then ran the onnxruntime transformer optimization step as well. The model loads successfully and its logits match the native model. I then moved the model to Triton Server but am facing the following…

Hammad Hassan
- 1,192
- 17
- 29
0 votes, 0 answers
How to get float* from Tensor on Microsoft.ML.OnnxRuntime?
I'm using the NuGet package Microsoft.ML.OnnxRuntime to run inference on a YOLOv7 model in C# (.NET Framework 4.8).
After session.Run, I have a Tensor as the result; then I need to do some post-processing and
iterate over the Tensor, but getting elements…

BloodAndCat
- 1
- 3
0 votes, 0 answers
Minimalist tool to run inference using ONNX models in C++?
I have a trained ONNX model, and I would like to run inference using this model in a different environment (C++/Linux/CPU-only). I am looking for the most minimal implementation that allows this.
I do not have root privileges and I can only use a…

dan
- 1
0 votes, 0 answers
api-ms-win-core-heap-l2-1-0.dll missing on windows server 2012 R2
The application is developed with VC++2022 on Windows-11 using onnxruntime-win-x64-1.14.1.
When the application is deployed on Windows Server 2012 R2, it fails with the error:
api-ms-win-core-heap-l2-1-0.dll missing
Running Dependency Walker on Windows…

user1633272
- 2,007
- 5
- 25
- 48
0 votes, 1 answer
catboost model can not be read anymore in onnxruntime
I am new to onnxruntime and am using a friend's old code to evaluate some data with Torch and CatBoost binary classification models in C++. The code worked fine with onnxruntime v1.6.0, but when I updated it to v1.14.0, the catboost…

gasar8
- 306
- 4
- 12
0 votes, 1 answer
How to load an image as an input to an onnx model in C++ using DirectML provider
I have been trying to understand how to load an ONNX model in C++ using Visual Studio, provide input to it, and inspect the model's output, but I can't find any way to load an input into the ONNX model.
This is the latest…

Ron
- 57
- 1
- 6
0 votes, 0 answers
Onnxruntime binding.get_output() returns OrtValue objects and not actual arrays
In following the API from https://onnxruntime.ai/docs/api/python/api_summary.html, the section on running data on a device states that "Users can use the get_outputs() API to get access to the OrtValue(s) corresponding to the allocated output(s).…

JOKKINATOR
- 356
- 1
- 11
0 votes, 0 answers
Input list is zero for ONNX model
I have exported my model from PyTorch to ONNX, but I am getting an empty input list when running this code:
input_names = [input.name for input in onnx_model.graph.input]
My model is
TaggingAgent(
(_encoder): BiGraphEncoder(
(_utt_encoder):…

hemant mishra
- 51
- 5
0 votes, 0 answers
ONNXruntime doesn't work with CLion/CMake on windows
I am using CMake and CLion on Windows with the MinGW compiler, and I have been trying to add the onnxruntime library to my project.
I have installed the windows x64 1.14.1 release from the onnxruntime github and unpacked it. In my root CMakeLists.txt,…

AAce3
- 136
- 1
- 2
0 votes, 0 answers
My android app doesn't use gpu with nnapi running onnx model
I'm trying to run my ONNX model on the GPU via NNAPI in an Android environment, using this code.
/*part of MainActivity.kt*/
/*=====================================================================*/
val modelID =…
0 votes, 1 answer
Pre-allocating dynamic shaped tensor memory for ONNX runtime inference?
I am currently trying out onnxruntime-gpu and I wish to perform pre-processing of images on the GPU using NVIDIA DALI. Everything works correctly and I am able to pre-process my images, but the problem is that I wish to keep all of the data on…

JOKKINATOR
- 356
- 1
- 11
0 votes, 0 answers
Optimizing Sentence Transformer models using HuggingFace Optimum
I am looking to optimize some of the sentence-transformer models from Hugging Face using the Optimum library. I am following this documentation:
https://huggingface.co/blog/optimum-inference
I understand the process but I am not able to use model_id…

satish silveri
- 358
- 3
- 17
0 votes, 0 answers
Onnx batch prediction slower than sequential prediction
I have an ENet model that performs image segmentation. I trained the model in TensorFlow, converted it to .onnx, and I'm running GPU inference with CUDA and OnnxRuntime in a C# .NET 6 Windows application. I would like to predict 16 images (512x512x3)…

Michal Cicatka
- 55
- 1
- 7