Questions tagged [onnxruntime]

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

See the onnxruntime GitHub project.

292 questions
1
vote
1 answer

Onnx Runtime Adding Multiple Initializers in Python

When preparing the session options for ONNX Runtime, I receive an onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException when trying to add more than one initializer at a time. See code: import onnxruntime import numpy as np params =…
Ari
  • 563
  • 2
  • 17
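A minimal sketch of the pattern in question, assuming hypothetical initializer names "w1"/"w2" and a placeholder model path: each numpy array is wrapped in an OrtValue and registered via SessionOptions.add_initializer, and the OrtValues are kept alive until the session is created, since onnxruntime references the buffers rather than copying them.

import numpy as np
import onnxruntime

# Hypothetical initializer names and shapes; use the names from your model.
params = {
    "w1": np.random.rand(3, 3).astype(np.float32),
    "w2": np.random.rand(3, 3).astype(np.float32),
}

so = onnxruntime.SessionOptions()
ort_values = []  # keep the OrtValues alive until the session exists
for name, array in params.items():
    ov = onnxruntime.OrtValue.ortvalue_from_numpy(array)
    ort_values.append(ov)
    so.add_initializer(name, ov)

session = onnxruntime.InferenceSession("model.onnx", sess_options=so)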
1
vote
0 answers

Can't build Max/MSP external with onnx-runtime, "LNK2019: unresolved external symbol..."

I am trying to write a Max external to do inference on an .onnx neural network. I have followed several tutorials on certain steps but fail to combine them: I managed to create a C++ console app in VS that loads my .onnx model and runs…
1
vote
0 answers

Is the ONNX computational graph static or dynamic?

As mentioned in the onnxruntime documentation: Out of the box, ONNXRuntime applies a series of optimizations to the ONNX graph, combining nodes where possible and factoring out constant values (constant folding). My question is: Is the exported…
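The graph ONNX Runtime actually executes can be inspected directly. A minimal sketch, assuming a placeholder model path: enabling full optimizations and setting optimized_model_filepath makes the runtime serialize the post-optimization (e.g. constant-folded) graph to disk, where it can be compared against the exported one.

import onnxruntime

so = onnxruntime.SessionOptions()
# Apply all graph optimizations, including constant folding.
so.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
# Serialize the optimized graph so it can be diffed against the original.
so.optimized_model_filepath = "model_optimized.onnx"

session = onnxruntime.InferenceSession("model.onnx", sess_options=so)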
1
vote
0 answers

Dependency header not found for sub directory in cmake

I have the following project structure:
sandbox/
├── CMakeLists.txt
├── external
│   └── test_pkg
│       ├── CMakeLists.txt
│       ├── cmake
│       │   └── FindOnnxruntime.cmake
│       ├── include
│       │   └── pkg
│       │       └──…
mro47
  • 83
  • 5
1
vote
1 answer

onnxruntime C++: how to get the outputTensor's dynamic shape?

Try using something like this: std::vector<Ort::Value> ort_inputs; for (int i = 0; i < inputNames.size(); ++i) { ort_inputs.emplace_back(Ort::Value::CreateTensor( memoryInfo, static_cast<float*>(inputs[i].data),…
Nicholas Jela
  • 2,540
  • 7
  • 24
  • 40
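For reference, the same idea in the Python API, as a sketch with a placeholder model path and an assumed input shape: dynamic axes appear as None or symbolic names in the model metadata, and only the tensors returned by run() carry fully resolved shapes.

import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession("model.onnx")

# Metadata shape: dynamic axes show up as None or as symbolic names.
print(session.get_outputs()[0].shape)

# The concrete output shape is only known after running the model.
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)  # fully resolved at runtime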
1
vote
1 answer

How can one profile an ONNX model with random inputs, without having to specify the input shape?

I was given several ONNX models. I'd like to profile them. How can one profile them with random inputs, without having to specify the input shape? I'd prefer not to have to manually find out the input shape for each model and format my random inputs…
Franck Dernoncourt
  • 77,520
  • 72
  • 342
  • 501
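A sketch of one way to do this, assuming float32 inputs and a placeholder model path: read each input's shape from the session metadata, substitute a fixed size for any symbolic dimension, and let the built-in profiler write a chrome-trace JSON.

import numpy as np
import onnxruntime

so = onnxruntime.SessionOptions()
so.enable_profiling = True  # emit a chrome-trace JSON file on end_profiling()
session = onnxruntime.InferenceSession("model.onnx", sess_options=so)

feeds = {}
for inp in session.get_inputs():
    # Symbolic or unknown dimensions appear as strings or None;
    # replace them with a fixed size (1 here) for the random input.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    feeds[inp.name] = np.random.rand(*shape).astype(np.float32)  # assumes float32

session.run(None, feeds)
print(session.end_profiling())  # path of the written profile file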
1
vote
1 answer

How to use onnxruntime in a native Android library

I need to use the onnxruntime library in an Android project, but I can't understand how to configure CMake to be able to use C++ headers and *.so from AAR. I created a new Android Native Library module and put onnxruntime-mobile-1.11.0.aar into libs…
1
vote
0 answers

The outputs of ncnn and onnx are not the same

I'm trying to convert an onnx model to an ncnn model. I use the command ~/data/ncnn-master/build/tools/onnx/onnx2ncnn ./flip_sim.onnx flip_sim.param flip_sim.bin to get the .param and .bin files. When I input a whole white image, the result of onnxruntime and…
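One way to narrow this down is to generate the onnxruntime reference output for the same all-white input and compare with a tolerance rather than exact equality. A sketch, assuming the preprocessing (shape, scaling) matches what the ncnn side does:

import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession("flip_sim.onnx")
inp = session.get_inputs()[0]

# All-white image; dynamic dims replaced with 1. Match the ncnn preprocessing.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
white = np.ones(shape, dtype=np.float32)

ref = session.run(None, {inp.name: white})[0]
# Then compare against the ncnn result with a tolerance, e.g.:
# np.allclose(ref, ncnn_out, rtol=1e-3, atol=1e-5)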
1
vote
1 answer

onnxruntime-gpu failing to find onnxruntime_providers_shared.dll when run from a pyinstaller-produced exe file of the project

Short: I run my model in PyCharm and it works, using the GPU by way of CUDAExecutionProvider. I create an exe file of my project using pyinstaller and it doesn't work anymore. Long & Detailed: In my project I train a tensorflow model and convert it to…
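A common cause is that PyInstaller does not pick up the provider DLLs that onnxruntime-gpu loads dynamically at runtime. A hedged sketch of a .spec fragment that bundles them explicitly; the file names, entry script, and destination folder are assumptions based on the onnxruntime package layout:

# Fragment of a PyInstaller .spec file (Analysis section only).
import os
import onnxruntime

ort_capi = os.path.join(os.path.dirname(onnxruntime.__file__), "capi")

a = Analysis(
    ["main.py"],  # placeholder entry script
    binaries=[
        # Ship the dynamically loaded provider DLLs next to the package.
        (os.path.join(ort_capi, "onnxruntime_providers_shared.dll"), "onnxruntime/capi"),
        (os.path.join(ort_capi, "onnxruntime_providers_cuda.dll"), "onnxruntime/capi"),
    ],
)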
1
vote
0 answers

How to check bus utilization / bus load for GPU during ML inference?

I am running ML inference for image recognition on the GPU using onnxruntime and I am seeing an upper limit on how much performance improvement batching of images gives me - there is a reduction in inference time up to around a batch_size of 8,…
sn710
  • 581
  • 5
  • 20
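Host-to-device bus load can be sampled from NVML while the inference loop runs. A minimal sketch using the pynvml bindings, assuming GPU index 0:

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0

# Compute and memory-controller utilization over the last sample window.
util = pynvml.nvmlDeviceGetUtilizationRates(handle)
print(f"GPU {util.gpu}%  memory {util.memory}%")

# PCIe throughput in KB/s, a proxy for host<->device bus load.
tx = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_TX_BYTES)
rx = pynvml.nvmlDeviceGetPcieThroughput(handle, pynvml.NVML_PCIE_UTIL_RX_BYTES)
print(f"PCIe TX {tx} KB/s  RX {rx} KB/s")

pynvml.nvmlShutdown()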
1
vote
1 answer

failed to inference ONNX model: TypeError: Cannot read properties of undefined (reading 'InferenceSession')

I tried to replicate the example found here: https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js/quick-start_onnxruntime-web-bundler: import * as React from 'react'; import ort from 'onnxruntime-web' import regeneratorRuntime…
Raphael10
  • 2,508
  • 7
  • 22
  • 50
1
vote
1 answer

CUDA libcublas.so.11 Error when using GPUs inside ONNX Docker Container

While programming with Python 3.6 on a DGX Station (NVIDIA) in an ONNX Runtime environment, using the following libraries: mxnet==1.5.x onnxruntime-gpu==1.7.x, I see the following error: OSError: libcublas.so.11: cannot open shared object file: No such file…
khawarizmi
  • 593
  • 5
  • 19
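onnxruntime-gpu 1.7.x is built against CUDA 11, which is where the libcublas.so.11 requirement comes from. A quick diagnostic sketch, with a placeholder model path, is to check which providers are available and which ones the session actually ends up with:

import onnxruntime

print(onnxruntime.get_available_providers())

session = onnxruntime.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
# If libcublas.so.11 cannot be loaded, the session may fall back to CPU only.
print(session.get_providers())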
1
vote
1 answer

How to load a local ort model in a React Native app for Onnxruntime inference

I am struggling to load a locally hosted Onnxruntime model in React Native. Imports: import { Asset } from 'expo-asset'; import { InferenceSession } from "onnxruntime-react-native"; Here is what I am doing and the error it gives me. const…
Cyprian
  • 9,423
  • 4
  • 39
  • 73
1
vote
1 answer

ONNX Runtime Inference | session.run() multiprocessing

Goal: run inference in parallel on multiple CPU cores. I'm experimenting with inference using simple_onnxruntime_inference.ipynb. Individually: outputs = session.run([output_name], {input_name: x}) Many: outputs = session.run(["output1", "output2"],…
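A sketch of the usual workaround, assuming a placeholder model path and input shape: InferenceSession objects are not picklable, so each worker process builds its own session in a pool initializer rather than sharing one with the parent.

import multiprocessing as mp
import numpy as np
import onnxruntime

_session = None

def _init_worker(model_path):
    # Each worker builds its own session; sessions cannot be pickled.
    global _session
    so = onnxruntime.SessionOptions()
    so.intra_op_num_threads = 1  # avoid oversubscribing cores across workers
    _session = onnxruntime.InferenceSession(model_path, sess_options=so)

def _predict(x):
    return _session.run(None, {_session.get_inputs()[0].name: x})[0]

if __name__ == "__main__":
    batch = [np.random.rand(1, 10).astype(np.float32) for _ in range(8)]  # assumed shape
    with mp.Pool(4, initializer=_init_worker, initargs=("model.onnx",)) as pool:
        results = pool.map(_predict, batch)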
1
vote
2 answers

ValueError: Unsupported ONNX opset version: 13

Goal: successfully run Notebook as is on Jupyter Labs. Section 2.1 throws a ValueError, I believe because of the version of PyTorch I'm using. PyTorch 1.7.1 Kernel conda_pytorch_latest_p36 Very similar SO post; the solution was to use the latest…
DanielBell99
  • 896
  • 5
  • 25
  • 57
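PyTorch 1.7.x can only export up to ONNX opset 12; opset 13 export landed in PyTorch 1.8. A sketch of the two fixes, with a placeholder model:

import torch

model = torch.nn.Linear(10, 2)  # placeholder model
dummy = torch.randn(1, 10)

# Option 1: request an opset this torch version supports.
torch.onnx.export(model, dummy, "model.onnx", opset_version=12)

# Option 2: upgrade PyTorch (>= 1.8) and keep opset_version=13.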