Questions tagged [onnx]

ONNX is an open format to represent deep learning models and enable interoperability between different frameworks.

ONNX

The Open Neural Network Exchange (ONNX) is an open-source artificial intelligence ecosystem. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them. ONNX is widely supported and available in many frameworks, tools, and hardware platforms. It is developed and supported by a community of partners.

809 questions
3 votes · 0 answers

RuntimeError: Exporting the operator grid_sampler to ONNX opset version 9 is not supported

I am trying to export a PyTorch text detection model to ONNX format. The model uses grid_sample in the code of one module. I am unable to convert it to the ONNX format because of the following…
Anonymous
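
A possible workaround, sketched under the assumption of a recent PyTorch release: ONNX only gained a GridSample operator in opset 16, so requesting opset_version=16 instead of the default 9 is often enough. The toy module below stands in for the question's model.

```python
import torch
import torch.nn.functional as F

class GridSampleBlock(torch.nn.Module):
    """Toy stand-in for a module that calls grid_sample."""
    def forward(self, feat, grid):
        return F.grid_sample(feat, grid, align_corners=False)

model = GridSampleBlock().eval()
feat = torch.randn(1, 3, 32, 32)
grid = torch.rand(1, 16, 16, 2) * 2 - 1   # sampling coordinates in [-1, 1]

# opset 16 is the first opset that defines GridSample; older PyTorch
# releases may still refuse to export it.
torch.onnx.export(model, (feat, grid), "grid_sample_block.onnx", opset_version=16)
```
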
3 votes · 1 answer

How to convert a tflite model into a frozen graph (.pb) in Tensorflow?

I would like to convert an integer quantized tflite model into a frozen graph (.pb) in Tensorflow. I read through and tried many solutions on StackOverflow and none of them worked. Specifically, toco didn't work (output_format cannot be…
nikolai_ye
3 votes · 0 answers

Converting caffe model to ONNX format - problem with coremltools

I wanted to convert my face detection model written in caffe (https://github.com/adelekuzmiakova/onnx-converter/blob/master/res10_300x300_ssd_iter_140000.caffemodel) to ONNX format. I was following this tutorial:…
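
For reference, the route being attempted here is Caffe → Core ML → ONNX. A rough sketch, assuming an older coremltools release that still ships the Caffe converter (it was removed from recent versions) and onnxmltools for the second hop; the deploy.prototxt path is a placeholder:

```python
import coremltools
import onnxmltools

# Caffe -> Core ML (requires an old coremltools that still has the Caffe converter).
coreml_model = coremltools.converters.caffe.convert(
    ("res10_300x300_ssd_iter_140000.caffemodel", "deploy.prototxt")
)

# Core ML -> ONNX via onnxmltools (convert_coreml is deprecated in newer releases).
onnx_model = onnxmltools.convert_coreml(coreml_model)
onnxmltools.utils.save_model(onnx_model, "res10_ssd.onnx")
```
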
3 votes · 1 answer

Object Detection Model (PyTorch) to ONNX: empty output by ONNX inference

I try to convert my PyTorch object detection model (Faster R-CNN) to ONNX. I have two setups. The first one is working correctly but I want to use the second one for deployment reasons. The difference lies in the example image which I use for the…
Tom
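
A minimal sketch using a stock torchvision Faster R-CNN as a stand-in for the model in the question (the actual model and example image are not shown). Detection models need opset 11 or later for ops like NonMaxSuppression, and exporting with a representative image rather than random noise is often worth trying when outputs come back empty:

```python
import numpy as np
import torch
import torchvision
import onnxruntime as ort

model = torchvision.models.detection.fasterrcnn_resnet50_fpn().eval()

# torchvision detection models take a list of 3D image tensors.
images = [torch.rand(3, 300, 400)]
torch.onnx.export(model, (images,), "fasterrcnn.onnx", opset_version=11)

sess = ort.InferenceSession("fasterrcnn.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
boxes, labels, scores = sess.run(
    None, {input_name: np.random.rand(3, 300, 400).astype(np.float32)}
)
print(boxes.shape, labels.shape, scores.shape)
```
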
3 votes · 0 answers

Can't convert Core ML model to Onnx (then to Tensorflow Lite)

I'm trying to convert a trained Core ML model to TensorFlow Lite. I find I need to convert it to ONNX first. The problem is that I get errors. I've tried with different versions of Python, onnxmltools, winmltools and it doesn't seem to work. I also…
aof
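
For the Core ML → ONNX hop specifically, a sketch along the lines of older onnxmltools examples; convert_coreml has since been deprecated, so a pinned older onnxmltools/coremltools pair may be needed, and the .mlmodel path is a placeholder:

```python
import coremltools
import onnxmltools

# Load the Core ML spec and convert it to an ONNX model.
coreml_model = coremltools.utils.load_spec("MyModel.mlmodel")
onnx_model = onnxmltools.convert_coreml(coreml_model, "MyModel")
onnxmltools.utils.save_model(onnx_model, "MyModel.onnx")
```
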
3 votes · 0 answers

Create bounding box from ONNX model output

Hello everyone, I am using an ONNX object detection model to detect items in a picture. I implemented the code by following the steps in the ONNX.js GitHub repository (https://github.com/microsoft/onnxjs). The code is working well, but I don't know how…
Aakash Bhadana
3 votes · 0 answers

Channels dimension index in the input shape while porting Pytorch models to Tensorflow

One of the major problems I've encountered when converting PyTorch models to TensorFlow through ONNX is slowness, which appears to be related to the input shape, even though I was able to get bit-exact outputs with the two frameworks. While the…
SomethingSomething
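
Background for the slowdown, as a short sketch: PyTorch and ONNX are channels-first (NCHW) while TensorFlow kernels generally favor channels-last (NHWC), so converted graphs often end up with Transpose nodes around every convolution even when the numerics match exactly:

```python
import numpy as np

x_nchw = np.random.rand(1, 3, 224, 224).astype(np.float32)  # batch, channels, H, W
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))                 # batch, H, W, channels
print(x_nchw.shape, "->", x_nhwc.shape)                     # (1, 3, 224, 224) -> (1, 224, 224, 3)
```
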
3 votes · 3 answers

Quantization of ONNX model

I am trying to quantize an ONNX model using the onnxruntime quantization tool. My code is below for quantization: import onnx from quantize import quantize, QuantizationMode # Load the onnx model model =…
Parag Jain
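
The `quantize`/`QuantizationMode` imports in the excerpt come from an older quantization script; recent onnxruntime releases expose the tooling under onnxruntime.quantization instead. A sketch with placeholder paths:

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    "model.onnx",           # float32 input model
    "model.quant.onnx",     # quantized output model
    weight_type=QuantType.QUInt8,
)
```
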
3 votes · 2 answers

Cross-compile C++ file: file format not recognized

I am trying to cross-compile C++ code for an ARM processor from a Linux Ubuntu VM. A normal compilation works without errors. When I try the following command I get an error: arm-linux-gnueabihf-g++ main.cpp onnx.proto3.pb.cc -o readonnx pkg-config…
USER9123
3 votes · 2 answers

How to load base onnx model in ArmNN for Linux in C++

I am trying to create a C++ standalone app based on ArmNN that operates on ONNX models. To start with I have downloaded a few standard models for testing, and while trying to load the model I see a crash saying "Tensor numDimensions must be greater…
Rahul Chowdhury
3 votes · 1 answer

RuntimeError: Only tuples, lists and Variables supported as JIT inputs, but got NoneType

My code is: a = torch.randn(1, 80, 100, requires_grad=True); torch.onnx.export(waveglow, a, "waveglow.onnx"). I am trying to export a PyTorch model to ONNX format so I can use it in TensorRT. While testing my model in PyTorch the input tensor dimension is…
user10223107
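
This error usually means one of the values flowing through the traced forward (an argument or a return value) is None at export time; torch.onnx.export traces model(*args) and only accepts tensors, tuples, and lists. A generic sketch of the usual workaround, wrapping the model so only concrete tensors are involved; the wrapper is illustrative and not WaveGlow's real signature:

```python
import torch

class ExportWrapper(torch.nn.Module):
    """Expose a tensor-only forward() around the original model."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, spect):
        # Pass concrete values for anything that would otherwise be None.
        return self.model(spect)

spect = torch.randn(1, 80, 100)
wrapped = ExportWrapper(waveglow).eval()   # `waveglow` is the model from the question
torch.onnx.export(wrapped, (spect,), "waveglow.onnx", opset_version=11)
```
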
3 votes · 1 answer

Trouble using openCV to load a net from ONNX (python/pytorch)

I'm trying to load a trained .onnx model (from a neural-style-transfer algorithm) into cv2. I've seen that there is a cv.dnn.readNetFromONNX() function, but there is no such function in cv2. I can't seem to import or load opencv as cv, and as…
MvR
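
readNetFromONNX lives in the cv2.dnn submodule of OpenCV builds that include the DNN module (4.x, or late 3.4.x); `cv` is only the historical name of the old bindings. A sketch with a placeholder model file:

```python
import cv2
import numpy as np

net = cv2.dnn.readNetFromONNX("style_transfer.onnx")

image = np.zeros((224, 224, 3), dtype=np.uint8)        # stand-in input image
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0 / 255.0, size=(224, 224))
net.setInput(blob)
out = net.forward()
print(out.shape)
```
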
3 votes · 1 answer

ImportError: No module named 'onnx_backend'?

I have installed ONNX from this URL https://github.com/onnx/onnx and am now trying to run some models from here https://github.com/onnx/models#face_detection. The problem is that importing import numpy as np and import onnx works, but when I try to…
Rajnish Kumar
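
`pip install onnx` does not provide an `onnx_backend` module; the examples in onnx/models assume a separate execution backend (e.g. Caffe2 or onnx-tf) is installed. One backend-free alternative is to run the downloaded model with onnxruntime; the file name below is a placeholder:

```python
import numpy as np
import onnx
import onnxruntime as ort

model = onnx.load("face_detector.onnx")
onnx.checker.check_model(model)            # sanity-check the graph first

sess = ort.InferenceSession("face_detector.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]   # fill dynamic dims with 1
outputs = sess.run(None, {inp.name: np.random.rand(*shape).astype(np.float32)})
print([o.shape for o in outputs])
```
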
3 votes · 3 answers

How to read individual layers' weight & bias values from ONNX model?

How do I get weight/bias matrix values from an ONNX model? I can currently get the inputs, kernel size, stride and pad values from model.onnx. I load the model and then read the graph nodes to get the same: import onnx m =…
25b3nk
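
A sketch of one way to do this: the trained weights and biases live in graph.initializer, and onnx.numpy_helper converts each TensorProto to a NumPy array; matching an initializer name against a node's inputs tells you which layer it belongs to (the model path is a placeholder):

```python
import onnx
from onnx import numpy_helper

m = onnx.load("model.onnx")

# name -> NumPy array for every trained parameter in the graph
weights = {init.name: numpy_helper.to_array(init) for init in m.graph.initializer}
for name, arr in weights.items():
    print(name, arr.shape, arr.dtype)

# Associate parameters with layers: a node's inputs that are also
# initializers are that node's weight/bias tensors.
for node in m.graph.node:
    params = [i for i in node.input if i in weights]
    if params:
        print(node.op_type, node.name, params)
```
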
2 votes · 0 answers

How to use TopK operator in ONNX?

I am writing a converter and calculator to convert my custom sklearn transformers into ONNX models. I need to calculate the median of my data points. Interesting point - ONNX has no function to calculate the median (at least I didn't find anything…
paradocslover
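
A minimal sketch of a graph that uses TopK directly via onnx.helper; since opset 10, K is passed as a second int64 input rather than an attribute. For an odd number of points, the median can then be read off as the smallest of the top ceil(n/2) values:

```python
import numpy as np
import onnx
from onnx import helper, TensorProto
import onnxruntime as ort

# Single TopK node: top-3 largest values along the last axis.
node = helper.make_node(
    "TopK",
    inputs=["X", "K"],
    outputs=["Values", "Indices"],
    axis=-1,
    largest=1,
    sorted=1,
)
graph = helper.make_graph(
    [node],
    "topk_demo",
    inputs=[
        helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 5]),
        helper.make_tensor_value_info("K", TensorProto.INT64, [1]),
    ],
    outputs=[
        helper.make_tensor_value_info("Values", TensorProto.FLOAT, [1, 3]),
        helper.make_tensor_value_info("Indices", TensorProto.INT64, [1, 3]),
    ],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)

sess = ort.InferenceSession(model.SerializeToString(), providers=["CPUExecutionProvider"])
x = np.array([[5.0, 1.0, 3.0, 2.0, 4.0]], dtype=np.float32)
values, indices = sess.run(None, {"X": x, "K": np.array([3], dtype=np.int64)})
print(values)   # [[5. 4. 3.]] -> 3.0 is the median of the 5 values
```
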