Questions tagged [onnx]

ONNX is an open format to represent deep learning models and enable interoperability between different frameworks.

ONNX

The Open Neural Network Exchange (ONNX) is an open-source artificial intelligence ecosystem. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them. ONNX is widely supported and can be found in many frameworks, tools, and hardware. It is developed and supported by a community of partners.

809 questions
3 votes • 1 answer

rembg throws RuntimeError: LoadLibrary failed: onnxruntime_providers_tensorrt.dll

I am not able to generate the image with its background removed: from rembg import remove; from PIL import Image; input_path = "crop.jpeg"; output_path = 'crop1.png'; input = Image.open(input_path); output = remove(input); output.save(output_path). I…
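A minimal sketch of the usual workaround, assuming a recent rembg release whose new_session() forwards a providers list to onnxruntime.InferenceSession: pinning the CPU provider stops onnxruntime from probing onnxruntime_providers_tensorrt.dll at all. Uninstalling onnxruntime-gpu in favour of the plain onnxruntime package achieves the same without code changes.

    from rembg import remove, new_session
    from PIL import Image

    # Force the CPU provider so the TensorRT provider DLL is never loaded
    session = new_session("u2net", providers=["CPUExecutionProvider"])

    img = Image.open("crop.jpeg")
    out = remove(img, session=session)
    out.save("crop1.png")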
3 votes • 0 answers

pytorch export to onnx - is there a mapping between the pytorch layer names and onnx layer names?

I am exporting a float PyTorch model to ONNX. In addition, I quantize the float PyTorch model with native PyTorch quantization. My problem is that I need a map between observer names (which come from the PyTorch model's layer names) and ONNX layers…
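There is no official mapping table, but torch.onnx.export embeds the nn.Module scope path in ONNX node names, which can be matched back to the dotted layer names the observers were registered under (the exact name format varies by torch version). A sketch of walking the exported graph:

    import onnx

    model = onnx.load("model.onnx")
    for node in model.graph.node:
        # Node names such as "/encoder/layers.0/Conv" embed the module
        # scope, i.e. the dotted path of the originating PyTorch layer
        print(node.name, node.op_type, list(node.input), list(node.output))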
3 votes • 1 answer

Equivalent of predict_proba of scikit-learn for ONNX C++ API

I have trained a classification model and I use the ONNX format of that model in C++ to predict values as follows: auto inputOnnxTensor = Ort::Value::CreateTensor(memoryInfo, inputValues.data(), inputValues.size(), inputDims.data(),…
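On the export side, a common approach (a sketch, assuming the classifier comes from scikit-learn via skl2onnx) is to disable the default ZipMap wrapper so the second model output is a plain float tensor of class probabilities, the predict_proba analogue that the C++ API's Run() can read directly:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from skl2onnx import to_onnx

    X = np.random.rand(50, 4).astype(np.float32)
    y = (X.sum(axis=1) > 2.0).astype(np.int64)
    clf = LogisticRegression().fit(X, y)

    # zipmap=False: emit probabilities as a float tensor rather than a
    # sequence of maps, which the C++ API cannot index conveniently
    onx = to_onnx(clf, X[:1], options={id(clf): {"zipmap": False}})
    with open("clf.onnx", "wb") as f:
        f.write(onx.SerializeToString())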
3 votes • 2 answers

OpenCV::dnn::readNet throwing exception

I am following this tutorial to load the yolov5*.onnx models with the OpenCV DNN module and use them to run inference. I get the following error when trying to load the model: [ERROR:0@10.376] global…
usamazf • 3,195 • 4 • 22 • 40
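A sketch of the loading path that usually works, assuming the model is re-exported with a fixed input size and an older opset (cv::dnn is known to reject dynamic shapes and very new opsets; YOLOv5's export.py accepts --include onnx --opset 12):

    import cv2

    net = cv2.dnn.readNetFromONNX("yolov5s.onnx")  # fixed-shape, opset 12 export

    img = cv2.imread("image.jpg")
    # YOLOv5 expects a 640x640 RGB input scaled to [0, 1]
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (640, 640), swapRB=True, crop=False)
    net.setInput(blob)
    out = net.forward()
    print(out.shape)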
3 votes • 2 answers

PyTorch to ONNX export, ATen operators not supported, onnxruntime hangs

I want to export a roberta-base language model to ONNX format. The model uses RoBERTa embeddings and performs a text classification task. from torch import nn; import torch.onnx; import onnx; import onnxruntime; import torch; import transformers; from…
Alexander Borochkin • 4,249 • 7 • 38 • 53
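The ATen fallback usually disappears once the opset is raised, since newer opsets define ONNX equivalents for the offending operators. A sketch, assuming a stock roberta-base classifier from transformers:

    import torch
    import transformers

    model = transformers.AutoModelForSequenceClassification.from_pretrained("roberta-base")
    model.eval()
    tok = transformers.AutoTokenizer.from_pretrained("roberta-base")
    enc = tok("example input", return_tensors="pt")

    torch.onnx.export(
        model,
        (enc["input_ids"], enc["attention_mask"]),
        "roberta.onnx",
        input_names=["input_ids", "attention_mask"],
        output_names=["logits"],
        dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                      "attention_mask": {0: "batch", 1: "seq"}},
        opset_version=14,  # high enough for ops the default opset lacks
    )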
3 votes • 0 answers

Why does using folding give an error while exporting this model to onnx from pytorch?

I have the following model: class BertClassifier(nn.Module): """ Class defining the classifier model with a BERT encoder and a single fully connected classifier layer. """ def __init__(self, dropout=0.5, num_labels=24): …
Kroshtan • 637 • 5 • 17
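Constant folding evaluates parts of the traced graph at export time, and some encoder/dropout patterns fail inside that pass; the quick check is exporting with folding disabled. A self-contained sketch with a stand-in module (the real BertClassifier lives in the question):

    import torch
    import torch.nn as nn

    class TinyClassifier(nn.Module):  # stand-in for the question's BertClassifier
        def __init__(self, num_labels=24):
            super().__init__()
            self.fc = nn.Linear(768, num_labels)

        def forward(self, x):
            return self.fc(x)

    model = TinyClassifier().eval()
    dummy = torch.randn(1, 768)

    # do_constant_folding=False skips the export-time folding pass; the
    # graph is slightly larger but the folding error cannot trigger
    torch.onnx.export(model, dummy, "classifier.onnx",
                      do_constant_folding=False, opset_version=13)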
3 votes • 1 answer

TypeError: not a string | parameters in AutoTokenizer.from_pretrained()

Goal: Amend this Notebook to work with the albert-base-v2 model. Kernel: conda_pytorch_p36. I did Restart & Run All, and refreshed the file view in the working directory. In order to evaluate and export this quantised model, I need to set up a…
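That TypeError usually means from_pretrained() received something other than a string or path, e.g. a config object or None; for albert-base-v2 the first argument should be the model id or a local directory. A minimal sketch:

    from transformers import AutoTokenizer

    # The first argument must be a model id string or a directory path
    tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
    print(tokenizer("sanity check", return_tensors="pt"))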
3 votes • 1 answer

HuggingFace AutoTokenizer | ValueError: Couldn't instantiate the backend tokenizer

Goal: Amend this Notebook to work with the albert-base-v2 model. The error occurs in Section 1.3. Kernel: conda_pytorch_p36. I did Restart & Run All, and refreshed the file view in the working directory. There are 3 listed ways this error can be caused. I'm not…
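For ALBERT, this error most often means the sentencepiece package is missing from the kernel's environment; installing it (and restarting the kernel) or falling back to the slow tokenizer are the usual fixes. A sketch:

    # pip install sentencepiece  # the commonly missing dependency for ALBERT
    from transformers import AutoTokenizer

    # use_fast=False sidesteps the fast (Rust) backend entirely
    tokenizer = AutoTokenizer.from_pretrained("albert-base-v2", use_fast=False)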
3 votes • 0 answers

Input shape disparity with Onnx inference

Trying to do inference with ONNX and getting the following: The model expects input shape: ['unk__215', 180, 180, 3]. The shape of the image is: (1, 180, 180, 3). The code I'm running is: import onnxruntime as nxrun; import numpy as np; from…
NominalSystems • 175 • 1 • 4 • 13
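'unk__215' is a symbolic (dynamic) batch dimension, so a leading 1 satisfies it; when this error appears, the actual mismatch is usually the element type (the model wants float32) rather than the shape. A sketch:

    import numpy as np
    import onnxruntime as nxrun
    from PIL import Image

    sess = nxrun.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    print(inp.name, inp.shape, inp.type)  # e.g. ['unk__215', 180, 180, 3] float32

    img = np.asarray(Image.open("img.jpg").resize((180, 180)), dtype=np.float32)
    batch = img[np.newaxis, ...]  # (1, 180, 180, 3): 1 fills the dynamic axis
    out = sess.run(None, {inp.name: batch})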
3 votes • 1 answer

ONNX model checker fails while ONNX runtime works fine when `tf.function` is used to decorate a member function with a loop

When a tensorflow model contains a tf.function-decorated function with a for loop in it, the tf->onnx conversion yields warnings: WARNING:tensorflow:From /Users/amit/Programs/lammps/kim/kliff/venv/lib/python3.7/site-packages/tf2onnx/tf_loader.py:706:…
ipcamit • 330 • 3 • 16
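onnx.checker validates strictly against the official operator sets, while onnxruntime also executes ops from vendor domains, so a model can fail the checker yet run fine; loop conversion in tf2onnx is a common source of such nodes. A sketch that lists which domains the exported graph actually uses:

    import onnx

    model = onnx.load("model.onnx")
    print(model.opset_import)  # every (domain, version) pair the model imports
    for node in model.graph.node:
        if node.domain not in ("", "ai.onnx"):
            # ops outside the default domain fail onnx.checker but may
            # still be implemented by onnxruntime
            print(node.domain, node.op_type, node.name)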
3 votes • 1 answer

Upgrade ONNX model from version 9 to 11

I'm working with an ONNX model that I need to quantize in order to reduce its size. For that I'm following the instructions in the official documentation: import onnx; from onnxruntime.quantization import quantize_dynamic, QuantType; model_fp32 =…
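The version converter bundled with onnx handles this upgrade directly, and quantize_dynamic accepts the upgraded file. A sketch:

    import onnx
    from onnx import version_converter
    from onnxruntime.quantization import quantize_dynamic, QuantType

    model = onnx.load("model_fp32.onnx")
    upgraded = version_converter.convert_version(model, 11)  # opset 9 -> 11
    onnx.save(upgraded, "model_fp32_op11.onnx")

    quantize_dynamic("model_fp32_op11.onnx", "model_int8.onnx",
                     weight_type=QuantType.QUInt8)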
3 votes • 0 answers

How to Slice ONNXRuntime Tensor?

Assume that a Microsoft.ML.OnnxRuntime.Tensors.Tensor variable has been created with dimensions [d1, d2, d3]. Is there a way to return a copy or view of a slice over certain dimensions? I wanted to do the equivalent of subset =…
premes • 363 • 2 • 8
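The DenseTensor<T> API ships no built-in slice operation, so the common C# workaround is copying the wanted elements out of the tensor's buffer; for comparison, onnxruntime's Python API returns NumPy arrays, where the intended subset is a single view expression. A sketch of the target semantics:

    import numpy as np

    t = np.zeros((4, 5, 6), dtype=np.float32)  # stand-in for a [d1, d2, d3] tensor
    subset = t[1:3, :, 2]  # rows 1..2 of d1, all of d2, index 2 of d3
    print(subset.shape)    # (2, 5) -- a view, not a copy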
3 votes • 2 answers

Unable to convert .h5 model to ONNX for inferencing through any means

I built a custom model in .h5 format from Matterport's MaskRCNN implementation. I managed to save the full model, and not the weights alone, using model.keras_model.save(), and assume it worked correctly. I need to convert this model to ONNX for inference in…
Caife • 415 • 1 • 5 • 16
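A sketch of the usual conversion path with tf2onnx (keras2onnx is unmaintained); Matterport's Mask R-CNN uses custom layers and multiple inputs, so this plain path may still need custom op handlers:

    import tensorflow as tf
    import tf2onnx

    model = tf.keras.models.load_model("mask_rcnn.h5", compile=False)
    spec = tuple(tf.TensorSpec(t.shape, t.dtype, name=t.name.split(":")[0])
                 for t in model.inputs)
    onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature=spec,
                                               opset=13)
    with open("mask_rcnn.onnx", "wb") as f:
        f.write(onnx_model.SerializeToString())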
3 votes • 1 answer

How to load an ONNX model in TensorFlow.js?

I'm creating a program using TensorFlow.js. It should receive an ONNX file and be able to load it with tf and make inferences. My problem is how to convert it from ONNX to tfjs. I would rather solve it using just JS (so I can't use…
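There is no maintained pure-JS ONNX loader for tfjs; the standard route is an offline two-step conversion (ONNX to a TensorFlow SavedModel with onnx-tf, then SavedModel to tfjs with tensorflowjs_converter), or skipping tfjs and running the .onnx directly in the browser with onnxruntime-web. A sketch of the conversion step:

    import onnx
    from onnx_tf.backend import prepare

    model = onnx.load("model.onnx")
    prepare(model).export_graph("saved_model")  # writes a TF SavedModel
    # then, from a shell:
    #   tensorflowjs_converter --input_format=tf_saved_model saved_model web_model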
3 votes • 1 answer

convert pytorch model with multiple networks to onnx

I am trying to convert a PyTorch model with multiple networks to ONNX, and have encountered some problems. The git repo: https://github.com/InterDigitalInc/HRFAE The Trainer class: class Trainer(nn.Module): def __init__(self, config): …
ZZ Shao • 83 • 1 • 1 • 9
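torch.onnx.export traces a single forward() call, so the standard trick is wrapping the trainer's sub-networks in one nn.Module; the attribute names below are hypothetical stand-ins for HRFAE's encoder/modulator/decoder. A sketch:

    import torch
    import torch.nn as nn

    class ExportWrapper(nn.Module):
        def __init__(self, encoder, modulator, decoder):  # hypothetical parts
            super().__init__()
            self.encoder = encoder
            self.modulator = modulator
            self.decoder = decoder

        def forward(self, img, age):
            feat = self.encoder(img)
            feat = self.modulator(feat, age)
            return self.decoder(feat)

    # wrapper = ExportWrapper(trainer.enc, trainer.mlp, trainer.dec)
    # torch.onnx.export(wrapper, (img, age), "hrfae.onnx", opset_version=13)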