Questions tagged [onnxruntime]

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

See the onnxruntime GitHub project.

292 questions
0 votes · 0 answers

Running code on two GPUs consumes more time, when each runs independently, than running on a single one

I am trying to run model inference at the best possible speed. While testing, I found that one inference takes 34 milliseconds on average when run on one GPU, and one inference takes 40 milliseconds on average when requests are sent to two GPUs,…
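A minimal sketch of the setup being described, assuming a placeholder model.onnx with a single input named "input": each InferenceSession can be pinned to one GPU through the CUDAExecutionProvider device_id option. Cross-device overhead (transfers, scheduling) is one plausible reason the two-GPU setup shows higher per-request latency.

    # Sketch: one InferenceSession per GPU via the device_id provider option.
    # "model.onnx" and the input name "input" are placeholders.
    import numpy as np
    import onnxruntime as ort

    sess0 = ort.InferenceSession("model.onnx",
                                 providers=[("CUDAExecutionProvider", {"device_id": 0})])
    sess1 = ort.InferenceSession("model.onnx",
                                 providers=[("CUDAExecutionProvider", {"device_id": 1})])

    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    out0 = sess0.run(None, {"input": x})  # runs on GPU 0
    out1 = sess1.run(None, {"input": x})  # runs on GPU 1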
0 votes · 1 answer

onnxruntime-web A tensor's dims must be a number array

I want to replicate this working MNIST inference example, which uses the deprecated onnxjs library, with the new onnxruntime, but I'm getting an error while creating a Tensor from the canvas image: const imgData = this.ctx.getImageData(0,…
Bilal • 3,191 • 4 • 21 • 49
0 votes · 1 answer

onnxruntime-node in a packaged Electron app

I'm using the onnxruntime-node package to do inferencing in my Electron app. I installed the npm package via 'yarn add', and everything works as expected in development. When I package the Electron app, Node isn't able to find the onnxruntime-node…
0 votes · 1 answer

Problems converting a Deeplearning4j array to an ONNX Tensor

I'm currently reading an image with Deeplearning4j, and trying to pass it to ONNX is giving me problems. INDArray ndArray = loader.asMatrix(mat3).permute(0,2,3,1); OnnxTensor tensor = OnnxTensor.createTensor(env, ndArray.data().asNioFloat(),…
IgnacioPL • 67 • 1 • 6
0 votes · 1 answer

ONNX model converted with tf2onnx runs on CPU only in Python

I'm trying to use the model from the repository (on Google Drive) with ONNX instead of TensorFlow. I converted it with: python3 -m tf2onnx.convert --graphdef mars-small128.pb --output mars-small128_nchw.onnx --inputs-as-nchw "images:0" --inputs…
Dmitry • 160 • 9
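A common cause here is that onnxruntime silently falls back to the CPU provider when the GPU build or the CUDA libraries are missing. A minimal check, assuming the converted model file from the question:

    # Sketch: confirm the CUDA provider is available and request it explicitly;
    # this requires the onnxruntime-gpu package rather than plain onnxruntime.
    import onnxruntime as ort

    print(ort.get_available_providers())  # should list 'CUDAExecutionProvider'

    sess = ort.InferenceSession(
        "mars-small128_nchw.onnx",
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    print(sess.get_providers())  # shows which providers were actually applied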
0 votes · 0 answers

Why do ONNX model results differ when inferred from Python and from C++?

I designed an ONNX model for the Iris dataset, where the model's input is an array of floats and its output is a label. While running inference on this ONNX model in C++ using 'Microsoft.ML.OnnxRuntime\1.13.1\runtimes', for the same set of inputs the output label differs with respect to…
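One way to narrow such mismatches down is to fix a known input and dump the raw outputs on the Python side, then diff them against the C++ run. A sketch, with the model path and sample values as placeholders:

    # Sketch: print raw outputs for a fixed Iris sample so they can be
    # compared value-for-value with the C++ result.
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("iris.onnx", providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name

    x = np.array([[5.1, 3.5, 1.4, 0.2]], dtype=np.float32)  # one Iris sample
    for meta, out in zip(sess.get_outputs(), sess.run(None, {input_name: x})):
        print(meta.name, out)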
0 votes · 0 answers

Difference in output between PyTorch and ONNX model

I converted a transformer model from PyTorch to ONNX format, and when I compared the outputs they did not match. I use the following script to check the output precision: output_check = np.allclose(model_emb.data.cpu().numpy(), onnx_model_emb,…
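For reference, the usual tolerance-based comparison looks like the sketch below; exact equality between PyTorch and ONNX outputs is not expected for deep models, so the rtol/atol choice matters. "model", "x", and "model.onnx" are placeholders:

    # Sketch: compare PyTorch and onnxruntime outputs with explicit tolerances.
    import numpy as np
    import torch
    import onnxruntime as ort

    model.eval()
    with torch.no_grad():
        torch_out = model(x).cpu().numpy()

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    onnx_out = sess.run(None, {sess.get_inputs()[0].name: x.cpu().numpy()})[0]

    print(np.allclose(torch_out, onnx_out, rtol=1e-3, atol=1e-5))
    print(np.max(np.abs(torch_out - onnx_out)))  # largest absolute deviation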
0 votes · 0 answers

How to bind an ONNX dynamic output in C++/WinRT using LearningModelBinding?

I have an ONNX model of Detectron2 whose outputs are not of a fixed size; they are dynamic. I was able to do inference in Python with onnxruntime: import onnxruntime # Initialize session and get prediction model_path =…
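For contrast with the WinRT binding problem, the Python-side behavior the question refers to is that onnxruntime resolves dynamic output shapes at run time. A sketch, with the model path as a placeholder:

    # Sketch: dynamic output dimensions appear as symbolic names or None
    # in the session's output metadata; no pre-allocated binding is needed.
    import onnxruntime as ort

    sess = ort.InferenceSession("detectron2.onnx", providers=["CPUExecutionProvider"])
    for out in sess.get_outputs():
        print(out.name, out.shape)  # e.g. ['num_boxes', 4]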
0 votes · 1 answer

How to export a PyTorch model to ONNX with a variable-length tensor loop?

I simplified my complex PyTorch model as shown below. import torch from torch import nn import onnx import onnxruntime import numpy as np class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.template =…
sunny • 213 • 1 • 2 • 9
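The tracing exporter unrolls Python loops at export time, so a data-dependent loop is usually preserved by scripting the module first. A toy sketch (not the asker's model), assuming the loop's trip count depends on a runtime shape:

    # Sketch: torch.jit.script keeps the loop, so it exports as an ONNX Loop
    # node instead of being unrolled for one fixed length.
    import torch
    from torch import nn

    class LoopModel(nn.Module):
        def forward(self, x):
            out = torch.zeros_like(x[0])
            for i in range(x.size(0)):  # trip count set by the runtime shape
                out = out + x[i]
            return out

    scripted = torch.jit.script(LoopModel())
    torch.onnx.export(
        scripted,
        (torch.randn(4, 3),),
        "loop.onnx",
        input_names=["x"],
        output_names=["y"],
        dynamic_axes={"x": {0: "steps"}},
        opset_version=13,
    )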
0 votes · 0 answers

TensorFlow saved_model removes signatures

When I export my model in saved_model format, the input and output names, i.e. the signature, get lost. When I save it in .h5 format, everything is fine. Later I want to load the saved model and convert it via…
Zeit • 13 • 5
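One hedged workaround is to attach an explicit serving signature when saving, so the input and output names survive the SavedModel round trip. "model" below stands for the asker's Keras model, and the input spec is an assumption:

    # Sketch: save with a named serving signature instead of relying on the
    # names Keras assigns during export.
    import tensorflow as tf

    @tf.function(input_signature=[tf.TensorSpec([None, 224, 224, 3],
                                                tf.float32, name="images")])
    def serve(images):
        return {"scores": model(images)}

    tf.saved_model.save(model, "export_dir",
                        signatures={"serving_default": serve})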
0 votes · 0 answers

ONNX convert from torch: input 2 is not found

I was trying to export the following to ONNX. I saw that the forward method takes rois, so I would expect rois to be part of my inputs in ONNX. From here: class SegmentationModel(nn.Module): def __init__(self, encoder='rtnet50', decoder='fcn',…
JR Wink • 107 • 7
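With the default tracing exporter, forward() arguments that do not influence the traced graph are dropped from the ONNX inputs, which would explain the missing rois. A sketch of passing and naming both inputs explicitly; the dummy shapes are assumptions:

    # Sketch: supply every runtime input in the export args and name it.
    import torch

    model = SegmentationModel()  # the class from the question
    dummy_image = torch.randn(1, 3, 512, 512)
    dummy_rois = torch.tensor([[0.0, 0.0, 0.0, 100.0, 100.0]])

    torch.onnx.export(
        model,
        (dummy_image, dummy_rois),      # both forward() arguments
        "segmentation.onnx",
        # rois appears as a graph input only if it actually affects the
        # traced output; otherwise tracing folds it away.
        input_names=["image", "rois"],
        opset_version=13,
    )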
0 votes · 0 answers

KeyError: 'pyspark.ml.regression.IsotonicRegressionModel' when trying to convert Pyspark model to ONNX format

I am converting a PySpark-trained model to ONNX format. More specifically, it's the pyspark.ml.regression.IsotonicRegressionModel. I get an error saying KeyError: 'pyspark.ml.regression.IsotonicRegressionModel'. Is there a way I can resolve this…
Gingerbread • 1,938 • 8 • 22 • 36
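That KeyError usually means the converter has no registered handler for the model class. As a hedged fallback, assuming no converter exists for IsotonicRegressionModel, the fitted curve can be exported by hand, since the model exposes its knots and fitted values:

    # Sketch: evaluate a fitted PySpark IsotonicRegressionModel outside Spark.
    # "model" is the fitted pyspark.ml.regression.IsotonicRegressionModel.
    import numpy as np

    boundaries = np.array(model.boundaries.toArray())    # knot positions
    predictions = np.array(model.predictions.toArray())  # values at the knots

    def isotonic_predict(x):
        # prediction is linear interpolation between the knots
        return np.interp(x, boundaries, predictions)

    print(isotonic_predict(np.array([0.5, 1.5])))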
0 votes · 0 answers

About ONNX use in JavaScript

I have an ONNX model and I want to use it in HTML and JavaScript, and I use ONNX.js to load it. However, there is a MaxPool layer in my model that has to use ai.onnx v10. Uncaught (in promise) TypeError: cannot resolve operator 'MaxPool' with opsets:…
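Since the JavaScript runtime cannot resolve MaxPool at that opset, one hedged option is to down-convert the model in Python with onnx's version converter before loading it in the browser; the target opset 8 below is illustrative only:

    # Sketch: rewrite the model to a lower opset that the JS runtime implements.
    import onnx
    from onnx import version_converter

    model = onnx.load("model.onnx")
    converted = version_converter.convert_version(model, 8)
    onnx.save(converted, "model_opset8.onnx")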
0 votes · 0 answers

Why does an ONNX model give a floating point exception on GPU and not on CPU?

I have an ONNX model created from a PyTorch-based model. When I run it on a black-box system using: output = onnxruntime.InferenceSession.run() # method to indicate what I use, with the InferenceSession created using CUDAExecutionProvider, I am seeing the…
Prasanjit Rath • 166 • 2 • 13
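A reasonable first diagnostic, sketched below with placeholder paths: raise the session's log verbosity and rerun on the CPU provider, so a crash that only occurs under CUDA can be traced to the last node logged:

    # Sketch: verbose logging plus a CPU control run to isolate a CUDA kernel.
    import onnxruntime as ort

    opts = ort.SessionOptions()
    opts.log_severity_level = 0  # 0 = verbose; logs nodes as they execute

    sess_gpu = ort.InferenceSession("model.onnx", sess_options=opts,
                                    providers=["CUDAExecutionProvider"])
    sess_cpu = ort.InferenceSession("model.onnx",
                                    providers=["CPUExecutionProvider"])
    # If only the CUDA session crashes, the last logged node is the suspect.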
0 votes · 0 answers

Converting a TensorFlow Hub saved .pb model into ONNX

Hello, I have a multilingual transformer model from TensorFlow Hub that I want to convert into an ONNX model: (MODEL) I have tried tf2onnx convert many times and wasn't successful. Model signature def: signature_def['__saved_model_init_op']:…
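For a hub SavedModel, one route is the tf2onnx Python API with a wrapped serving function; whether the hub ops (e.g., SentencePiece tokenization) convert cleanly depends on the model. The hub handle and input signature below are assumptions:

    # Sketch: convert a TensorFlow Hub model via tf2onnx.convert.from_function.
    import tensorflow as tf
    import tensorflow_hub as hub
    import tf2onnx

    model = hub.load("https://tfhub.dev/...")  # placeholder handle
    serve = tf.function(lambda text: model(text))

    onnx_model, _ = tf2onnx.convert.from_function(
        serve,
        input_signature=[tf.TensorSpec([None], tf.string, name="text")],
        opset=13,
        output_path="model.onnx",
    )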