
An ONNX model was supplied to me. I wrote code to run inference on it from a C++ 64-bit (x64) test program, and it works great. But I also need this same model to run in a C++ 32-bit (x86) program, and I cannot get it to run there. I can load the model, and I can query its internal names for inputs and outputs and their shapes, but when I try to RUN the model, it crashes.

A related question: I set the logging level when I create the environment, but I have no idea where the log output goes (if any is being generated at all).

My actual question, though, is the one in the title. When the system generating the ONNX model (PyTorch in this case) is configured, must you indicate whether you want an ONNX model that will run in 32 bits or 64 bits? Or should the same ONNX model work in both environments? ORT supplies DLLs and import libraries for both 32-bit and 64-bit targets, so clearly ONNX Runtime itself works in both environments -- but is a different ONNX model needed for each?

Tullhead

1 Answer


The same ONNX model file can be used for both x64 and x86: the ONNX format is architecture-independent, so no export-time setting in PyTorch is needed to target 32-bit versus 64-bit. Just make sure the ONNX Runtime DLL and import library you link against match your program's architecture (the x86 build of onnxruntime.dll for a 32-bit program).
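To illustrate, here is a minimal sketch of loading and preparing a session with the ONNX Runtime C++ API. The file name `model.onnx` and the logger ID `"demo"` are placeholders; the identical source compiles for x86 or x64, with only the linked ORT binaries differing. Note also that ORT's default logger writes messages (at or above the level you pass to `Ort::Env`) to the console/stderr; it does not create log files unless you install a custom logging function.

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
    // Messages at WARNING severity and above go to stderr by default;
    // no log file is written unless you supply your own logger.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");

    Ort::SessionOptions opts;

    // The same .onnx file loads unchanged in a 32-bit or 64-bit build.
    // On Windows the path is a wide string; "model.onnx" is a placeholder.
    Ort::Session session(env, L"model.onnx", opts);

    // ... build Ort::Value input tensors and call session.Run(...) ...
    return 0;
}
```

If the x64 build runs and the x86 build crashes inside `Run`, one common cause is a mismatched or stale 32-bit onnxruntime.dll being picked up from the DLL search path, so it is worth verifying exactly which DLL the 32-bit process loads.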

ZWang
  • 832
  • 5
  • 14