I have a trained ONNX model and would like to run inference with it in a different environment (C++ on Linux, CPU-only). I am looking for the most minimal implementation that allows this.
I do not have root privileges and can only use a conda environment. Is there a tool other than ONNX Runtime (ORT) that can be installed with minimal effort and used to run inference? Inference speed is not a major concern.
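
For reference, this is roughly the level of minimality I have in mind. The sketch below uses OpenCV's `dnn` module (`cv::dnn::readNetFromONNX`) as one possible candidate, since I believe OpenCV can be installed from conda-forge without root; `model.onnx` and the `1x3x224x224` input shape are placeholders for my actual model:

```cpp
// Minimal sketch of the kind of usage I'm hoping for, using OpenCV's dnn
// module as one candidate. "model.onnx" and the 1x3x224x224 input shape
// are placeholders for my actual model.
#include <opencv2/dnn.hpp>
#include <iostream>

int main() {
    // Load the ONNX graph directly; no separate conversion step.
    cv::dnn::Net net = cv::dnn::readNetFromONNX("model.onnx");
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);  // CPU-only
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

    // Dummy NCHW float input; in practice this would hold real data.
    int shape[] = {1, 3, 224, 224};
    cv::Mat input(4, shape, CV_32F, cv::Scalar(0.0f));

    net.setInput(input);
    cv::Mat output = net.forward();
    std::cout << "output elements: " << output.total() << "\n";
    return 0;
}
```

Assuming the conda package ships pkg-config metadata, I'd expect to build it with something like `g++ main.cpp $(pkg-config --cflags --libs opencv4)`. Is this (or something comparably lightweight) a reasonable approach, or is there a better-suited tool?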