I'm using PyTorch to train neural nets and export them to ONNX. I then use these models in a Vespa index, which loads the ONNX files through TensorRT. I need one-hot encoding for some features, but this is really hard to achieve within the Vespa framework.
Is it possible to embed the one-hot encoding of some given features inside my ONNX net (e.g. as an extra stage in front of the network itself)? If so, how should I achieve this starting from a PyTorch model?
I already noticed two things:
- the ONNX format includes a OneHot operator: see the ONNX doc
- PyTorch's built-in ONNX exporter does not support the OneHot operator: see the torch.onnx doc
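To make the intent concrete, here is a rough sketch of the kind of thing I'd like to end up with. All names (`OneHotWrapper`, the dense/categorical input split, `num_categories`) are placeholders, and the one-hot step is built from a comparison against `torch.arange` rather than `torch.nn.functional.one_hot`, since, as noted above, the exporter does not emit the OneHot operator:

```python
import torch


class OneHotWrapper(torch.nn.Module):
    """Hypothetical wrapper: one-hot encodes a categorical feature and
    concatenates it with the dense features before calling the trained net."""

    def __init__(self, net: torch.nn.Module, num_categories: int):
        super().__init__()
        self.net = net
        # Category ids used to build the encoding via broadcasting.
        self.register_buffer("categories", torch.arange(num_categories))

    def forward(self, dense: torch.Tensor, category: torch.Tensor) -> torch.Tensor:
        # category: (batch,) integer ids -> (batch, num_categories) one-hot,
        # expressed with Equal + Cast instead of the OneHot operator.
        one_hot = (category.unsqueeze(-1) == self.categories).float()
        return self.net(torch.cat([dense, one_hot], dim=-1))
```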
EDIT 2021/03/11: Here is my workflow:
- training learning-to-rank models via PyTorch
- exporting them as ONNX (a sketch of the export call is below)
- importing these ONNX models into my Vespa index so that query results are ranked by them. Under the hood, Vespa uses TensorRT for inference (so I rely on Vespa's ONNX model evaluation)
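For completeness, the export step would look roughly like this (reusing the `OneHotWrapper` sketch above; the stand-in linear ranker, dimensions, file name and opset version are all placeholders):

```python
import torch

# Hypothetical export of the wrapped model; a small linear layer stands in
# for the trained learning-to-rank model so the snippet runs end to end.
ranker = torch.nn.Linear(8 + 16, 1)            # 8 dense + 16 one-hot inputs
model = OneHotWrapper(ranker, num_categories=16)
model.eval()

dummy_dense = torch.randn(1, 8)                # example dense features
dummy_category = torch.tensor([3])             # example categorical id

torch.onnx.export(
    model,
    (dummy_dense, dummy_category),
    "ranker_with_onehot.onnx",
    input_names=["dense", "category"],
    output_names=["score"],
    dynamic_axes={"dense": {0: "batch"}, "category": {0: "batch"}},
    opset_version=11,
)
```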