I'd like to convert the TensorFlow Lite hosted models, mainly the MobileNets, into ONNX format. Specifically, I want to try the quantized versions of those hosted models and run them with ONNX Runtime.
What would be the right procedure for converting those models so they can be consumed by ONNX Runtime?
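For context, here's what I was planning to try, based on my (possibly wrong) understanding that tf2onnx can take a `.tflite` file directly via its `--tflite` option; the file names below are just placeholders for the quantized MobileNet V1 file from the hosted-models page, and I'm not sure the quantized ops convert cleanly:

```python
# Conversion step I'd try from the shell (assuming tf2onnx's --tflite flag
# handles quantized models; file names are placeholders):
#
#   pip install tf2onnx onnxruntime
#   python -m tf2onnx.convert --tflite mobilenet_v1_1.0_224_quant.tflite \
#       --output mobilenet_v1_1.0_224_quant.onnx --opset 13

import numpy as np
import onnxruntime as ort

# Load the converted model and run a dummy inference as a sanity check.
sess = ort.InferenceSession("mobilenet_v1_1.0_224_quant.onnx")
inp = sess.get_inputs()[0]
print(inp.name, inp.shape, inp.type)  # expecting something like [1, 224, 224, 3], uint8

# My assumption from the TFLite hosted-model docs: the quantized MobileNet
# takes uint8 input in NHWC layout.
dummy = np.random.randint(0, 256, size=[1, 224, 224, 3], dtype=np.uint8)
outputs = sess.run(None, {inp.name: dummy})
print(outputs[0].shape)
```

Is this roughly the right approach, or is there a recommended path for the quantized models in particular?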