I would like to convert an integer-quantized tflite model into a frozen graph (.pb) in TensorFlow. I have read through and tried many solutions on StackOverflow, and none of them worked. Specifically, toco did not work: it rejects the conversion with "output_format cannot be TENSORFLOW_GRAPHDEF".
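For example, this is roughly the invocation I tried (flag names as I recall them from the TF 1.x toco CLI, and the file names are placeholders), which is rejected no matter how I arrange the flags:

```
toco \
  --input_file=model_int8.tflite \
  --input_format=TFLITE \
  --output_file=model.pb \
  --output_format=TENSORFLOW_GRAPHDEF
```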
My ultimate goal is to get a quantized ONNX model through tf2onnx, yet tf2onnx does not support tflite as input (only saved_model, checkpoint and graph_def are supported). However, quantizing the trained model with TFLiteConverter only returns a tflite file. This is where the problem arises.
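For context, this is essentially how I produce the int8 model (a minimal sketch; the saved_model path and input shape are placeholders for my actual model):

```python
import numpy as np
import tensorflow as tf

# Load the trained float32 model; "saved_model_dir" is a placeholder path.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Calibration data for full-integer quantization; the shape is a placeholder.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

# convert() only returns the .tflite flatbuffer bytes, never a graph_def.
tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```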
The ideal flow is essentially this: tf model in float32 -> tflite model in int8 -> graph_def -> onnx model. I am stuck at the second arrow.
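For reference, once I have a graph_def I expect the third arrow to be covered by the tf2onnx CLI, something like the following (the input/output tensor names are placeholders for my model's actual ones):

```
python -m tf2onnx.convert \
  --graphdef model.pb \
  --output model.onnx \
  --inputs input:0 \
  --outputs output:0
```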