import tensorflow as tf

frozen_graph_file = "path/to/frozen_graph.pb"      # path to frozen graph (.pb file)
tflite_model_quant_file = "path/to/model_quant.tflite"  # where to write the converted model
input_arrays = ["normalized_input_image_tensor"]
output_arrays = ["TFLite_Detection_PostProcess",
                 "TFLite_Detection_PostProcess:1",
                 "TFLite_Detection_PostProcess:2",
                 "TFLite_Detection_PostProcess:3"]
input_shapes = {"normalized_input_image_tensor": [1, 300, 300, 3]}

converter = tf.lite.TFLiteConverter.from_frozen_graph(frozen_graph_file,
                                                      input_arrays=input_arrays,
                                                      output_arrays=output_arrays,
                                                      input_shapes=input_shapes)
converter.allow_custom_ops = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
with open(tflite_model_quant_file, "wb") as tflite_file:
    tflite_file.write(tflite_quant_model)

When quantizing a model, we usually feed it some calibration data to identify the range of the activations and hence determine the scale and zero point. This is done per tensor for tensor-wise quantization. How are the quantized values obtained for the object detection bounding-box coordinates? Do they follow the same scheme? In TensorFlow, custom ops are provided for operations that cannot be quantized in the conventional way. Where can I find their detailed implementation, especially that of TFLite_Detection_PostProcess?
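For context, the tensor-wise calibration described above is normally driven through the converter's representative_dataset hook. The sketch below assumes a hypothetical load_calibration_images() helper (here it just yields dummy arrays standing in for real preprocessed images); the converter runs the float model on these samples to record per-tensor activation ranges, from which it derives the scale and zero point of the affine mapping real_value = scale * (quantized_value - zero_point).

import numpy as np
import tensorflow as tf

def load_calibration_images():
    # Hypothetical helper: in practice this would yield a few hundred
    # preprocessed images shaped [1, 300, 300, 3], normalized the same
    # way as the training data. Random data is used here only as a stand-in.
    for _ in range(100):
        yield np.random.rand(1, 300, 300, 3).astype(np.float32)

def representative_data_gen():
    # Each yielded list contains one value per model input. The converter
    # records the observed min/max of every activation tensor while running
    # these samples and turns that range into a per-tensor scale/zero point.
    for image in load_calibration_images():
        yield [image]

# Attach the calibration generator before calling converter.convert().
converter.representative_dataset = representative_data_gen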

1 Answer


The implementation for TFLite_Detection_PostProcess is in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/detection_postprocess.cc

Regardless of quantization, the output of TFLite_Detection_PostProcess is always float, so I don't think you need to worry about it here.
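To illustrate, here is a rough sketch (the model path is a placeholder and the input is dummy data) of reading those outputs back from the converted model with tf.lite.Interpreter. The four TFLite_Detection_PostProcess outputs are conventionally the boxes, class indices, scores, and number of detections, and they come back as float32 even when the rest of the model is quantized.

import numpy as np
import tensorflow as tf

# Placeholder path to the converted model produced above.
interpreter = tf.lite.Interpreter(model_path="path/to/model_quant.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input standing in for a preprocessed [1, 300, 300, 3] image.
image = np.random.rand(1, 300, 300, 3).astype(input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()

# The post-processing op emits float tensors regardless of quantization;
# the order below matches the four outputs listed during conversion.
boxes   = interpreter.get_tensor(output_details[0]["index"])  # [1, num_detections, 4]
classes = interpreter.get_tensor(output_details[1]["index"])  # [1, num_detections]
scores  = interpreter.get_tensor(output_details[2]["index"])  # [1, num_detections]
count   = interpreter.get_tensor(output_details[3]["index"])  # [1]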

Thaink