
I know one is probably trained with quantization-aware training and quantized while the other is not. Is there any difference between their checkpoints? Both checkpoints are the same size. I want to train ssd_mobilenet_v1 on my own dataset with quantization-aware training for the Coral Edge TPU. When I use the checkpoints of ssd_mobilenet_v1_quantized_coco, the program gives me an error, but if I use the checkpoints of ssd_mobilenet_v1_coco, the training starts successfully, although it is pretty slow because of this section in the pipeline config:

    graph_rewriter {
      quantization {
        delay: 0
        weight_bits: 8
        activation_bits: 8
      }
    }
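
One way to see whether the two checkpoints actually differ is to list the variables they contain; a checkpoint produced with quantization-aware training normally includes extra min/max variables inserted by the FakeQuant rewrite, even if the file sizes look similar. A minimal sketch (the checkpoint path is a placeholder):

    import tensorflow as tf

    # Placeholder path: point this at the downloaded model directory
    ckpt = "ssd_mobilenet_v1_quantized_coco/model.ckpt"

    # Print every variable stored in the checkpoint along with its shape
    for name, shape in tf.train.list_variables(ckpt):
        print(name, shape)

    # A quantization-aware checkpoint typically contains extra entries such as
    # ".../weights_quant/min" and ".../act_quant/max" that a plain float
    # checkpoint does not have.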

MrKhan
  • If you don't want quantization (looks like it), then you don't need to use the quantized version. The checkpoints will be different, since one is being trained with quantization-aware training while the other is not. – Sachin Joglekar Nov 08 '19 at 19:15

1 Answer


Quantization is done post-training; the quantized model stores its weights at lower precision (integer instead of float), so it is not meant to be used for training. For more details on quantization, see:
https://www.tensorflow.org/lite/performance/post_training_quantization
https://medium.com/tensorflow/tensorflow-model-optimization-toolkit-post-training-integer-quantization-b4964a1ea9ba
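
For reference, the first link describes post-training quantization through the TFLite converter. A minimal sketch of full-integer quantization (the kind the Edge TPU compiler expects) might look like the following; saved_model_dir, the output filename, and the representative dataset generator are placeholders:

    import numpy as np
    import tensorflow as tf

    # Placeholder: directory containing an exported SavedModel
    saved_model_dir = "exported_model/saved_model"

    def representative_data_gen():
        # Placeholder calibration data; in practice, yield real preprocessed
        # input images so the converter can estimate activation ranges.
        for _ in range(100):
            yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data_gen
    # Force full-integer quantization, as required by the Edge TPU compiler
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    tflite_model = converter.convert()
    with open("model_quant.tflite", "wb") as f:
        f.write(tflite_model)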

Suman