
I am working on object detection software. Basically, I am using the TensorFlow Object Detection API in Python with MobileNetV1, and I have trained the model on my own dataset.

The frozen_inference_graph.pb file resulting from training on my dataset is about 22 MB.

I tried to convert it to TFLite with quantization, but the result is still about 21.2 MB.

Is it normal that both files are 20+ MB? I have read from different sources that quantized MobileNet models are around 5 MB. Is it because I trained it on my custom dataset with new objects? And why does quantizing it not reduce the size (up to 4 times smaller)?
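
For context, my rough expectation (assuming the file size is dominated by the weights, which dynamic-range quantization stores as int8 instead of float32) is roughly a 4x reduction:

    # Back-of-the-envelope estimate: float32 weights take 4 bytes each and int8
    # weights take 1 byte, so quantizing the weights should shrink the file ~4x.
    frozen_graph_mb = 22
    expected_quantized_mb = frozen_graph_mb / 4
    print(expected_quantized_mb)  # ~5.5 MB, close to the ~5 MB often quoted for MobileNet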

Thank you for your help

  • Could you update your question with the exact command (or Python code) used for converting your model? It's difficult to see what might have gone wrong without seeing your actual command. Thanks. – yyoon Jun 24 '20 at 01:49
  • Thank you for your answer, the code I use for converting is exactly this one: http://www.noelshack.com/2020-27-1-1593418183-sans-titree.png – rfourdinier Jun 29 '20 at 08:10
  • Wow, my problem is solved. I just had to replace "optimizations =" with "converter.optimizations =", because optimizations was never actually applied to the converter in my code (see the sketch below). – rfourdinier Jun 29 '20 at 08:34
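
For reference, here is a minimal sketch of a frozen-graph conversion that actually applies dynamic-range quantization, using the TF 1.x-style TFLiteConverter API. The input/output tensor names and input shape below are assumptions: they depend on how the graph was exported (e.g. with export_tflite_ssd_graph.py) and may differ for your model.

    import tensorflow as tf

    # Minimal sketch of a frozen-graph -> TFLite conversion with dynamic-range
    # quantization (TF 1.x-style API). The tensor names and input shape are
    # placeholders and must match the exported graph.
    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file="frozen_inference_graph.pb",
        input_arrays=["normalized_input_image_tensor"],    # assumed input name
        output_arrays=["TFLite_Detection_PostProcess"],    # assumed output name
        input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]},
    )
    converter.allow_custom_ops = True  # SSD post-processing is a custom op

    # This is the line that was missing its "converter." prefix: without it the
    # weights stay float32 and the .tflite file is about the same size as the .pb.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    tflite_model = converter.convert()
    with open("detect_quantized.tflite", "wb") as f:
        f.write(tflite_model)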

0 Answers