
I quantized the inception-resnet-v2 model following https://www.tensorflow.org/performance/quantization#how_can_you_quantize_your_models. The frozen graph (the input to quantization) is 224.6 MB and the quantized graph is 58.6 MB. I ran an accuracy test on a certain dataset: the frozen graph achieves 97.4% accuracy, while the quantized graph gets 0%.

Is there a different way to quantize the model for inception-resnet versions? Or is quantization not supported at all for inception-resnet models?

Namitha

1 Answer


I think they transitioned from quantize_graph to graph_transforms. Try using this:

https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms

And what did you use for the input nodes/output nodes when testing?

Sungsu Lim
  • I tried graph_transforms as well. Somehow, for the resnet model, I couldn't get a successful quantized graph using graph_transforms either. Below are the links for your reference: https://stackoverflow.com/questions/44492936/graph-transform-gives-error-in-tensorflow https://github.com/tensorflow/tensorflow/issues/10739 Input nodes: I tried with InputImage:0 and also with InputImage. Output nodes: InceptionResnetV2/Logits/Predictions – Namitha Jul 12 '17 at 13:34
  • @Namitha I got it working for inception_resnet_v2. Did you build from source and run bazel build? But the problem is it's running slower. – Sungsu Lim Jul 13 '17 at 15:06
  • Yes, I did that. I built from source and then ran bazel build. When did you try this? I'm not sure whether some recent change in the tensorflow source code is causing my problem. I tried in mid-June and couldn't get it to work. If I don't give quantize_nodes, it works, but the accuracy is 0%: it makes completely random predictions that are totally incorrect. – Namitha Jul 14 '17 at 08:38
  • I did it last week. My guess is that you are using the wrong input/output nodes? For me, the trick was to add an input image tensor when freezing the graph, and then quantize that graph. Btw, I also got a similar error to yours in stackoverflow.com/questions/44492936/… github.com/tensorflow/tensorflow/issues/10739. I got "No OP named QuantizedBilinearAdd", but it went away after building again from source. – Sungsu Lim Jul 17 '17 at 00:32
  • And I made my own python script to load the quantized graph and test it. – Sungsu Lim Jul 17 '17 at 00:36
  • I gave the input image tensor while freezing the graph. Like you said, I too had to build from source. I was able to load the resulting graph, but I got an error when running inference on it, as mentioned in the tensorflow issue. – Namitha Jul 17 '17 at 13:20