
First, I downloaded a quantized MobileNet model. It is contained in Mobilenet_v1_1.0_224. Then I did the following:

bazel-bin/tensorflow/contrib/lite/toco/toco \
> --input_files=Sample/mobilenet_v1_1.0_224/quantized_graph.pb \
> --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
> --output_file=Sample/mobilenet_v1_1.0_224/quantized_graph.tflite --inference_type=QUANTIZED_UINT8 \
> --input_shape=1,224,224,3 \
> --input_array=input \
> --output_array=MobilenetV1/Predictions/Reshape_1 \
> --mean_value=128 \
> --std_value=127

The following is the summary of the graph:

bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=Sample/mobilenet_v1_1.0_224/quantized_graph.pb
Found 1 possible inputs: (name=input, type=float(1), shape=[1,224,224,3]) 
No variables spotted.
Found 1 possible outputs: (name=MobilenetV1/Predictions/Reshape_1, op=Reshape) 
Found 4227041 (4.23M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 91 Const, 27 Add, 27 Relu6, 15 Conv2D, 13 DepthwiseConv2dNative, 13 Mul, 10 Dequantize, 2 Reshape, 1 Identity, 1 Placeholder, 1 BiasAdd, 1 AvgPool, 1 Softmax, 1 Squeeze
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=Sample/mobilenet_v1_1.0_224/quantized_graph.pb --show_flops --input_layer=input --input_layer_type=float --input_layer_shape=1,224,224,3 --output_layer=MobilenetV1/Predictions/Reshape_1

When I ran the conversion, I got the following error:

2018-03-01 23:12:03.353786: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.354513: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.355177: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.355556: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.355921: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.356281: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.356632: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.357540: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.358776: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.360448: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1172] Converting unsupported operation: Dequantize
2018-03-01 23:12:03.366319: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 140 operators, 232 arrays (0 quantized)
2018-03-01 23:12:03.371405: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 140 operators, 232 arrays (0 quantized)
2018-03-01 23:12:03.374916: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 63 operators, 152 arrays (1 quantized)
2018-03-01 23:12:03.376325: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 63 operators, 152 arrays (1 quantized)
2018-03-01 23:12:03.377492: F tensorflow/contrib/lite/toco/tooling_util.cc:1272] Array MobilenetV1/MobilenetV1/Conv2d_0/Relu6, which is an input to the DepthwiseConv operator producing the output array MobilenetV1/MobilenetV1/Conv2d_1_depthwise/Relu6, is lacking min/max data, which is necessary for quantization. Either target a non-quantized output format, or change the input graph to contain min/max information, or pass --default_ranges_min= and --default_ranges_max= if you do not care about the accuracy of results.
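For reference, the fatal message itself names one workaround: passing `--default_ranges_min=` and `--default_ranges_max=` if accuracy of results does not matter. A sketch reusing the paths from the command above; the 0–6 range is my assumption (chosen to match the Relu6 activations listed in the graph summary), not a value from the original post:

```shell
# Hypothetical workaround from the error message: supply default
# min/max ranges for arrays lacking quantization data.
# 0..6 is an assumed range based on the Relu6 ops; accuracy may suffer.
bazel-bin/tensorflow/contrib/lite/toco/toco \
  --input_files=Sample/mobilenet_v1_1.0_224/quantized_graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
  --output_file=Sample/mobilenet_v1_1.0_224/quantized_graph.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=MobilenetV1/Predictions/Reshape_1 \
  --mean_value=128 \
  --std_value=127 \
  --default_ranges_min=0 \
  --default_ranges_max=6
```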

Thanks for any help

Liu Hantao

1 Answer


I think you may be pointing to an old TensorFlow quantized mobilenet model.

We have updated quantized mobilenet models available here. The specific link for your depth multiplier of 1.0 and image size of 224 is this.

These tar files also include the already-converted TFLite FlatBuffer model.

I hope that helps!

suharshs
  • Thank you! I wonder what values should be passed into the TensorFlow Lite converter? Is there any difference from the demonstration command in my question? – Liu Hantao Mar 28 '18 at 01:34
  • Yes, you need --mean_value=127.5 and --std_value=127.5. This is a function of the preprocessing used for the images when training the model. If you don't want to mess with conversion, you can use the already-converted FlatBuffer. Thanks! – suharshs Mar 29 '18 at 01:57
  • Hello Suharshs, I am trying to do post training quantization with Mobilenet V1, but facing some issues, can you please have a look here- https://stackoverflow.com/questions/57869149/post-training-quantization-for-mobilenet-v1-not-working?noredirect=1#comment102198662_57869149 – MMH Sep 18 '19 at 09:52
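Putting the answer and comments together, the adjusted invocation might look like the sketch below. The path assumes the updated model tarball has been extracted over the same Sample/ directory used in the question (that location is hypothetical); the only substantive change from the original command is the 127.5 mean/std values from the accepted answer's comment:

```shell
# Sketch: convert the updated quantized MobileNet using the
# preprocessing values given in the comments (mean/std = 127.5).
# The model path is an assumption; adjust to where the new tarball
# was extracted.
bazel-bin/tensorflow/contrib/lite/toco/toco \
  --input_files=Sample/mobilenet_v1_1.0_224/quantized_graph.pb \
  --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
  --output_file=Sample/mobilenet_v1_1.0_224/quantized_graph.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=MobilenetV1/Predictions/Reshape_1 \
  --mean_value=127.5 \
  --std_value=127.5
```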