
There are quantized MobileNet v1 models available at https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md

I see floating-point scaling values associated with the weights and biases in the model, but it isn't evident how they should be used to scale the operations.

The gemmlowp quantization documentation describes scaling values associated with the input, the weights, and the downscale of the operation's accumulator.

Should the bias scaling value be used alone for down-scaling the accumulator, or is the weight scaling value required?

In short, I'm trying to determine how the two provided scaling values should be used. Thanks.
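For what it's worth, here is a sketch of how the two scales relate in the gemmlowp/TFLite scheme, as I currently understand it. The numeric values are made-up examples, and the variable names are illustrative, not from the TFLite source:

```python
# Sketch of how TFLite combines per-tensor scales for a quantized conv
# (gemmlowp quantization scheme; all values here are illustrative).

input_scale = 0.0078125    # example: read from the input tensor
weight_scale = 0.02        # example: read from the weight tensor
output_scale = 0.125       # example: read from the output tensor

# Biases are quantized to int32 with scale = input_scale * weight_scale,
# so they can be added directly onto the int32 accumulator.
bias_scale = input_scale * weight_scale

# The accumulator is then rescaled to the output's quantization with one
# real-valued multiplier, which the kernels later convert to an integer
# multiplier plus a shift.
real_multiplier = input_scale * weight_scale / output_scale

print(bias_scale)        # 0.00015625
print(real_multiplier)   # 0.00125
```

So, if this reading is right, the bias scale is not used for the downscale at all; it is only expected to equal input_scale * weight_scale, and the downscale comes from the input, weight, and output scales combined.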

Jay Norwood
  • The TensorFlow depthwiseconv_uint8.h has a call to MultiplyByQuantizedMultiplier, which takes an integer multiplier and shift. I'm assuming these are related to the float bias scaling constant in the tflite file, but it is not evident where (or how) the conversion to these values was done. https://github.com/tensorflow/tensorflow/blob/r1.10/tensorflow/contrib/lite/kernels/internal/reference/depthwiseconv_uint8.h – Jay Norwood Sep 12 '18 at 19:50
  • GetQuantizedConvolutionMultipler returns a double value that is used to calculate the integer multiplier and shift for the downscale. However, it uses the bias scale only as a sanity check; the calculation itself uses the input, output, and weight scales. The model browser doesn't show input and output scale values, so are these initialized at runtime? https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/kernels/kernel_util.cc – Jay Norwood Sep 12 '18 at 21:53
  • It appears the input/output tensor scale values were just not being displayed by the Netron viewer. They fixed this yesterday. Very nice browser, btw. So, I think I've managed to resolve this issue with the info above. The code links show how TFLite uses the float scale values to create the parameters needed for the integer-only operations, and with the Netron update you should be able to see the per-layer scale values needed for the operations. – Jay Norwood Sep 13 '18 at 15:04
  • I am trying to figure out what to do with those parameters for the Mobilenetv2 model. Please, see my question [here](https://stackoverflow.com/questions/59118407/calculation-operations-with-the-parameters-of-a-tflite-quantized-model) – Nazar Nov 30 '19 at 21:23
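To tie the comments above together: the conversion from the double multiplier to an integer multiplier plus shift is done by decomposing the double into a normalized significand and a power-of-two exponent. Below is a simplified Python sketch of that idea; the function names mirror the TFLite routines mentioned above, but this is not the TFLite source, and it omits the saturating/overflow handling the real kernels perform:

```python
import math

def quantize_multiplier(real_multiplier):
    """Convert a real downscale multiplier into a Q31 fixed-point
    multiplier plus a power-of-two shift (sketch of the idea behind
    TFLite's conversion, not the actual implementation)."""
    if real_multiplier == 0.0:
        return 0, 0
    # frexp returns significand in [0.5, 1) and exponent such that
    # real_multiplier == significand * 2**shift.
    significand, shift = math.frexp(real_multiplier)
    q_fixed = int(round(significand * (1 << 31)))
    if q_fixed == (1 << 31):  # rounding carried out of the significand
        q_fixed //= 2
        shift += 1
    return q_fixed, shift

def multiply_by_quantized_multiplier(acc, quantized_multiplier, shift):
    """Apply the integer downscale to an int32 accumulator: multiply by
    the Q31 multiplier, then do a rounding right shift (simplified)."""
    total_shift = 31 - shift
    rounding = 1 << (total_shift - 1)
    return (acc * quantized_multiplier + rounding) >> total_shift

# Example: real multiplier 0.00125 from scales 0.0078125 * 0.02 / 0.125.
m, s = quantize_multiplier(0.00125)
print(m, s)                                      # 1374389535 -9
print(multiply_by_quantized_multiplier(1000, m, s))  # 1 (~ 1000 * 0.00125)
```

The negative shift indicates a right shift: the real multiplier is reconstructed as quantized_multiplier / 2**31 * 2**shift.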

0 Answers