
The question is about running inference with a TFLite model converted from a standard Keras/TensorFlow MobileNetV2 model.

tf version: 2.2.0

  1. The model was trained using (0, 1) normalization, as shown in the documentation/example: here
  2. After conversion to TFLite (the non-quantized/non-optimized version), the Android sample uses (-1, 1) preprocessing, which can be found here in the Android documentation and also here in the Python documentation (see the sketch after this list).
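For reference, the two schemes differ only in the mean/std applied to each channel. A minimal sketch in Java (pixelValue here is a hypothetical packed-ARGB int, of the kind the Android sample reads from a Bitmap):

    public class NormalizationDemo {
        public static void main(String[] args) {
            int pixelValue = 0xFF6699CC;           // hypothetical packed-ARGB pixel
            int r = (pixelValue >> 16) & 0xFF;     // red channel, 0..255

            // (0, 1) normalization, matching the Keras training pipeline:
            float zeroToOne = r / 255.0f;

            // (-1, 1) normalization, as used by the Android sample:
            float minusOneToOne = (r - 127.5f) / 127.5f;

            System.out.println(zeroToOne + " vs " + minusOneToOne);
        }
    }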

Why is there this difference in the inference pipeline? Can someone explain the correct preprocessing steps for both the quantized and non-quantized (floating point) TFLite models when the original model was trained with (0, 1) normalization?

Sanjeev

1 Answer


Different models may use different preprocessing settings. If you're confident that the original model was trained with (0, 1) preprocessing, simply modify the Android example code you found:

https://github.com/tensorflow/examples/blob/40e3ac5b5c17ac75352b99747b8532272204365f/lite/codelabs/flower_classification/android/finish/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L28
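That line is where the float classifier defines the mean/std applied to each pixel. A sketch of the change, assuming the constants in that revision follow the sample's usual (-1, 1) scaling (verify against the linked file):

    // The sample ships with (-1, 1) scaling:
    //   private static final float IMAGE_MEAN = 127.5f;
    //   private static final float IMAGE_STD = 127.5f;

    // For a model trained with (0, 1) normalization, change them to:
    private static final float IMAGE_MEAN = 0.0f;
    private static final float IMAGE_STD = 255.0f;

The preprocessing then computes (pixel - 0) / 255 = pixel / 255, which reproduces the (0, 1) normalization used during training.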

For quantized models, if you notice a similar normalization step, change it accordingly. Sometimes the preprocessing for a quantized model is nothing at all, because the author folds the normalization step into the quantization step (the two can be equivalent to a no-op when combined).
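A minimal sketch of why the two steps can cancel out, assuming the converter assigned scale = 1/255 and zero_point = 0 to the uint8 input tensor (an assumption; inspect your model's input tensor quantization parameters to confirm):

    public class QuantNoOpDemo {
        public static void main(String[] args) {
            // Assumed input quantization parameters:
            // real_value = scale * (quantized_value - zero_point)
            float scale = 1.0f / 255.0f;
            int zeroPoint = 0;

            int rawPixel = 200;                    // uint8 pixel from the camera
            float normalized = rawPixel / 255.0f;  // (0, 1) normalization
            int quantized = Math.round(normalized / scale) + zeroPoint;

            // Prints true: normalize-then-quantize returns the raw pixel,
            // so the combined preprocessing is a no-op and raw uint8 pixels
            // can be fed to the model directly.
            System.out.println(quantized == rawPixel);
        }
    }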

Xunkai