This question is about running inference with a TFLite model converted from the standard Keras/TensorFlow MobileNetV2 model.
tf version: 2.2.0
- The model was trained with 0-1 input normalization, as shown in the documentation/example: here
- After conversion to TFLite (non-quantized/optimized version), the Android sample preprocesses inputs to the (-1, 1) range, which can be seen here in the Android documentation, and also here in the Python documentation. A sketch of the float pipeline I would expect follows this list.
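For context, this is the float-model inference I would expect to match training; a minimal sketch, assuming a placeholder model path, a 224x224 input, and a random stand-in image (the samples' (-1, 1) variant is noted in a comment):

```python
import numpy as np
import tensorflow as tf

# Load the non-quantized (float) TFLite model; the path is a placeholder.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v2_float.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Preprocess as during training: scale pixels to [0, 1].
# The Android/Python samples instead use (-1, 1): img / 127.5 - 1.0
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # stand-in image
x = (img.astype(np.float32) / 255.0)[np.newaxis, ...]

interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
preds = interpreter.get_tensor(output_details[0]["index"])
```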
Why does the inference pipeline differ from the training pipeline here? Could someone clarify the correct preprocessing steps for both the quantized and the non-quantized (floating-point) TFLite models when the underlying model was trained with 0-1 normalization? My current attempt for the quantized case is sketched below.
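For the quantized model, my current assumption is that the [0, 1] floats should be mapped through the input tensor's quantization parameters before being fed in; a sketch under that assumption (model path and input size are placeholders, and I am not sure this is the intended pipeline):

```python
import numpy as np
import tensorflow as tf

# Load the quantized TFLite model; the path is a placeholder.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v2_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # stand-in image
x = (img.astype(np.float32) / 255.0)[np.newaxis, ...]           # training-time [0, 1] scaling

# Quantize the floats via the input tensor's (scale, zero_point).
scale, zero_point = inp["quantization"]
xq = np.clip(np.round(x / scale + zero_point), 0, 255).astype(np.uint8)
interpreter.set_tensor(inp["index"], xq)
interpreter.invoke()
```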