I'm quite confused about the normalization process when using the Object Detection API.
I'm using the SSD MobileNet v2 320x320 from the model zoo. In the pipeline config used for training I don't specify any additional preprocessing steps beyond what is already defined by default.
Inference with the model restored from the checkpoint files works fine. However, TFLite inference only seems to work if I normalize the image before feeding it to the net. I use the following line for this:
image = (image-127.5)/127.5
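For context, this is how that line behaves on its own (a minimal standalone sketch with NumPy, not my full inference script): it maps uint8 pixel values from [0, 255] into [-1.0, 1.0].

```python
import numpy as np

def normalize(image: np.ndarray) -> np.ndarray:
    # Map uint8 pixel values [0, 255] to [-1.0, 1.0],
    # i.e. the (image - 127.5) / 127.5 line from above.
    return (image.astype(np.float32) - 127.5) / 127.5

# Corner values of the uint8 range:
pixels = np.array([0, 128, 255], dtype=np.uint8)
print(normalize(pixels))  # 0 -> -1.0, 255 -> 1.0, 128 -> just above 0
```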
But I don't understand why this preprocessing helps when I didn't use it during training. The documentation also says that normalization only needs to be applied at inference time if it was used during training.
What am I missing? Is there preprocessing applied during training by default that isn't listed in the pipeline config? If so, I couldn't find it.