Before training a ResNet50 model, I preprocessed each input image with:
img = image.load_img(os.path.join(TRAIN, img), target_size=[224, 224])
img = image.img_to_array(img)
img = np.expand_dims(img, axis=0)
img = preprocess_input(img)
and then saved the resulting NumPy array of images.
I found that without preprocess_input the saved array is about 1.5 GB, while with preprocess_input it is about 7 GB.
Is that normal behavior, or am I missing something?
Why does zero-centering by mean pixel drastically increase the input size?
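For scale, here is a quick back-of-the-envelope check of what dtype alone does to the byte count (the image count below is hypothetical, chosen only to land near the sizes I observed):

```python
import numpy as np

# Hypothetical batch: 10,000 images of 224x224x3.
n_images = 10_000
shape = (n_images, 224, 224, 3)

uint8_bytes = np.prod(shape) * np.dtype(np.uint8).itemsize
float32_bytes = np.prod(shape) * np.dtype(np.float32).itemsize

print(f"uint8:   {uint8_bytes / 1e9:.2f} GB")    # 1.51 GB
print(f"float32: {float32_bytes / 1e9:.2f} GB")  # 6.02 GB
```

So a uint8 array versus a float32 array of the same shape is a 4x difference, in the same ballpark as the 1.5 GB vs. 7 GB I am seeing.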
This is how zero-centering by mean pixel is defined in Keras:
x = x[..., ::-1]
x[..., 0] -= 103.939
x[..., 1] -= 116.779
x[..., 2] -= 123.68
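That transform only flips the channel order and subtracts constants in place, so on an array that is already float it cannot change the in-memory size, which a quick sketch with a random array confirms:

```python
import numpy as np

# Random stand-in for one img_to_array output (float32, like Keras produces).
x = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype(np.float32)
before = x.nbytes

# RGB -> BGR, then subtract the ImageNet channel means, as in the snippet above.
x = x[..., ::-1]
x[..., 0] -= 103.939
x[..., 1] -= 116.779
x[..., 2] -= 123.68

print(x.nbytes == before)  # True: same dtype, same byte count
```

So the size jump presumably has to come from somewhere else, such as the dtype of the array being saved, not from the mean subtraction itself.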