
I see that ImageDataGenerator allows me to specify different styles of data normalization, e.g. featurewise_center, samplewise_center, etc.

I see from the examples that if I specify one of these options, then I need to call the fit method on the generator so that it can compute statistics such as the mean image over the training data.

(X_train, y_train), (X_test, y_test) = cifar10.load_data()
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True)

# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(X_train)

# fits the model on batches with real-time data augmentation:
model.fit_generator(datagen.flow(X_train, Y_train, batch_size=32),
                samples_per_epoch=len(X_train), nb_epoch=nb_epoch)

My question is: how does prediction work if I have specified data normalization during training? I can't see how the framework would even let me pass knowledge of the training set's mean/standard deviation along to predict so that I could normalize my test data myself, and I also don't see where in the training code this information is stored.

Are the image statistics needed for normalization stored in the model so that they can be used during prediction?

Alex Taylor

4 Answers


Yes, this is a real downside of Keras' ImageDataGenerator: you cannot supply the standardization statistics yourself. But there is an easy way to work around the issue.

Assuming you have a function normalize(x) which normalizes an image batch (remember that the generator yields not a single image but an array of images, i.e. a batch with shape (nr_of_examples_in_batch, image_dims ...)), you can build your own normalizing generator:

def gen_with_norm(gen, normalize):
    # Wrap an existing generator and normalize each batch as it is yielded
    for x, y in gen:
        yield normalize(x), y

Then simply use gen_with_norm(datagen.flow(X_train, Y_train, batch_size=32), normalize) wherever you would have used datagen.flow(X_train, Y_train, batch_size=32). Note that gen_with_norm needs the generator returned by flow, not the flow method itself.

Moreover, you can recover the mean and std computed by the fit method from the corresponding attributes of datagen (e.g. datagen.mean and datagen.std).
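For instance, here is a minimal sketch of a normalize built from those attributes and applied at prediction time (it assumes featurewise_center and featurewise_std_normalization were enabled when datagen was fitted, as in the question):

from keras import backend as K

def normalize(x):
    # Apply the training-set statistics stored by datagen.fit
    return (x - datagen.mean) / (datagen.std + K.epsilon())

# Normalize the test data with the same statistics before predicting
predictions = model.predict(normalize(X_test.astype('float32')), batch_size=32)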

Marcin Możejko
  • @MarcinMożejko Suppose I define train_datagen from an ImageDataGenerator with featurewise_center=True (plus augmentations) and train_gen as train_datagen.flow_from_directory. I do the same for test_datagen and test_gen. Could you tell me how to fit your method into this context? – Patrick Feb 18 '18 at 13:28

Use the generator's standardize method on each element. Here is a complete example for CIFAR-10:

#!/usr/bin/env python

import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D

# input image dimensions
img_rows, img_cols, img_channels = 32, 32, 3
num_classes = 10

batch_size = 32
epochs = 1

# The data, shuffled and split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()

model.add(Conv2D(32, (3, 3), padding='same', activation='relu',
                 input_shape=x_train.shape[1:]))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

datagen = ImageDataGenerator(zca_whitening=True)

# Compute principal components required for ZCA
datagen.fit(x_train)

# Apply normalization (ZCA and others)
print(x_test.shape)
for i in range(len(x_test)):
    # this is what you are looking for
    x_test[i] = datagen.standardize(x_test[i])
print(x_test.shape)

# Fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(x_train, y_train,
                                 batch_size=batch_size),
                    steps_per_epoch=x_train.shape[0] // batch_size,
                    epochs=epochs,
                    validation_data=(x_test, y_test))
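
At prediction time the same standardize call can be reused. Here is a small sketch, using a fresh copy of the CIFAR-10 test split as a stand-in for unseen images (the x_test above was already standardized in place, so we reload it; in practice, load your own data and scale it to [0, 1] first, exactly as above):

# Load fresh images and preprocess them identically to the training data
(_, _), (x_new, _) = cifar10.load_data()
x_new = x_new.astype('float32') / 255
for i in range(len(x_new)):
    x_new[i] = datagen.standardize(x_new[i])  # uses the fitted ZCA statistics
predictions = model.predict(x_new, batch_size=batch_size)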
Martin Thoma
  • Very handy, shame it's not in the Keras documentation. – Tom Walker Dec 01 '17 at 08:16
  • Doesn't the divide by 255 provide standardization? Considering the input data pixel values range from 0 to 255. – timedacorn Dec 26 '18 at 10:12
  • I tried to do this, but my code gets stuck forever at `datagen.fit(x_train)`; I think there is some dimensionality issue, see my post here https://stackoverflow.com/questions/59848525/keras-zca-whitening-gets-stuck-with-train-datagen-fit. Do you know what's wrong? – NeStack Jan 22 '20 at 12:00
  • Passing rescale=1./255 as a parameter to ImageDataGenerator already normalizes the data. – huseyin39 May 11 '20 at 12:02

I had the same issue and solved it using the same functionality that ImageDataGenerator uses internally:

from keras import backend as K
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator

# Load the CIFAR-10 dataset
(trainX, trainY), (testX, testY) = cifar10.load_data()
# Convert to float so the float statistics can be subtracted in place
testX = testX.astype('float32')

generator = ImageDataGenerator(featurewise_center=True,
                               featurewise_std_normalization=True)

# Calculate statistics on the train dataset
generator.fit(trainX)
# Apply featurewise_center to the test data with statistics from the train data
testX -= generator.mean
# Apply featurewise_std_normalization to the test data with statistics from the train data
testX /= (generator.std + K.epsilon())

# Do your regular fitting
model.fit_generator(..., validation_data=(testX, testY), ...)

Note that this is only practical if you have a reasonably small dataset, like CIFAR-10, that fits in memory. Otherwise the solution proposed by Marcin is more reasonable.

Alexander Pacha

I use the datagen.fit function itself:

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True)
train_datagen.fit(train_data)

test_datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True)
# Deliberately fit on the *training* data so that the test generator
# picks up the training set's statistics
test_datagen.fit(train_data)

This way, test_datagen is fitted on the training dataset and learns that dataset's statistics, which it then uses to normalize the test data.
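
A sketch of how the two generators might then be used for training and evaluation (train_data, train_labels, test_data, and test_labels are assumed to be NumPy arrays you have already loaded):

# Both generators now standardize with the training-set statistics
train_gen = train_datagen.flow(train_data, train_labels, batch_size=32)
test_gen = test_datagen.flow(test_data, test_labels, batch_size=32, shuffle=False)

model.fit_generator(train_gen,
                    steps_per_epoch=len(train_data) // 32,
                    epochs=5,
                    validation_data=test_gen,
                    validation_steps=len(test_data) // 32)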

Hari