Edit: I know that enlarging the validation set will increase the time each epoch takes. But in my case, the total epoch time goes up by a factor of 3 or more! That's where the problem is.
As mentioned, I'm trying to train a model on Google Colab with Keras (TensorFlow).
Here's the data information:
Train Data: shape (3000, 227, 227, 1), dtype float32
Train Labels: shape (3000, 2), dtype float32
Validation Data: shape (200, 227, 227, 1), dtype float32
Validation Labels: shape (200, 2), dtype float32
I train my model using the following command:
history = model.fit(
    x=self.standardize(self.train_data),
    y=self.train_labels,
    batch_size=1024,
    epochs=base_epochs,
    verbose=2,
    callbacks=cp_callback,
    validation_data=(self.standardize(self.val_data), self.val_labels),
)
With 200 images in the validation set, each epoch takes only 1–2 s.
Now I tried a larger validation set with 3000 images. In this situation, each epoch takes an unbelievable 8–10 s! Since training on the 3000 images (forward + backward pass) fits into the original 1–2 s, while validating on 3000 images (forward pass only) adds another 6–8 s, this would mean the forward pass alone is slower than the forward and backward passes combined, which doesn't make any sense. Does anyone know where the problem is? If more details are required, I'll post more of the code.
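In case it helps, here is a minimal, self-contained sketch that mirrors the comparison using random stand-in data. The CNN below is only a placeholder with the right input/output shapes (it is not my real model), and the standardize step and checkpoint callback are left out; the two fit calls correspond to the fast and slow cases described above.

import numpy as np
import tensorflow as tf

# Random stand-in data with the same shapes/dtypes as my real arrays.
x_train = np.random.rand(3000, 227, 227, 1).astype("float32")
y_train = np.random.rand(3000, 2).astype("float32")
x_val_small, y_val_small = x_train[:200], y_train[:200]   # 200-image validation set
x_val_large, y_val_large = x_train, y_train               # 3000-image validation set

# Placeholder model -- only the input/output shapes match my setup.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(227, 227, 1)),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")

# Fast case: small (200-image) validation set.
model.fit(x_train, y_train, batch_size=1024, epochs=3, verbose=2,
          validation_data=(x_val_small, y_val_small))

# Slow case: the only change is the larger (3000-image) validation set.
model.fit(x_train, y_train, batch_size=1024, epochs=3, verbose=2,
          validation_data=(x_val_large, y_val_large))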