I'm trying to preprocess my data by resizing the training set images to 224 × 224 with 3 channels so I can use them as input to a VGG16 model, and I'm running out of RAM. How do I resolve this?
import tensorflow as tf

new_size = (224, 224)
new_x_train = []
for image in x_train:                                  # iterating yields each image directly
    image = tf.constant(image)                         # convert the 2-D grayscale array to a tensor
    image = tf.expand_dims(image, axis=-1)             # add a channel dimension -> (H, W, 1)
    image = tf.concat([image, image, image], axis=-1)  # replicate to 3 channels -> (H, W, 3)
    image = tf.image.resize(image, new_size)           # resize to (224, 224, 3)
    new_x_train.append(image)
new_x_train = tf.stack(new_x_train)                    # stack into one (N, 224, 224, 3) tensor
This works for a single image. However, when I try to do the same thing for all 60,000 images in a loop, I run out of RAM.
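If I'm calculating correctly, materializing the whole resized set needs roughly 60,000 × 224 × 224 × 3 × 4 bytes ≈ 36 GB as float32, which presumably explains the OOM. One direction I've been considering is to resize on the fly with tf.data instead of building the full tensor up front. This is just a rough sketch, assuming x_train and a corresponding y_train are in-memory NumPy arrays of grayscale images (the names, batch size, and labels are placeholders, not my actual code):

import tensorflow as tf

def to_vgg_input(image, label):
    # image arrives as a single 2-D grayscale array from the dataset
    image = tf.expand_dims(image, axis=-1)      # (H, W) -> (H, W, 1)
    image = tf.image.grayscale_to_rgb(image)    # (H, W, 1) -> (H, W, 3)
    image = tf.image.resize(image, (224, 224))  # resize to the VGG16 input size
    return image, label

# Build a pipeline that resizes each batch lazily, so only a batch at a
# time is ever held in memory rather than all 60,000 resized images.
train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))  # y_train assumed here
            .map(to_vgg_input, num_parallel_calls=tf.data.AUTOTUNE)
            .batch(32)                                               # batch size is arbitrary
            .prefetch(tf.data.AUTOTUNE))

# model.fit(train_ds, epochs=...) would then consume the batches directly.

Would something like this be the right way to go, or is there a better approach?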