I'm new to TensorFlow and I'm trying to train a model, for which I am using an ImageDataGenerator:
    data_gen = tf.keras.preprocessing.image.ImageDataGenerator(
        rotation_range=40,
        width_shift_range=0.2,
        height_shift_range=0.2,
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True,
        fill_mode='nearest')
    BATCH_SIZE = 32

    train_gen = data_gen.flow_from_dataframe(
        mapping_train,
        x_col='path',
        y_col='label',
        target_size=(256, 256),
        batch_size=BATCH_SIZE
    )
mapping_train is a dataframe containing the image paths and class labels (the path and label columns used above).
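For context, it is structured roughly like this (the paths and label names below are placeholders, not my real data):

    import pandas as pd

    # Hypothetical illustration of the dataframe layout; real paths and labels differ.
    mapping_train = pd.DataFrame({
        'path':  ['images/cat_001.jpg', 'images/dog_001.jpg'],
        'label': ['cat', 'dog']
    })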
I'm applying transfer learning using VGG16 and calling the fit_generator method as:
    model_1.fit_generator(train_gen,
                          steps_per_epoch=mapping_train.shape[0] // BATCH_SIZE,
                          epochs=3)
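For completeness, model_1 is a fairly standard VGG16 transfer-learning setup along these lines (a simplified sketch; my exact head layers differ):

    import tensorflow as tf

    # Simplified sketch of the transfer-learning model; the exact head differs.
    num_classes = mapping_train['label'].nunique()

    base = tf.keras.applications.VGG16(weights='imagenet',
                                       include_top=False,
                                       input_shape=(256, 256, 3))
    base.trainable = False  # freeze the convolutional base

    model_1 = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation='relu'),
        tf.keras.layers.Dense(num_classes, activation='softmax')
    ])
    model_1.compile(optimizer='adam',
                    loss='categorical_crossentropy',
                    metrics=['accuracy'])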
However, my GPU utilization stays below 2% at all times, while VRAM usage is almost full (4 GB) with a batch size of 32.
TensorFlow does recognize my GPU: calling tf.config.list_physical_devices('GPU') shows my GTX 1050 Ti in the console.
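In other words, the check I ran is simply:

    import tensorflow as tf

    # Verify that TensorFlow can see the GPU.
    print(tf.config.list_physical_devices('GPU'))
    # e.g. [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]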
I have tried:

- Increasing batch_size: caused an OOM error
- Tried this answer by using use_multiprocessing=True and workers=4 (as in the sketch after this list)
- Tried this answer by casting my labels into np.uint8
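The multiprocessing attempt, concretely, was roughly:

    # Same training call as above, with multiprocessing enabled as that answer suggested.
    model_1.fit_generator(train_gen,
                          steps_per_epoch=mapping_train.shape[0] // BATCH_SIZE,
                          epochs=3,
                          workers=4,
                          use_multiprocessing=True)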
None of them worked. Would really appreciate some help with this.