I am trying to build a CNN and want to divide my input images into non-overlapping patches and then use them for training.
However, I am unsure how to combine the extraction of patches with the code below.
I believe a function like tf.image.extract_patches should do the trick, but I am unsure how to include it in the pipeline. It is important for me to use flow_from_directory, as I have organised my dataset accordingly.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(64, 64),
    class_mode='categorical',
    batch_size=64)
I thought of using extract_patches_2d from scikit-learn, but it has two issues:
- It gives random, overlapping patches.
- I would need to resave all the images and reorganize my dataset again (the same issue as with tf.image.extract_patches, unless it is included in the pipeline).
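
To illustrate what I have in mind, here is a rough, untested sketch of wrapping the generator so each batch is split into non-overlapping patches on the fly. The patch size (16) and the label repetition are my own assumptions, and patch_generator is a helper name I made up:

```python
import numpy as np
import tensorflow as tf

def patch_generator(generator, patch_size=16):
    """Wrap a Keras generator, yielding non-overlapping patches per batch."""
    for images, labels in generator:
        patches = tf.image.extract_patches(
            images=images,
            sizes=[1, patch_size, patch_size, 1],
            strides=[1, patch_size, patch_size, 1],  # stride == size -> no overlap
            rates=[1, 1, 1, 1],
            padding='VALID')
        # patches per image, e.g. (64/16) * (64/16) = 16
        n = patches.shape[1] * patches.shape[2]
        channels = images.shape[-1]
        patches = tf.reshape(patches, (-1, patch_size, patch_size, channels))
        # repeat each label once per patch so inputs and targets stay aligned
        labels = np.repeat(labels, n, axis=0)
        yield patches.numpy(), labels
```

The idea would then be to pass patch_generator(train_generator) to model.fit instead of train_generator, so the on-disk layout stays untouched.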