I am trying to use the Keras ImageDataGenerator to train my model on a large stereo dataset.
For each scene I have two RGB images. I need to split them into channels and concatenate the results so that the model input is 6 one-channel images, i.e. of shape (6, 224, 224, 1). With small datasets this is easy: I can load both sub-datasets into memory and do the concatenation on ndarrays. With ImageDataGenerator it is not, because I have to make sure it takes matching batches from the two sub-datasets, and I need to perform the concatenation before passing the input to my model.
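For a single scene, the split-and-stack step described above could look like this sketch (the arrays are hypothetical stand-ins for two loaded RGB images):

```python
import numpy as np

# Stand-ins for one loaded stereo pair of RGB images.
left = np.random.rand(224, 224, 3)
right = np.random.rand(224, 224, 3)

# Split each RGB image into 3 one-channel images and stack all 6,
# giving the (6, 224, 224, 1) model input described above.
channels = [img[..., c:c + 1] for img in (left, right) for c in range(3)]
sample = np.stack(channels, axis=0)
print(sample.shape)  # (6, 224, 224, 1)
```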
Inspired by this post, I tried this code:
input_imgen = ImageDataGenerator()

def generate_generator_multiple(generator, dir1, dir2, batch_size):
    # Two iterators over the left/right image directories; shuffle=False
    # keeps the two streams aligned on the same scenes.
    genX1 = generator.flow_from_directory(directory=dir1,
                                          color_mode="rgb",
                                          batch_size=batch_size,
                                          class_mode="categorical",
                                          shuffle=False)
    genX2 = generator.flow_from_directory(directory=dir2,
                                          color_mode="rgb",
                                          batch_size=batch_size,
                                          class_mode="categorical",
                                          shuffle=False)
    while True:
        X1i = genX1.next()  # (images, labels) from the first directory
        X2i = genX2.next()  # (images, labels) from the second directory
        yield [X1i[0], X2i[0]], X2i[1]
In my case, how can I process the two batches X1i[0] and X2i[0], whose images have size (224, 224, 3), into X1Sum, whose samples have size (6, 224, 224, 1), so that instead of yield [X1i[0], X2i[0]], X2i[1] the generator does yield X1Sum, X2i[1]?
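For reference, one way this per-batch transformation could look, as a NumPy sketch (combine_batch is a name I introduce here; x1 and x2 stand for the X1i[0] and X2i[0] batches, and the axis order assumes the 6 one-channel images go on axis 1, as stated above):

```python
import numpy as np

def combine_batch(x1, x2):
    # x1, x2: (batch, 224, 224, 3) image batches from the two generators.
    # Concatenate the 3+3 channels, move them to axis 1, then restore a
    # trailing single-channel axis: result is (batch, 6, 224, 224, 1).
    stacked = np.concatenate([x1, x2], axis=-1)    # (batch, 224, 224, 6)
    return np.moveaxis(stacked, -1, 1)[..., None]  # (batch, 6, 224, 224, 1)

x1 = np.zeros((4, 224, 224, 3))
x2 = np.zeros((4, 224, 224, 3))
print(combine_batch(x1, x2).shape)  # (4, 6, 224, 224, 1)
```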