I am trying to create a tower like the one described here (How to have parallel convolutional layers in keras?) with VGG16 and VGG19. I read images from a directory using flow_from_directory for the train and validation sets. I plan to load VGG16 and VGG19 with pre-trained ImageNet weights and then merge their outputs as inputs to further layers. The problem is figuring out how to feed the same input to multiple models. I found the generator function below on a forum; it feeds multiple image streams to a multi-input network, but that seems like overkill for my case.
    def generate_generator_multiple(generator, dir1, dir2, batch_size, img_height, img_width):
        genX1 = generator.flow_from_directory(dir1,
                                              target_size=(img_height, img_width),
                                              class_mode='categorical',
                                              batch_size=batch_size,
                                              shuffle=False,
                                              seed=7)
        genX2 = generator.flow_from_directory(dir2,
                                              target_size=(img_height, img_width),
                                              class_mode='categorical',
                                              batch_size=batch_size,
                                              shuffle=False,
                                              seed=7)
        while True:
            X1i = next(genX1)
            X2i = next(genX2)
            yield [X1i[0], X2i[0]], X2i[1]  # yield both image batches and their shared label
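If the merged model really does expect two input tensors, a lighter alternative I am considering is to wrap a single flow_from_directory iterator and duplicate each batch (a sketch; duplicate_input_generator is my own name, not a Keras function):

    def duplicate_input_generator(generator):
        """Wrap one batch iterator and feed the same images to both branches."""
        while True:
            x, y = next(generator)  # one batch of images and labels
            yield [x, x], y         # same images for both inputs, one label

This reads each image from disk once instead of twice and avoids having to keep two iterators in sync, but it still feels like a workaround rather than the right design.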
Is there a simpler way to feed the same input to multiple networks, instead of providing multiple input streams? I have tried the code from https://datascience.stackexchange.com/questions/30423/how-to-pass-common-inputs-to-a-merged-model-in-keras?answertab=active#tab-top, but it gives me a "Graph disconnected" error.
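For reference, this is roughly the single-input architecture I am aiming for. A minimal sketch, with weights=None instead of 'imagenet' just to keep the example light, and an illustrative 10-class head; calling each application model on the shared tensor (instead of stitching their internal layers together) is my understanding of how to keep the graph connected:

    from tensorflow.keras import Input, Model
    from tensorflow.keras.applications import VGG16, VGG19
    from tensorflow.keras.layers import Concatenate, Dense, GlobalAveragePooling2D

    # One input tensor shared by both towers
    inp = Input(shape=(224, 224, 3))

    # weights=None keeps the sketch light; in practice I load weights='imagenet'
    vgg16 = VGG16(include_top=False, weights=None)
    vgg19 = VGG19(include_top=False, weights=None)

    # Call each model on the same tensor; each behaves as a single layer,
    # so their internal layer names (block1_conv1, ...) do not clash
    x16 = GlobalAveragePooling2D()(vgg16(inp))
    x19 = GlobalAveragePooling2D()(vgg19(inp))
    merged = Concatenate()([x16, x19])
    out = Dense(10, activation='softmax')(merged)  # 10 classes, illustrative

    model = Model(inputs=inp, outputs=out)

With a single Input like this, one plain flow_from_directory generator should be enough, which is why the two-stream generator above feels unnecessary.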