I am currently looking for a way to combine the outputs of multiple models into one model. I need to create a CNN that does classification.
The image is separated into sections (as seen by the colors), and each section is given as input to a particular model (1, 2, 3, 4). The structure of each model is the same, but each section goes to a separate model to ensure that the same weights are not applied across the whole image - my attempt to avoid full weight sharing and keep the weight sharing local. Each model then performs convolution and max pooling and generates some output that has to be fed into a dense layer, which takes the outputs from the prior models (models 1, 2, 3, 4) and performs the classification.
My question: is it possible to create models 1, 2, 3, 4, connect them to the fully connected layer, and train all the models jointly given the input sections and the output class - without having to define the outputs of the convolution and pooling layers in Keras?
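For context, here is a minimal sketch of what I mean, using the Keras functional API. The section shape (16x16x3), the number of classes (10), and the layer sizes are placeholder assumptions; the point is that the four branches share an architecture but not weights, and everything trains end-to-end with a single `fit` call:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed/hypothetical dimensions for illustration only.
SECTION_SHAPE = (16, 16, 3)  # shape of one image section
NUM_CLASSES = 10

def make_branch(name):
    # Same architecture each time, but instantiated separately,
    # so each branch gets its own (local) weights.
    inp = keras.Input(shape=SECTION_SHAPE, name=f"{name}_input")
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    return inp, x

# Build the four section branches (models 1..4).
inputs, branches = zip(*(make_branch(f"section{i}") for i in range(1, 5)))

# Merge the branch outputs and classify with dense layers.
merged = layers.concatenate(list(branches))
h = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(NUM_CLASSES, activation="softmax")(h)

model = keras.Model(inputs=list(inputs), outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Training takes one array per section; all branches and the
# dense head are updated together by backpropagation.
xs = [np.random.rand(8, *SECTION_SHAPE).astype("float32") for _ in range(4)]
ys = np.random.randint(0, NUM_CLASSES, size=(8,))
model.fit(xs, ys, epochs=1, verbose=0)
```

Note that nothing here requires specifying the intermediate output shapes by hand; Keras infers them from the `Input` shape as the layers are chained.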