I have a trained model for image demosaicing, and I want to make it smaller by removing filters from over-spec'd layers.
For example, I want to take the following model (extract):
conv1 = Conv2D(32, self.kernel_size, activation='relu', padding='same')(chnl4_input)
conv2 = Conv2D(32, self.kernel_size, strides=(2, 2), activation='relu', padding='same')(conv1)
conv5 = Conv2D(64, self.kernel_size, activation='relu', padding='same')(conv2)
conv6 = Conv2D(64, self.kernel_size, activation='relu', padding='same')(conv5)
up1 = concatenate([UpSampling2D(size=(2, 2))(conv6), conv1], axis=-1)
conv7 = Conv2D(64, self.kernel_size, activation='relu', padding='same')(up1)
and I want to change the conv5 and conv6 layers to this:
conv1 = Conv2D(32, self.kernel_size, activation='relu', padding='same')(chnl4_input)
conv2 = Conv2D(32, self.kernel_size, strides=(2, 2), activation='relu', padding='same')(conv1)
conv5 = Conv2D(32, self.kernel_size, activation='relu', padding='same')(conv2)
conv6 = Conv2D(32, self.kernel_size, activation='relu', padding='same')(conv5)
up1 = concatenate([UpSampling2D(size=(2, 2))(conv6), conv1], axis=-1)
conv7 = Conv2D(64, self.kernel_size, activation='relu', padding='same')(up1)
I've looked around, but haven't seen any glaringly obvious way to do this. I found this example of a similar problem, but the solution specifically requires that the new layers have the same number of filters as the old layers, which is no good for me.
If anyone has any idea how I could do this, I'd really appreciate it.
[EDIT]: To clarify, I have an existing model, say 'model A'. I want to create a new model, 'model B'. The two models are identical except for the layers mentioned above. I'm looking for a way to initialise model B with model A's weights for all layers except the ones that have changed. The new model would then be trained to convergence as usual.
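Something like the following sketch is the kind of thing I have in mind (assuming both models are already built and their layers line up in the same order): copy weights layer by layer with get_weights/set_weights, but only where the weight shapes match, so the shrunk layers (and any layer whose input channel count changed as a result, e.g. conv7 after the concatenate) keep their fresh initialisation. The function name and the shape-matching rule are my own invention, not an established API.

```python
def transfer_matching_weights(model_a, model_b):
    """Copy weights from model_a into model_b for every layer pair
    whose weight tensors all have identical shapes; layers whose
    shapes differ (the resized ones) are left freshly initialised."""
    for old_layer, new_layer in zip(model_a.layers, model_b.layers):
        old_w = old_layer.get_weights()
        new_w = new_layer.get_weights()
        # Only transfer when every weight tensor matches in shape,
        # e.g. kernel (h, w, in_ch, out_ch) and bias (out_ch,).
        if len(old_w) == len(new_w) and all(
            o.shape == n.shape for o, n in zip(old_w, new_w)
        ):
            new_layer.set_weights(old_w)
```

In my example, conv1 and conv2 would be copied over, while conv5, conv6, and conv7 would fail the shape check (conv7 because its input goes from 96 to 64 channels after the concatenate) and stay randomly initialised, ready for retraining.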