I want to combine a pretrained VGG16 model with a special input block, which is an input layer and a convolutional layer. The goal is to use a pre-trained RGB VGG16 imagenet model on grayscale images:
from keras.applications.vgg16 import VGG16
from keras.layers.convolutional import Conv2D
from keras.layers import Input
from keras.models import Model

img_height = 299
img_width = 299

def input_block(img_height=299, img_width=299):
    # Single-channel grayscale input, mapped to 3 channels so that the
    # RGB-trained VGG16 filters can be applied on top of it.
    input_shape = (img_height, img_width, 1)
    img_input = Input(shape=input_shape, name='grayscale_input_layer')
    x = Conv2D(3, (3, 3), padding='same', name='grayscale_RGB_layer')(img_input)
    return x

pretrained_model = VGG16(weights='imagenet', include_top=False,
                         input_tensor=input_block(img_height, img_width))
When I set the weight initialization of VGG16() to None, the model builds correctly, with the following desired structure:
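For completeness, this is the call that produces the summary below; it is the same input_block as above, only with weights=None (random_init_model is just a throwaway name):

# Same architecture, but randomly initialized instead of ImageNet-pretrained.
random_init_model = VGG16(weights=None, include_top=False,
                          input_tensor=input_block(img_height, img_width))
print(random_init_model.summary())  # summary() prints the table and returns None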
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
grayscale_input_layer (Input (None, 299, 299, 1)       0
_________________________________________________________________
grayscale_RGB_layer (Conv2D) (None, 299, 299, 3)       30
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 299, 299, 64)      1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 299, 299, 64)      36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 149, 149, 64)      0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 149, 149, 128)     73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 149, 149, 128)     147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 74, 74, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 74, 74, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 74, 74, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 74, 74, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 37, 37, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 37, 37, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 37, 37, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 37, 37, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 18, 18, 512)       0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 18, 18, 512)       2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 18, 18, 512)       2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 18, 18, 512)       2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 9, 9, 512)         0
=================================================================
Total params: 14,714,718
Trainable params: 14,714,718
Non-trainable params: 0
_________________________________________________________________
None
However, when I set the weight initialization to 'imagenet', I get the following error:
ValueError: You are trying to load a weight file containing 13 layers into a model with 14 layers.
This error makes sense, since I have added two layers (an input layer and a convolutional layer) in front of the VGG16 body instead of only its usual single input layer.
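To make the mismatch concrete, counting only the layers that actually carry weights reproduces the 13 vs. 14 from the error message (built here with weights=None so both models construct; stock_vgg and gray_vgg are just throwaway names):

# Count weight-bearing layers; Input and MaxPooling2D layers return an empty
# list from get_weights() and are therefore skipped.
stock_vgg = VGG16(weights=None, include_top=False)
gray_vgg = VGG16(weights=None, include_top=False,
                 input_tensor=input_block(img_height, img_width))
print(sum(1 for layer in stock_vgg.layers if layer.get_weights()))  # 13 conv layers
print(sum(1 for layer in gray_vgg.layers if layer.get_weights()))   # 14: 13 conv layers + grayscale_RGB_layer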
As a workaround, I have tried the following:
def input_block_model(img_height=299, img_width=299):
    input_shape = (img_height, img_width, 1)
    img_input = Input(shape=input_shape, name='grayscale_input_layer')
    x = Conv2D(3, (3, 3), padding='same', name='grayscale_RGB_layer')(img_input)
    model = Model(img_input, x, name='input_block_model')
    return model

input_model = input_block_model(299, 299)
pretrained_model = VGG16(weights='imagenet', include_top=False)
combined_model = Model(input_model.input,
                       pretrained_model(input_model.output))
print(combined_model.summary())
Then, the model structure is:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
grayscale_input_layer (Input (None, 299, 299, 1)       0
_________________________________________________________________
grayscale_RGB_layer (Conv2D) (None, 299, 299, 3)       30
_________________________________________________________________
vgg16 (Model)                multiple                  14714688
=================================================================
Total params: 14,714,718
Trainable params: 14,714,718
Non-trainable params: 0
_________________________________________________________________
None
The disadvantage of this structure is that I cannot set properties of the layers inside the VGG16 model. For example, I want to freeze certain VGG16 layers, but I cannot reach them via combined_model.layers, as shown below.
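Iterating over combined_model.layers only yields three entries, because the whole VGG16 is collapsed into the single nested vgg16 Model layer (names as in the summary above):

# The VGG16 layers do not appear here; they are hidden inside the nested model.
for layer in combined_model.layers:
    print(layer.name)
# grayscale_input_layer
# grayscale_RGB_layer
# vgg16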
Does anyone have a working solution that gives me the same model structure as with the None initialization, but with pretrained ImageNet weights?