
Some research papers mention that they use the conv3, conv4, and conv5 outputs of a VGG16 network trained on ImageNet.

If I display the names of the layers of VGG16 like so:

import tensorflow as tf

h = 512  # input height/width
base_model = tf.keras.applications.VGG16(input_shape=[h, h, 3], include_top=False)
base_model.summary()

I get layers with different names, e.g.:

input_1 (InputLayer)         [(None, 512, 512, 3)]     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 512, 512, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 512, 512, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 256, 256, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 256, 256, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 256, 256, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 128, 128, 128)     0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 128, 128, 256)     295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 128, 128, 256)     590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 128, 128, 256)     590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 64, 64, 256)       0         
.....

So which layers do they mean by conv3, conv4, conv5? Do they mean the 3rd, 4th, and 5th convolutional layers before each pooling layer (since VGG16 has 5 stages)?

1 Answer


The architecture of VGG16 can be obtained with the code shown below:

from tensorflow.keras.applications import VGG16

model = VGG16(include_top=False, weights='imagenet')
model.summary()

The architecture of VGG16 is shown below:

Model: "vgg16"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, None, None, 3)]   0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, None, None, 64)    1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, None, None, 64)    36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, None, None, 64)    0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, None, None, 128)   73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, None, None, 128)   147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, None, None, 128)   0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, None, None, 256)   295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, None, None, 256)   0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, None, None, 512)   1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, None, None, 512)   0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, None, None, 512)   0         
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0

From the above architecture, in a general sense:

  1. conv3 refers to the output of the layer block3_pool (MaxPooling2D)
  2. conv4 refers to the output of the layer block4_pool (MaxPooling2D)
  3. conv5 refers to the output of the layer block5_pool (MaxPooling2D)
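Under that interpretation, the three feature maps can be pulled out of the pretrained network with a small feature-extractor model. This is only a minimal sketch: the layer names come from the summary above, and the 512x512 input size is just taken from the question (any size divisible by 32 works the same way).

import tensorflow as tf
import numpy as np

# Pretrained VGG16 backbone without the classification head
base_model = tf.keras.applications.VGG16(
    input_shape=[512, 512, 3], include_top=False, weights='imagenet')

# Feature extractor returning the outputs referred to above as conv3, conv4, conv5
# (here taken to be the block3/4/5 pooling outputs)
feature_names = ['block3_pool', 'block4_pool', 'block5_pool']
outputs = [base_model.get_layer(name).output for name in feature_names]
feature_extractor = tf.keras.Model(inputs=base_model.input, outputs=outputs)

# Example: run a dummy image through the extractor
dummy = np.zeros((1, 512, 512, 3), dtype=np.float32)
conv3, conv4, conv5 = feature_extractor(dummy)
print(conv3.shape, conv4.shape, conv5.shape)
# (1, 64, 64, 256) (1, 32, 32, 512) (1, 16, 16, 512)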

If you feel the explanation I have provided is not correct, please share the research papers you are referring to, and I can update the answer accordingly.

  • Thank you. The paper is https://ieeexplore.ieee.org/abstract/document/9076866 and they mention the layers on the last line of page 3. – Jul 16 '20 at 15:30