
I have used two ImageNet-trained models, VGG16 and InceptionV3, loaded with the following lines in Python using the Keras API, where x is the input image and the batch size is 1 for simplicity.

from keras.applications.vgg16 import VGG16
from keras.applications.inception_v3 import InceptionV3

# note: each model expects its own input size, so x must be resized to match
VGGbase_model = VGG16(weights='imagenet', include_top=False,
                      input_shape=(224,224,3))
Inceptionbase_model = InceptionV3(weights='imagenet', include_top=False,
                                  input_shape=(299,299,3))
predictVgg16 = VGGbase_model.predict_on_batch(x)
predictinception = Inceptionbase_model.predict_on_batch(x)

I have observed that the VGG16 model predicts an output of dimension (1, 512); I understand 512 is the number of features predicted by VGG16. However, the Inception model outputs a dimension of (1, 8, 8, 2048). I understand 2048 is the feature-vector size predicted by Inception, but what are the 8, 8, and why does VGG16 output only two dimensions while Inception outputs four? Any comments please.

– Nhqazi

2 Answers


You can view all layer sizes by typing:

Inceptionbase_model.summary()
VGGbase_model.summary()

or you can see them here: InceptionV3, VGG16

InceptionV3 has shape (None, 8, 8, 2048) at the last convolutional layer and VGG16 has (None, 7, 7, 512); the two middle numbers are the spatial dimensions of the final convolutional feature map, and the last one is the number of feature channels. If you want a single feature vector from each model, call the model with include_top=False and pooling='avg' or pooling='max' (this will add a global pooling layer at the end and will output 2048 features for the InceptionV3 model and 512 for VGG16).

For example:

img_shape=(299,299,3)
Inceptionbase_model = InceptionV3(input_shape=img_shape, weights='imagenet', include_top=False, pooling='avg')
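As a minimal sketch of that idea (the dummy input batches are assumptions, since the question's x is not shown), loading both models with pooling='avg' yields flat feature vectors:

import numpy as np
from keras.applications.vgg16 import VGG16
from keras.applications.inception_v3 import InceptionV3

vgg = VGG16(weights='imagenet', include_top=False, pooling='avg', input_shape=(224,224,3))
inc = InceptionV3(weights='imagenet', include_top=False, pooling='avg', input_shape=(299,299,3))

x_vgg = np.random.rand(1, 224, 224, 3)  # dummy image batch for VGG16
x_inc = np.random.rand(1, 299, 299, 3)  # dummy image batch for InceptionV3

print(vgg.predict_on_batch(x_vgg).shape)  # (1, 512)
print(inc.predict_on_batch(x_inc).shape)  # (1, 2048)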
– Ioannis Nasios

You can use:

output_layer = VGG16_model.layers[i].output
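A minimal sketch of how that line is typically used (the model construction, the layer index i, and the dummy input are assumptions for illustration, not from the original answer): wrap the chosen layer's output in a new Model so it can act as a feature extractor.

import numpy as np
from keras.applications.vgg16 import VGG16
from keras.models import Model

VGG16_model = VGG16(weights='imagenet', include_top=False, input_shape=(224,224,3))
i = -1  # index of the layer to tap; -1 takes the last layer (assumed here)
output_layer = VGG16_model.layers[i].output

# build a model mapping the original input to the chosen layer's output
feature_extractor = Model(inputs=VGG16_model.input, outputs=output_layer)

x = np.random.rand(1, 224, 224, 3)  # dummy batch of one image
print(feature_extractor.predict_on_batch(x).shape)  # (1, 7, 7, 512)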
– Suraj Rao
    Please don't post only code as answer, but also provide an explanation what your code does and how it solves the problem of the question. Answers with an explanation are usually more helpful and of better quality, and are more likely to attract upvotes. – Mark Rotteveel Jun 09 '22 at 10:54