
I want to extract features from a 368x368 image with a pretrained VGG model. According to the documentation, VGG accepts 224x224 images. Is there a way to give variable-sized input to Keras' VGG?

Here is my code:

# VGG Feature Extraction
import numpy as np
from keras.applications.vgg19 import VGG19
from keras.models import Model

x_train = np.random.randint(0, 255, (100, 224, 224, 3))
base_model = VGG19(weights='imagenet')
modelVGG = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_conv2').output)
block4_conv2_features = modelVGG.predict(x_train)

Edited code (it works!):

# VGG Feature Extraction
import numpy as np
from keras.applications.vgg19 import VGG19
from keras.models import Model

x_train = np.random.randint(0, 255, (100, 368, 368, 3))
base_model = VGG19(weights='imagenet', include_top=False)
modelVGG = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_conv2').output)
block4_conv2_features = modelVGG.predict(x_train)
stop-cran
mkocabas

1 Answer


The input size determines the number of inputs to the fully-connected (Dense) layers, so the pretrained Dense weights only fit the original 224x224 input. You therefore need to create your own fully-connected layers.

Call VGG19 with include_top=False to remove the fully-connected layers, then add your own on top. Check this code for reference.
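To see why the pretrained Dense layers break, note that each of VGG19's five 2x2 max-pool stages halves the spatial size, so the flattened feature length fed into the first Dense layer depends on the input size. A quick sketch of the arithmetic (the helper function and names are illustrative, not part of Keras):

```python
# Each of VGG19's five max-pool stages halves the spatial side (floor division).
def vgg_pool_size(side, pools=5):
    for _ in range(pools):
        side //= 2
    return side

channels = 512  # channel count at VGG19's final conv block

# Flattened feature length after the last pooling layer:
flat_224 = vgg_pool_size(224) ** 2 * channels  # 7 * 7 * 512 = 25088
flat_368 = vgg_pool_size(368) ** 2 * channels  # 11 * 11 * 512 = 61952

print(flat_224, flat_368)
```

The pretrained first Dense layer expects 25088 inputs (from a 7x7x512 feature map), while a 368x368 image produces an 11x11x512 map, so the Dense weight matrices no longer fit; the convolutional layers themselves are size-agnostic, which is why include_top=False works.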

Fábio Perez