
I am using VGG16 with its pretrained ImageNet weights. VGG16 was trained on (224 * 224) images, so can we change the input dimension to something like (128 * 128)?

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input

baseModel = VGG16(weights="imagenet", include_top=False, input_tensor=Input(shape=(128, 128, 3)))

To understand the scenario, suppose the first layer in our baseModel is a Conv2D with a (3, 3) filter size, 16 filters in total, and padding='valid'.

It will produce a (1 * 1 * 16) output when the input shape is (3 * 3 * 3).

But when the input shape is, say, (2 * 2 * 3), we cannot apply the (3, 3) filter: 'valid' padding means no padding is added, so the filter window does not fit.

So will this raise an error here? Am I missing a concept?
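
To illustrate the two cases, here is a minimal sketch (assuming TensorFlow 2.x; the standalone model here is just for demonstration, not part of the original code) of a Conv2D with 'valid' padding on a 3 * 3 and a 2 * 2 input:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D

# With 'valid' padding, each spatial output size is (n - f) // stride + 1:
# a 3x3 input with a 3x3 filter gives (3 - 3) // 1 + 1 = 1, i.e. (1 * 1 * 16).
ok = keras.Sequential([keras.Input(shape=(3, 3, 3)),
                       Conv2D(16, (3, 3), padding="valid")])
print(ok.output_shape)  # (None, 1, 1, 16)

# A 2x2 input is smaller than the 3x3 window, so Keras rejects this
# during shape inference instead of producing an output.
try:
    keras.Sequential([keras.Input(shape=(2, 2, 3)),
                      Conv2D(16, (3, 3), padding="valid")])
except Exception as e:
    print("Error:", e)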

Raj_Ame09

1 Answer


You can change the input resolution of the VGG16 model, but the width and height must be no smaller than 32, as per the Keras documentation (keras.io).
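
As a quick check (a minimal sketch, assuming TensorFlow 2.x), a 128 * 128 input works because VGG16's convolutions use 'same' padding, so only its five max-pooling stages shrink the spatial size, each halving it; the minimum of 32 is exactly what reduces to 1 * 1 after the last pool:

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input

# 128 is halved by each of VGG16's five pooling stages:
# 128 -> 64 -> 32 -> 16 -> 8 -> 4
baseModel = VGG16(weights="imagenet", include_top=False,
                  input_tensor=Input(shape=(128, 128, 3)))
print(baseModel.output_shape)  # (None, 4, 4, 512)

Note that this only works with include_top=False; with the fully connected top included, the input must be exactly (224 * 224 * 3).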

MD Mushfirat Mohaimin