I want to use transfer learning to process images, and my images have different sizes. My understanding is that convolutional layers can generally take variable input sizes, while fully connected layers require a fixed input size. However, the Keras implementations of VGG-16 and ResNet50 can take any image size larger than 32x32, even though these architectures include fully connected layers. How do they arrive at a fixed fully connected layer size for different image dimensions?
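For context on why this puzzles me: a fully connected layer needs a fixed-length input vector. Here is a minimal pure-Python sketch (my own illustration, not taken from the Keras source) of one mechanism that could bridge the gap, namely global average pooling, which collapses each channel of a feature map to a single number, so the output length depends only on the channel count, not on the spatial size:

```python
def global_average_pool(feature_map):
    """feature_map: list of 2D channel grids (H x W nested lists),
    where H and W may vary per image. Returns one value per channel,
    so the output length equals the channel count regardless of H, W."""
    pooled = []
    for channel in feature_map:
        total = sum(sum(row) for row in channel)
        count = len(channel) * len(channel[0])
        pooled.append(total / count)
    return pooled

# Two hypothetical feature maps with different spatial sizes
# but the same channel depth (512, as in VGG-16's last conv block):
small = [[[1.0, 2.0], [3.0, 4.0]] for _ in range(512)]       # 512 x 2 x 2
large = [[[1.0] * 7 for _ in range(7)] for _ in range(512)]  # 512 x 7 x 7

# Both pool down to a fixed-length 512-vector:
assert len(global_average_pool(small)) == 512
assert len(global_average_pool(large)) == 512
```

Is something like this (or another pooling trick) what the Keras models are doing internally, or does the flexibility only apply when the fully connected top is removed?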
Thanks very much!