I am using the Qubvel segmentation_models repository (https://github.com/qubvel/segmentation_models) to train an InceptionV3-encoder-based model for a binary segmentation task. Training with (256 width x 256 height) images works well. If I double one of the dimensions, e.g. (256 width x 512 height), it also works fine. However, when I adjust for the aspect ratio and resize the images to custom dimensions, e.g. (272 width x 256 height), the model throws the following error:
```
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concatenation axis. Received: input_shape=[(None, 16, 18, 2048), (None, 16, 17, 768)]
```
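For reference, here is a minimal sketch that reproduces the behaviour (the classes/activation arguments are illustrative assumptions; only the backbone and input shape matter):

```python
import segmentation_models as sm

# Note: input_shape is (height, width, channels).

# Works: 256 x 256
model = sm.Unet('inceptionv3', input_shape=(256, 256, 3),
                classes=1, activation='sigmoid')

# Also works: one dimension doubled (512 height x 256 width)
model = sm.Unet('inceptionv3', input_shape=(512, 256, 3),
                classes=1, activation='sigmoid')

# Fails while building the model (256 height x 272 width)
model = sm.Unet('inceptionv3', input_shape=(256, 272, 3),
                classes=1, activation='sigmoid')
```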
Is there a way to use such custom dimensions to train these models?