Based on your comment and seeing that you defined a stride of 2, I believe what you want to achieve is an output size that is exactly half of the input size, i.e. `output_shape == (32, 40, 32)` (the trailing 32 being the number of feature channels).
In that case, just call `model.summary()` on the final model and you will see whether that is the case. If it is, there's nothing else to do.
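For instance, a minimal sketch of that check might look as follows (the input shape of (64, 80, 1), the kernel size of 3, and the 32 filters are assumptions for illustration, not values taken from your model):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical model: input shape, kernel size and filter count are assumptions.
model = keras.Sequential([
    keras.Input(shape=(64, 80, 1)),
    layers.Conv2D(32, kernel_size=3, strides=2, padding="same"),
])

# Prints the output shape of every layer. With padding="same" and strides=2
# the Conv2D output here is (None, 32, 40, 32), i.e. the spatial size is halved.
model.summary()
```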
If the output is bigger than you want, you can add a `Cropping2D` layer to cut pixels off the borders of the image. If it's smaller than you want, you can add a `ZeroPadding2D` layer to add zero-pixels to the borders of the image.
The syntax to create these layers is
Cropping2D(cropping=((a, b), (c, d)))
ZeroPadding2D(padding=((a, b), (c, d)))
- `a`: number of rows to add to (or cut off from) the top
- `b`: number of rows to add to (or cut off from) the bottom
- `c`: number of columns to add to (or cut off from) the left
- `d`: number of columns to add to (or cut off from) the right
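As a sketch of how the pieces fit together (again assuming a hypothetical 64x80 input; with padding="valid" the strided convolution comes out at 31x39, slightly smaller than the target), a `ZeroPadding2D` layer can bring the output back up to (32, 40, 32):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical model: input shape, filter count and padding mode are assumptions.
model = keras.Sequential([
    keras.Input(shape=(64, 80, 1)),
    # padding="valid" with strides=2 yields a (31, 39, 32) output, slightly too small
    layers.Conv2D(32, kernel_size=3, strides=2, padding="valid"),
    # add one zero-row at the bottom and one zero-column on the right -> (32, 40, 32)
    layers.ZeroPadding2D(padding=((0, 1), (0, 1))),
])
model.summary()
```

`Cropping2D` works the same way in the opposite direction; for example, `Cropping2D(cropping=((1, 0), (0, 2)))` would remove one row from the top and two columns from the right of a feature map that came out too large.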
Note, however, that there is no strict technical need to perfectly halve the size with every convolution layer. Your model might work well without any padding or cropping; you will have to experiment in order to find out.