The input to the FCN is a 2D array with a single channel, of shape (48, 128, 1), as shown in the image below. The first layer is a Conv2D layer with 64 kernels and padding 'same', so its output has the same spatial dimensions as the input but with 64 channels, i.e. (48, 128, 64).
[Image of the network architecture]
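As a sanity check, here is a minimal standalone sketch (assuming a channels-last 48x128 single-channel input, which is my reading of the diagram) confirming that the convolution with padding='same' preserves the spatial dimensions:

from keras.layers import Input, Conv2D
from keras.models import Model

# Assumed input: a 48x128 image with one channel (channels-last)
inp = Input(shape=(48, 128, 1))
out = Conv2D(64, kernel_size=(3, 3), padding='same')(inp)

# Prints (None, 48, 128, 64): spatial dims unchanged, 64 channels
print(Model(inp, out).output_shape)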
However, in the next MaxPool layer, with a pool size of 2x2 and padding also set to 'same', the output dimensions decrease. Why does this happen, and by how much do they shrink?
Here are the first few lines of code defining the network's layers:
from keras.layers import Input, Conv2D, MaxPooling2D, LeakyReLU, BatchNormalization

main_input_shape = (48, 128, 1)  # (height, width, channels); channels-last is assumed here

main_input = Input(shape=main_input_shape, name='main_input')
x = Conv2D(64, kernel_size=(3, 3), padding='same')(main_input)
x = MaxPooling2D(pool_size=(2, 2), padding='same')(x)  # 1
x = LeakyReLU(alpha=0.2)(x)
x = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True,
                       beta_initializer='zeros', gamma_initializer='ones',
                       moving_mean_initializer='zeros',
                       moving_variance_initializer='ones')(x)  # all default arguments, kept for completeness
x = Conv2D(64, kernel_size=(3, 3), padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2), padding='same')(x)  # 2
Since the MaxPool layer is also set with padding 'same', I was expecting its output to have the same dimensions as its input, i.e. (48, 128, 64).
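To make the discrepancy concrete, here is a minimal standalone sketch (feeding the pooling layer the (48, 128, 64) shape coming out of the first Conv2D; the input shape is my assumption from above) that prints what the pooling layer actually produces:

from keras.layers import Input, MaxPooling2D
from keras.models import Model

# Feed the pooling layer the shape produced by the first Conv2D
inp = Input(shape=(48, 128, 64))
out = MaxPooling2D(pool_size=(2, 2), padding='same')(inp)

# Prints (None, 24, 64, 64) -- both spatial dimensions are halved,
# not (None, 48, 128, 64) as I expected from padding='same'
print(Model(inp, out).output_shape)

So each spatial dimension is halved (48 -> 24, 128 -> 64). I understand that the default strides of MaxPooling2D equal pool_size, but I expected padding='same' to pad the input so that the shape is preserved.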