
I'm trying to build a model for grayscale images. There seems to be a problem with the output shape: I tried adding padding to the Conv2D layers, but then I get an input-shape error during testing. The model implementation:

from tensorflow import keras
from tensorflow.keras.layers import (Conv2D, Conv2DTranspose, MaxPooling2D,
                                     UpSampling2D, BatchNormalization,
                                     Flatten, Dense, Dropout)

model = keras.Sequential()

model.add(Conv2D(64, kernel_size=(48, 48), activation='relu', input_shape=(105, 105, 1)))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))

model.add(Conv2D(128, kernel_size=(24, 24), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2DTranspose(128, (24, 24), strides=(2, 2), activation='relu', padding='same', kernel_initializer='uniform'))
model.add(UpSampling2D(size=(2, 2)))

model.add(Conv2DTranspose(64, (12, 12), strides=(2, 2), activation='relu', padding='same', kernel_initializer='uniform'))
model.add(UpSampling2D(size=(2, 2)))

model.add(Conv2D(256, kernel_size=(12, 12), activation='relu'))
model.add(Conv2D(256, kernel_size=(12, 12), activation='relu'))
model.add(Conv2D(256, kernel_size=(12, 12), activation='relu'))

model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2383, activation='relu'))
model.add(Dense(5, activation='softmax'))

the error:

ValueError: One of the dimensions in the output is <= 0 due to downsampling in conv2d_9. Consider increasing the input size. Received input shape [None, 105, 105, 1] which would produce output shape with a zero or negative value in a dimension.
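For context on why this error appears: with the default `padding='valid'`, each convolution shrinks the spatial size by `kernel_size - 1`, so large kernels like 48×48 quickly drive a dimension to zero or below. A small sketch of the shape arithmetic (this reproduces the documented Keras output-shape formulas, not Keras's internal code; the layer sizes mirror the model above):

```python
import math

def valid_conv(size, kernel, stride=1):
    """Output size of a Conv2D with padding='valid'."""
    return (size - kernel) // stride + 1

def same_pool(size, pool):
    """Output size of a MaxPooling2D with padding='same'."""
    return math.ceil(size / pool)

size = 105
size = valid_conv(size, 48)   # Conv2D 48x48, 'valid': 105 -> 58
size = same_pool(size, 2)     # MaxPooling 'same':     58 -> 29
size = valid_conv(size, 24)   # Conv2D 24x24, 'valid': 29 -> 6
print(size)                   # already only 6x6 after three layers

# A 48x48 'valid' conv on anything smaller than 48 produces a
# non-positive dimension, which is exactly what the ValueError reports:
print(valid_conv(24, 48))     # negative
```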

2 Answers


I think the error comes from subtracting the kernel size (48, 48) from the spatial dimensions of (105, 105, 1). Try changing the input from (105, 105, 1) to (1, 105, 105) using data_format:

model.add(Conv2D(64, kernel_size=(48, 48), activation='relu', input_shape=(1, 105, 105), data_format='channels_first'))

You can read about it here: Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution'

1

A little late, but using padding='same' should work as well:

model.add(Conv2D(64, kernel_size=(48, 48), activation='relu', input_shape=(105,105,1), padding='same'))

This keeps the spatial output size the same as the input size (for stride 1), so no dimension can shrink to zero.
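For reference, with padding='same' Keras computes the output size as ceil(input / stride), independent of the kernel size, so a stride-1 convolution preserves the 105×105 input. A quick sketch of that documented shape rule (not Keras source):

```python
import math

def same_conv(size, stride=1):
    """Output size of a Conv2D with padding='same'."""
    return math.ceil(size / stride)

print(same_conv(105))      # 105: shape preserved with stride 1
print(same_conv(105, 2))   # 53: halved (rounded up) with stride 2
```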
