
Similar problems have been posted and answered on this forum, but I didn't find any solution for this particular case. (I'm using Keras.)

I have images of shape (150, 75, 3), and I reshaped the NumPy array to (1, 150, 75, 3).

This should work, but instead this error comes out:

ValueError: Error when checking input: expected conv2d_input to have 4 dimensions, but got array with shape (1, 1, 1, 150, 75, 3)

EDIT: this is how I process the image:

    self.pinballEnvironmrnt = PinballEnv(self.screenDimensions, self.startPosition)
    image = pygame.surfarray.array3d(self.pinballEnvironmrnt.screen)
    #image = Image.fromarray(image)
    #image = image.resize(self.resize)
    self.state = numpy.array([image])  #.reshape((-1,1200,600,3))
    print('the shape of the state ------------------> ', self.state.shape)

Output (the error itself is not shown in the screenshot), and this is the DQN agent:

    pool_size = (2, 2)
    # MODEL 1
    self.model = Sequential()
    self.model.add(Conv2D(4, (3, 3), input_shape=(150, 75, 3), activation='relu'))
    # Conv Layer 2
    self.model.add(Conv2D(8, (3, 3), activation='relu'))
    # Pooling 1
    self.model.add(MaxPooling2D(pool_size=pool_size))

    self.model.add(Flatten())
    self.model.add(Dense(128, activation='relu'))
    self.model.add(Dense(64, activation='relu'))
    self.model.add(Dense(16, activation='relu'))
    self.model.add(Dense(actions, activation='linear'))
    print(self.model.summary())
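For reference, the six-dimensional shape in the error above is exactly what you get when an already-batched state is wrapped in extra lists again at some later hand-off. A minimal NumPy sketch (the variable names here are just for illustration):

```python
import numpy as np

image = np.zeros((150, 75, 3))   # one frame: (height, width, channels)
state = np.array([image])        # batching once gives (1, 150, 75, 3)
print(state.shape)

# Wrapping the already-batched state in extra lists piles up singleton
# dimensions and reproduces the shape from the error message:
rewrapped = np.array([[state]])  # shape (1, 1, 1, 150, 75, 3)
print(rewrapped.shape)
```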
zed_eln
  • Please add the code where you create your input and define your model; otherwise it can only be a random guess. – Kaveh Aug 28 '21 at 04:54
  • Ok, I added the code. I hope the problem is clearer now. – zed_eln Aug 28 '21 at 20:45
  • Still unclear. The error is raised when you feed input to `model.fit()`, so you should add that line, as well as `model.compile()` and the place where you exactly define and reshape your input. You have added some code which reshapes an array to `(1,1200,600,3)`, but your error is about an array with shape `(1,150,75,3)`. – Kaveh Aug 29 '21 at 11:19

1 Answer


You have given too many dimensions as input to the `Conv2D` layer. Images are represented in NumPy as arrays of shape `(height, width, channels)`. In general, when dealing with images, machine learning libraries expect batches of data as input, so the actual input shape should be `(batch_size, height, width, channels)`; that's why the error says the expected number of dimensions is 4. Practically, if you have a variable `images` which is a list of all your images, the batch dimension can be added with `np.array(images)`, or `tf.convert_to_tensor(images)` if you use TensorFlow.
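To make that concrete, here's a minimal NumPy sketch (the variable names are just for illustration); the reshape at the end also works as a quick fix when extra singleton dimensions have already crept in:

```python
import numpy as np

# A list of frames, each with shape (height, width, channels)
images = [np.zeros((150, 75, 3)) for _ in range(4)]

batch = np.array(images)             # stacks into (4, 150, 75, 3)
print(batch.shape)

# If an array has already picked up extra singleton dimensions, as in the
# question's (1, 1, 1, 150, 75, 3), collapse them back into a valid batch:
bad = np.zeros((1, 1, 1, 150, 75, 3))
fixed = bad.reshape(-1, 150, 75, 3)  # (1, 150, 75, 3)
print(fixed.shape)
```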

simocasci
  • I should also add that if you only have a single image to feed to the model, it should first be put into a list, like this: `np.array([image])` or `tf.convert_to_tensor([image])`. – simocasci Aug 28 '21 at 09:34
  • I added more code to make the problem clearer. – zed_eln Aug 28 '21 at 20:46