
I am trying to pass an RGB image from a simulator into my custom neural network. At the source of the RGB generation (the simulator), the dimensions of the RGB image are (3, 144, 256).

This is how I construct the neural network:

from keras.models import Sequential
from keras.layers import Conv2D

rgb_model = Sequential()
rgb = env.shape()  # this is (3, 144, 256)
rgb_shape = (1,) + rgb
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format="channels_first"))
Now my rgb_shape is (1, 3, 144, 256).

This is the error I get:

rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format="channels_first"))
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/sequential.py", line 166, in add
    layer(x)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/base_layer.py", line 414, in __call__
    self.assert_input_compatibility(inputs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/base_layer.py", line 311, in assert_input_compatibility
    str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5

Why does Keras say it found ndim=5 when my input shape, (1, 3, 144, 256), only has 4 dimensions?

P.S.: I have the same question as this one. I ideally wanted to comment on that post, but I don't have enough reputation.

Edit:

Here is the code that produces the error:

from keras.models import Sequential
from keras.layers import Input, Conv2D, Flatten

rgb_shape = env.rgb.shape
rgb_model = Sequential()
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format="channels_first"))
rgb_model.add(Conv2D(128, (3, 3), strides=(2, 2), padding='valid', activation='relu', data_format="channels_first"))
rgb_model.add(Conv2D(384, (3, 3), strides=(1, 1), padding='valid', activation='relu', data_format="channels_first"))
rgb_model.add(Conv2D(384, (3, 3), strides=(1, 1), padding='valid', activation='relu', data_format="channels_first"))
rgb_model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='valid', activation='relu', data_format="channels_first"))
rgb_model.add(Flatten())
rgb_input = Input(shape=rgb_shape)
rgb = rgb_model(rgb_input)

This is the new error when I pass env.rgb.shape as input_shape in Conv2D:

dqn.fit(env, callbacks=callbacks, nb_steps=250000, visualize=False, verbose=0, log_interval=100)
  File "/usr/local/lib/python2.7/dist-packages/rl/core.py", line 169, in fit
    action = self.forward(observation)
  File "/usr/local/lib/python2.7/dist-packages/rl/agents/dqn.py", line 228, in forward
    q_values = self.compute_q_values(state)
  File "/usr/local/lib/python2.7/dist-packages/rl/agents/dqn.py", line 69, in compute_q_values
    q_values = self.compute_batch_q_values([state]).flatten()
  File "/usr/local/lib/python2.7/dist-packages/rl/agents/dqn.py", line 64, in compute_batch_q_values
    q_values = self.model.predict_on_batch(batch)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1276, in predict_on_batch
    x, _, _ = self._standardize_user_data(x)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 754, in _standardize_user_data
    exception_prefix='input')
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training_utils.py", line 126, in standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (1, 1, 3, 144, 256)

1 Answer


With data_format="channels_first", the input shape of a Conv2D layer is (num_channels, height, width), so you should not add another dimension. (The full input shape is actually (batch_size, num_channels, height, width), but you don't set batch_size here; it is determined later, in the fit method.) Just pass input_shape=env.shape to Conv2D and it will work fine.
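
For illustration, a minimal sketch of the corrected construction, assuming env.shape is (3, 144, 256) as in your question:

from keras.models import Sequential
from keras.layers import Conv2D, Flatten

rgb_model = Sequential()
# input_shape excludes the batch dimension; Keras prepends it automatically
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid',
                     activation='relu', input_shape=(3, 144, 256),
                     data_format="channels_first"))
rgb_model.add(Flatten())

print(rgb_model.input_shape)  # (None, 3, 144, 256) -- None is the batch size

Passing input_shape=(1, 3, 144, 256) instead would make the full shape (None, 1, 3, 144, 256), which is exactly the ndim=5 your error reports.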

Edit:

Why do you define an Input layer and pass it to the model? That's not how it works. First you need to compile the model using the compile method, then train it on the training data using the fit method, and then use the predict method to make predictions. I highly recommend reading the official guide to find out how these things work.
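
A minimal sketch of that workflow (the random data and layer sizes below are placeholders, just to show the three steps):

import numpy as np
from keras.models import Sequential
from keras.layers import Flatten, Dense

model = Sequential()
model.add(Flatten(input_shape=(3, 144, 256)))
model.add(Dense(1))

# 1) compile: configure the optimizer and loss
model.compile(optimizer='adam', loss='mse')

# 2) fit: train on arrays of shape (batch_size, 3, 144, 256)
x_train = np.random.rand(8, 3, 144, 256)
y_train = np.random.rand(8, 1)
model.fit(x_train, y_train, epochs=1, batch_size=4)

# 3) predict: the batch dimension lives in the data, not in input_shape
preds = model.predict(x_train)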

  • Irrespective of whether I set batch_size or not, I get the same type of error. When I pass env.shape as is, I get a similar error: "ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (1, 1, 3, 144, 256)". However, this time the error is coming from Keras-RL. – user10193823 Aug 14 '18 at 16:38
  • @user10193823 That's not the same error. It concerns the `Input` layer as mentioned in the error. You have inconsistent shapes. You should post the complete code. – today Aug 14 '18 at 17:24
  • @today The reason why I do it is that I have multiple inputs, and one of the inputs is an RGB image. The others include other measurement vectors. I use Model, where I pass rgb_input along with the other measurements. I followed the example posted at this link for my use case: https://gist.github.com/bklebel/913d8f155e6ed23f8a35fba989c70140 – user10193823 Aug 14 '18 at 18:11
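
For reference, a multi-input setup like the one described in the last comment is usually built with the functional API along these lines (the measurement vector length and layer sizes here are hypothetical, not taken from the linked gist):

from keras.models import Model
from keras.layers import Input, Conv2D, Flatten, Dense, concatenate

# image branch
rgb_input = Input(shape=(3, 144, 256))
x = Conv2D(96, (11, 11), strides=(3, 3), activation='relu',
           data_format="channels_first")(rgb_input)
x = Flatten()(x)

# hypothetical measurement branch
measurement_input = Input(shape=(10,))

# merge both branches and add a head
merged = concatenate([x, measurement_input])
output = Dense(1)(merged)

model = Model(inputs=[rgb_input, measurement_input], outputs=output)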