I am trying to pass an RGB image from a simulator into my custom neural network. At the source of the RGB generation (the simulator), the image has shape (3, 144, 256).
This is how I construct the neural network:
rgb_model = Sequential()
rgb = env.shape()  # this is (3, 144, 256)
rgb_shape = (1,) + rgb
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format="channels_first"))
Now my rgb_shape is (1, 3, 144, 256).
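To take the simulator out of the picture, I reproduced the same complaint standalone (a minimal sketch; as far as I understand, Keras prepends an implicit batch axis to whatever input_shape I give, which would make my 4-tuple a 5-dimensional tensor):

from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
try:
    # input_shape is the per-sample shape; Keras adds a batch axis on top,
    # so a 4-tuple here builds an input tensor of shape (None, 1, 3, 144, 256).
    model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid',
                     activation='relu', input_shape=(1, 3, 144, 256),
                     data_format='channels_first'))
except ValueError as e:
    print(e)  # "expected ndim=4, found ndim=5", same as the traceback below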
This is the error I get:
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format = "channels_first"))
File "/usr/local/lib/python2.7/dist-packages/keras/engine/sequential.py", line 166, in add
layer(x)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/base_layer.py", line 414, in call
self.assert_input_compatibility(inputs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/base_layer.py", line 311, in assert_input_compatibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5
Why is Keras complaining that it found ndim=5 when the input_shape I pass only has 4 dimensions?
P.S.: I have the same question as this existing question; I would ideally have commented on that post, but I don't have enough reputation.
Edit:
Here is the code that produces the error:
rgb_shape = env.rgb.shape
rgb_model = Sequential()
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format="channels_first"))
rgb_model.add(Conv2D(128, (3, 3), strides=(2, 2), padding='valid', activation='relu', data_format="channels_first"))
rgb_model.add(Conv2D(384, (3, 3), strides=(1, 1), padding='valid', activation='relu', data_format="channels_first"))
rgb_model.add(Conv2D(384, (3, 3), strides=(1, 1), padding='valid', activation='relu', data_format="channels_first"))
rgb_model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='valid', activation='relu', data_format="channels_first"))
rgb_model.add(Flatten())
rgb_input = Input(shape=rgb_shape)
rgb = rgb_model(rgb_input)
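As a sanity check, I printed what the assembled model actually expects (a condensed sketch of the model above; the intermediate Conv2D layers don't change the input expectation, so I dropped them):

from keras.models import Sequential, Model
from keras.layers import Conv2D, Flatten, Input

rgb_shape = (3, 144, 256)  # env.rgb.shape in my setup
rgb_model = Sequential()
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid',
                     activation='relu', input_shape=rgb_shape,
                     data_format='channels_first'))
rgb_model.add(Flatten())
rgb_input = Input(shape=rgb_shape)
full_model = Model(inputs=rgb_input, outputs=rgb_model(rgb_input))
print(full_model.input_shape)  # (None, 3, 144, 256): input_1 expects 4 dims incl. batch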
This is the new error when I pass env.rgb.shape as input_shape in Conv2D:
dqn.fit(env, callbacks=callbacks, nb_steps=250000, visualize=False, verbose=0, log_interval=100)
File "/usr/local/lib/python2.7/dist-packages/rl/core.py", line 169, in fit
action = self.forward(observation)
File "/usr/local/lib/python2.7/dist-packages/rl/agents/dqn.py", line 228, in forward
q_values = self.compute_q_values(state)
File "/usr/local/lib/python2.7/dist-packages/rl/agents/dqn.py", line 69, in compute_q_values
q_values = self.compute_batch_q_values([state]).flatten()
File "/usr/local/lib/python2.7/dist-packages/rl/agents/dqn.py", line 64, in compute_batch_q_values
q_values = self.model.predict_on_batch(batch)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1276, in predict_on_batch
x, _, _ = self._standardize_user_data(x)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 754, in _standardize_user_data
exception_prefix='input')
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training_utils.py", line 126, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (1, 1, 3, 144, 256)
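Tracing the shapes through the keras-rl calls in the traceback (a pure NumPy sketch; window_length=1 is an assumption from my memory configuration):

import numpy as np

obs = np.zeros((3, 144, 256))  # stand-in for a single env.rgb observation
state = [obs]                  # keras-rl's recent state (window_length=1 assumed)
batch = np.array([state])      # compute_batch_q_values([state]) adds the batch axis
print(batch.shape)             # (1, 1, 3, 144, 256), exactly the shape in the error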