
I'm using one of keras-rl's deep Q-learning agents: DQNAgent. When I pass my environment into DQNAgent.fit, I receive the following error:

    3 dqn.fit(env, nb_steps=50000, visualize=False, verbose=1)

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_utils_v1.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
        655                            ': expected ' + names[i] + ' to have ' +
        656                            str(len(shape)) + ' dimensions, but got array '
        657                            'with shape ' + str(data_shape))
        658         if not check_batch_axis:
        659           data_shape = data_shape[1:]

    ValueError: Error when checking input: expected dense_18_input to have 2 dimensions, but got array with shape (1, 1, 65)

My environment's states and spaces are defined as follows:

    self.state = np.zeros(65, dtype=int)
    self.action_space = spaces.Tuple((spaces.Discrete(64), spaces.Discrete(64)))
    self.observation_space = spaces.Box(low=0, high=16, shape=(65,), dtype=np.int)

and I'm using the following model:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    states = env.observation_space.shape
    actions = 64**2

    def build_model(states, actions):
        model = Sequential()
        model.add(Dense(100, activation='relu', input_shape=states))
        model.add(Dense(200, activation='relu'))
        model.add(Dense(actions, activation='linear'))
        return model

My environment's state vector has shape (65,), but the fit method expands it to (1, 1, 65), causing a shape mismatch. To be clear, self.state is returned as the observation from the environment. Does anyone know why this is happening?

2 Answers

2

First of all, when you specify the input of a model, Keras adds another dimension because it expects a batch. For example:

    input_shape=(65,) --> (None, 65)

So, when you forward a single observation into your model, Keras assumes batch_size=1. For that reason, your input size becomes:

    (None, 65) --> (1, 65)

Now, getting an input with shape (1, 1, 65) means that you fed an observation with shape (1, 65), which with batch_size=1 gives (1,) + (1, 65) = (1, 1, 65). In other words, for some reason your observation is reshaped before it is actually fed into the network.
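You can verify the implicit batch dimension yourself; here is a minimal sketch (assuming TensorFlow's bundled Keras):

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    # a model declared with input_shape=(65,) actually expects (batch_size, 65)
    model = Sequential()
    model.add(Dense(100, activation='relu', input_shape=(65,)))
    print(model.input_shape)  # (None, 65) -- None is the batch dimension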

Did you check the observation shape before feeding it to the network?
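For instance, a quick diagnostic sketch (assuming a standard Gym-style environment):

    # inspect exactly what the agent receives from the environment
    observation = env.reset()
    print(observation.shape, observation.dtype)  # expected: (65,) int64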

– LazyAnalyst
  • Do you have any idea what might cause this issue and how I might be able to solve it? I have pretty much the same problem; the only difference is that I initialize the observation_space as follows: `self.observation_space = Box(low=np.zeros(85), high=np.ones(85), dtype=np.uint8)`. Any help would be appreciated. – Tom Danilov Jan 21 '22 at 21:05
  • Yes, check the shape that the agent is receiving from the environment. You could do something like: `observation = env.reset()` and then `print(observation.shape, observation.dtype)` – LazyAnalyst Jan 23 '22 at 09:35
0

I ran into the same issue. It turns out that the DQN agent forces two extra dimensions onto your observation shape, precisely at line 68 of the dqn.py file (for the last dimension added). I tried to remove it, but I ran into other issues, since the last dimension is used to pass the batch size.

Say you want to train your agent with a batch of size 32; then a tensor of shape (32, 1, 65) is going to be passed to the network.

The dimension of size one is the window length.
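Building on that, a common way to make the model accept this shape (a sketch, not part of the original answer, assuming keras-rl's default window_length=1) is to declare the window dimension explicitly and flatten it away:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Flatten

    def build_model(states, actions):
        model = Sequential()
        # keras-rl feeds observations as (batch_size, window_length, *obs_shape),
        # so accept the (1, 65) window explicitly and flatten it back to (65,)
        model.add(Flatten(input_shape=(1,) + states))
        model.add(Dense(100, activation='relu'))
        model.add(Dense(200, activation='relu'))
        model.add(Dense(actions, activation='linear'))
        return model

With this change the model's input shape matches the (1, 1, 65) batches the agent produces.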