
I'm currently working on a reinforcement learning model and have come across an issue while trying to create a DQN agent for my custom environment.

While instantiating the DQN agent with this line:

dqn = DQNAgent(model=model, memory=memory, policy=policy,
                   nb_actions=(None,actions), nb_steps_warmup=10, target_model_update=1e-2)

Note that actions = 3 (integer).

I get the following error:

raise ValueError(f'Model output "{model.output}" has invalid shape. DQN expects a model that has one dimension for each action, in this case {self.nb_actions}.')
ValueError: Model output "Tensor("dense_2/BiasAdd:0", shape=(?, 3), dtype=float32)" has invalid shape. DQN expects a model that has one dimension for each action, in this case 3

After digging into the DQN source files, I noticed that the error arises because

(?,3) != (None,3)

From my understanding, the question mark is simply a placeholder representing an unknown number of data points. So why is None not considered equal to it, and how do I fix this?

Thanks

1 Answer


I believe you are using the keras-rl library.

The nb_actions parameter should be a plain integer giving the total number of actions available in your environment, not a tuple like (None, actions); passing a tuple is what makes the agent's shape check fail.

Taking the Taxi-v2 environment from OpenAI Gym as an example, we can obtain the total number of possible actions via:

import gym

ENV_NAME = "Taxi-v2"
env = gym.make(ENV_NAME)

# action_space.n is a plain integer (6 for Taxi-v2: four moves, pickup, dropoff)
nb_actions = env.action_space.n
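The agent then checks that the model's output has one dimension per action, so the network's final Dense layer must have exactly nb_actions units. A minimal sketch of such a model (the hidden layer sizes are arbitrary placeholders, and the input handling assumes the window_length=1 setup commonly used with keras-rl; for a Discrete observation space like Taxi's you would usually add an Embedding layer, but the final layer is the part that matters here):

from keras.models import Sequential
from keras.layers import Dense, Flatten

# Simple fully connected Q-network. Only the last layer's width matters for the
# shape check: it must equal nb_actions so the output shape is (None, nb_actions).
model = Sequential()
model.add(Flatten(input_shape=(1,) + env.observation_space.shape))
model.add(Dense(16, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(nb_actions, activation='linear'))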

If your actions variable already holds the number of actions (3 in your case), you can pass it directly:

dqn = DQNAgent(model=model, memory=memory, policy=policy, nb_actions=actions, nb_steps_warmup=10, target_model_update=1e-2)
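For completeness, here is a sketch of the rest of the setup, assuming the standard keras-rl workflow with SequentialMemory and EpsGreedyQPolicy (your actual memory and policy objects may differ, and the limit, window_length, and training step counts below are placeholder values):

from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import EpsGreedyQPolicy

# Replay buffer and exploration policy
memory = SequentialMemory(limit=50000, window_length=1)
policy = EpsGreedyQPolicy()

# nb_actions is the plain integer obtained above (or your `actions` variable)
dqn = DQNAgent(model=model, memory=memory, policy=policy,
               nb_actions=nb_actions, nb_steps_warmup=10, target_model_update=1e-2)

# Compile with an optimizer and train against the Gym environment
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
dqn.fit(env, nb_steps=5000, visualize=False, verbose=1)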