
I would like to train a DQN agent with Keras-rl. My environment has both multi-discrete action and observation spaces. I am adapting the code from this video: https://www.youtube.com/watch?v=bD6V3rcr_54&t=5s

Here is my code:

# Imports (assumed: the gym / keras-rl2 / tensorflow.keras setup used in the video)
from functools import reduce
from gym import Env
from gym.spaces import MultiDiscrete
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from rl.agents import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory

class ShowerEnv(Env):
    def __init__(self, max_machine_states_vec, production_rates_vec, production_threshold, scheduling_horizon, operations_horizon = 100):
        """
        Returns:
        self.action_space is a vector with the maximum production rate for each machine, a binary call-to-maintenance and a binary call-to-schedule
        """

        num_machines = len(max_machine_states_vec)
        assert len(max_machine_states_vec) == len(production_rates_vec), "Machine states and production rates have different cardinality"
        # Action space: a production-rate choice (0..N) for each machine, a binary call-to-maintenance per machine, and a binary call-to-schedule
        self.action_space = MultiDiscrete(production_rates_vec + num_machines*[2] + [2])
        # Observation space: the state 0,...,L for each machine + the scheduling state, including "ns" (None = "ns")
        self.observation_space = MultiDiscrete(max_machine_states_vec + [scheduling_horizon+2])
        # Set the initial state
        Code going on...
...
def build_model(states, actions):
    # One Q-value output per joint action: flatten the MultiDiscrete space into the product of its dimensions
    actions_number = reduce(lambda a, b: a * b, env.action_space.nvec)
    model = Sequential()
    model.add(Dense(24, activation='relu', input_shape=(1, states[0])))
    model.add(Dense(24, activation='relu'))
    model.add(Dense(actions_number, activation='linear'))
    return model

def build_agent(model, actions):
    policy = BoltzmannQPolicy()
    memory = SequentialMemory(limit=50000, window_length=1)
    dqn = DQNAgent(model=model, memory=memory, policy=policy,
                   nb_actions=actions, nb_steps_warmup=10, target_model_update=1e-2)
    return dqn
...
states = env.observation_space.shape
actions = env.action_space.nvec   # per-dimension action counts, e.g. [2 2 2 2 2]
actions_number = reduce(lambda a, b: a * b, env.action_space.nvec)   # total number of joint actions

model = build_model(states, actions)
model.summary()

dqn = build_agent(model, actions)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
dqn.fit(env, nb_steps=50000, visualize=False, verbose=1)

After initializing the environment with 2 machines, so 5 action dimensions, I get the following error:

ValueError: Model output "Tensor("dense_2/BiasAdd:0", shape=(None, 1, 32), dtype=float32)" has invalid shape. DQN expects a model that has one dimension for each action, in this case [2 2 2 2 2]

How can I solve this? I am fairly sure the problem is that I do not fully understand how to adapt the code in the video to a MultiDiscrete action space. Thanks :)

mercury24

1 Answer


I had the same problem; unfortunately, it is not possible to use gym.spaces.MultiDiscrete with the DQNAgent in Keras-rl.

Solution:

Use the stable-baselines3 library with the A2C agent, which supports MultiDiscrete action spaces. It is very easy to set up; see the sketch below.
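
A minimal sketch of that approach, assuming ShowerEnv follows the standard gym.Env API; the constructor values below are only placeholders based on the signature shown in the question:

from stable_baselines3 import A2C
from stable_baselines3.common.env_checker import check_env

# Placeholder arguments, matching the constructor signature from the question
env = ShowerEnv(max_machine_states_vec=[3, 3],
                production_rates_vec=[2, 2],
                production_threshold=1,
                scheduling_horizon=10)

check_env(env)  # optional: verifies that the env conforms to the Gym interface

# A2C handles MultiDiscrete action spaces directly, no flattening of the action space needed
model = A2C("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=50000)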

Said Amz
  • Yes, I got past this bug, but then I found an error saying that I pass the model a tensor of shape (1,1,3), which I do not create myself, so it is actually the DQNAgent that produces it. Thanks for the suggestion :) – mercury24 Feb 01 '22 at 16:33
  • Hi :) When working with stable-baselines3, did you encounter the PyTorch error "RuntimeError: Class values must be smaller than num_classes."? – mercury24 Feb 02 '22 at 14:33
  • Hi! No, I have never encountered this error. If you use the code from the stable-baselines website and it doesn't work, make sure that you have the latest versions of the libraries, and try uninstalling stable-baselines3 and reinstalling it with ```pip install stable-baselines3[extra]```. :) – Said Amz Feb 02 '22 at 18:16
  • @SaidAmz +1 Using a custom gym environment with gym.spaces.MultiDiscrete still yields `RuntimeError: Class values must be smaller than num_classes.` The library was uninstalled and re-installed in a separate environment. – J. M. Arnold Sep 18 '22 at 22:42