
I'm using RLlib (version 0.7.3) with an environment whose observation space is Box(10, 3), and I want to train a fully connected network (FCN) agent on it. But the library seems to add another dimension to the observation, and because of that extra dimension RLlib picks a vision network for the agent.

How can I make RLlib use an FCN agent with this observation space?
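What I'd like is to force the fully connected network through the model config, along these lines (a sketch; `fcnet_hiddens` is the FCN layer-size option from the RLlib docs, and the sizes here are just placeholders):

```python
# Sketch of the trainer config I'd like to use (hypothetical layer sizes).
config = {
    "model": {
        # Hidden layer sizes for the fully connected network.
        "fcnet_hiddens": [256, 256],
    },
}
print(config["model"]["fcnet_hiddens"])
```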

The relevant code is at line 108 of ray/rllib/policy/dynamic_tf_policy.py:

    if existing_inputs is not None:
        obs = existing_inputs[SampleBatch.CUR_OBS]
        if self._obs_include_prev_action_reward:
            prev_actions = existing_inputs[SampleBatch.PREV_ACTIONS]
            prev_rewards = existing_inputs[SampleBatch.PREV_REWARDS]
    else:
        obs = tf.placeholder(
            tf.float32,
            shape=[None] + list(obs_space.shape), # <----------------
            name="observation")
        if self._obs_include_prev_action_reward:
            prev_actions = ModelCatalog.get_action_placeholder(
                action_space)
            prev_rewards = tf.placeholder(
                tf.float32, [None], name="prev_reward")

    self.input_dict = {
        SampleBatch.CUR_OBS: obs,
        SampleBatch.PREV_ACTIONS: prev_actions,
        SampleBatch.PREV_REWARDS: prev_rewards,
        "is_training": self._get_is_training_placeholder(),
    }
