
I am trying to set up a custom multi-agent environment with RLlib, but whether I use one of the environments available online or write my own, I keep running into the same error, shown below. I have already installed everything suggested under option (a) of the error message. I register my environment like this:

import ray
from ray import tune
from ray.tune.registry import register_env


def env_creator(env_config):
    return SimpleCorridor(env_config)


register_env("corridor", env_creator)


if __name__ == "__main__":
    ray.shutdown()
    ray.init()
    tune.run(
        "PPO",
        stop={
            "timesteps_total": 10000,
        },
        config={
            "env": "corridor", # <--- This works fine!
            "env_config": {
                "corridor_length": 5,
            },
        },
    )
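
For completeness, SimpleCorridor is essentially the corridor environment from the RLlib examples; a minimal sketch of what I have (the reward values are just the ones the example uses):

import gym
import numpy as np
from gym.spaces import Box, Discrete


class SimpleCorridor(gym.Env):
    """Walk right along a corridor; the episode ends at the exit."""

    def __init__(self, config):
        # Corridor length is passed in through env_config.
        self.end_pos = config.get("corridor_length", 5)
        self.cur_pos = 0
        self.action_space = Discrete(2)  # 0 = left, 1 = right
        self.observation_space = Box(0.0, self.end_pos, shape=(1,), dtype=np.float32)

    def reset(self):
        self.cur_pos = 0
        return [self.cur_pos]

    def step(self, action):
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        elif action == 1:
            self.cur_pos += 1
        done = self.cur_pos >= self.end_pos
        # Small per-step penalty, +1.0 on reaching the exit.
        return [self.cur_pos], 1.0 if done else -0.1, done, {}

Running this produces the following on every rollout worker: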


(pid=266728) Try one of the following:
(pid=266728) a) For Atari support: `pip install gym[atari] atari_py`.
(pid=266728)    For VizDoom support: Install VizDoom
(pid=266728)    (https://github.com/mwydmuch/ViZDoom/blob/master/doc/Building.md) and
(pid=266728)    `pip install vizdoomgym`.
(pid=266728)    For PyBullet support: `pip install pybullet`.
(pid=266728) b) To register your custom env, do `from ray import tune;
(pid=266728)    tune.register('[name]', lambda cfg: [return env obj from here using cfg])`.
(pid=266728)    Then in your config, do `config['env'] = [name]`.
(pid=266728) c) Make sure you provide a fully qualified classpath, e.g.:
(pid=266728)    `ray.rllib.examples.env.repeat_after_me_env.RepeatAfterMeEnv`

Is there something else I should be taking care of? This is just the basic environment from the examples. Even the environment I am customizing hits the same problem. I initialized its observation space as a Tuple, so I cannot use Stable Baselines for evaluation instead.
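
Concretely, the custom env declares its observation space along these lines (the component spaces below are placeholders, not my real ones):

from gym.spaces import Box, Discrete, Tuple
import numpy as np

# Hypothetical components standing in for my real sub-observations.
observation_space = Tuple((
    Box(low=0.0, high=1.0, shape=(4,), dtype=np.float32),
    Discrete(3),
))
print(observation_space.sample())  # e.g. (array([...], dtype=float32), 1)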

Please help me out.
