
We have built a system of Docker containers, each running Ray. One container acts as the head node and the others as workers. Is there a way to run our custom env's step() calls in parallel, with one env per worker per container? The methods described in Ray's documentation (https://ray.readthedocs.io/en/latest/rllib-env.html?highlight=remote_worker_envs#vectorized) aren't useful for us, because we want only one env in each worker.

1 Answer


One env per worker is the default setting. You can increase the number of parallel workers via the `num_workers` setting.

There is also the `remote_worker_envs` setting, which runs each env in a separate actor while keeping the policy network in a single actor for inference. However, this incurs higher communication overhead than simply increasing `num_workers` and is not recommended.
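
A minimal sketch of the recommended setup, assuming a Gym-compatible custom env class (here called `MyEnv`, a placeholder) and PPO as the algorithm; the registered name `"my_env"` is also illustrative. Since each rollout worker is a Ray actor, on a multi-container cluster they can be scheduled onto different nodes:

```python
# Minimal sketch: one env per rollout worker, workers stepped in parallel.
# MyEnv, "my_env", and the PPO choice are placeholders, not from the question.
import ray
from ray import tune
from ray.tune.registry import register_env

from my_project.envs import MyEnv  # hypothetical import of the custom env


def env_creator(env_config):
    # Each rollout worker calls this once, creating its own env instance.
    return MyEnv(env_config)


register_env("my_env", env_creator)

# Connect to the cluster; in a multi-container setup, pass the head
# node's address here (the exact parameter depends on the Ray version).
ray.init()

tune.run(
    "PPO",
    config={
        "env": "my_env",
        "num_workers": 3,          # three rollout-worker actors, each with its own env
        "num_envs_per_worker": 1,  # the default: exactly one env per worker
    },
)
```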

Eric
  • We are using 3 workers, but the problem is that the envs they run are not stepped in parallel: at any point in time only one step() function is running. We tried the `remote_worker_envs` setting, but it only works when there are multiple envs per worker, which we can't do. [The exact error](https://github.com/ray-project/ray/blob/master/python/ray/rllib/env/base_env.py#L87) – Levi Németh Apr 03 '19 at 10:48