I am working with ray.rllib and I am stuck on vectorizing my custom environment with the static method VectorEnv.vectorize_gym_envs (line 40 of my script) and then training it with PPOTrainer(). I hand the existing_envs parameter a list of gym.Env instances that I create manually. Is there any option for passing a vectorized env directly to PPOTrainer()? Can anyone help me out with this?
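
Here is a minimal sketch of what I mean. MyCustomEnv is only a stand-in for my real environment (dummy spaces, dummy reward), but the vectorization call and the trainer setup are exactly the pattern I am asking about:

```python
import gym
import numpy as np
import ray
from ray.rllib.agents.ppo import PPOTrainer
from ray.rllib.env.vector_env import VectorEnv


class MyCustomEnv(gym.Env):
    """Stand-in for my real custom environment."""

    def __init__(self, config=None):
        self.observation_space = gym.spaces.Box(
            low=-1.0, high=1.0, shape=(4,), dtype=np.float32)
        self.action_space = gym.spaces.Discrete(2)

    def reset(self):
        return self.observation_space.sample()

    def step(self, action):
        # Dummy transition: random obs, zero reward, episode ends immediately.
        return self.observation_space.sample(), 0.0, True, {}


ray.init(ignore_reinit_error=True)

# Manually create the sub-environments and vectorize them via the
# static method (this is line 40 in my actual script):
existing_envs = [MyCustomEnv() for _ in range(4)]
vec_env = VectorEnv.vectorize_gym_envs(
    existing_envs=existing_envs, num_envs=len(existing_envs))

# This is where I am stuck: env= expects an env class or a registered
# env name, so handing it the already-built VectorEnv instance fails.
trainer = PPOTrainer(env=vec_env, config={"framework": "torch"})
trainer.train()
```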
So in short: how can I train PPO with ray.rllib when I have already created a vectorized env with the static method shown above?