Is there an OpenAI Gym-compliant interface implementation for continuous action spaces? If so, does it support multi-agent environments? I'm working on a multi-agent DDPG implementation, but I couldn't find a suitable baseline environment.
1 Answer
Multi-Agent RL in Gym
OpenAI Gym does not provide a nice interface for multi-agent RL environments; however, it is quite easy to adapt the standard Gym interface by having
env.step(action_n: List) -> observation_n: List, reward_n: List, done_n: List, info: Dict
take a list of actions, one per agent, and return per-agent lists of observations, rewards, and done flags.
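For illustration, here is a minimal sketch of such a list-based environment (the class name, space shapes, and placeholder dynamics are all made up for the example; keeping the spaces in plain Python lists mirrors how the multi-agent particle environments do it):

from typing import List, Tuple

import gym
import numpy as np

class SimpleMultiAgentEnv(gym.Env):
    """Toy environment whose step() consumes and returns one entry per agent."""

    def __init__(self, n_agents: int):
        self.n_agents = n_agents
        # One continuous Box space per agent, collected in plain lists.
        self.action_space = [
            gym.spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
            for _ in range(n_agents)
        ]
        self.observation_space = [
            gym.spaces.Box(low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32)
            for _ in range(n_agents)
        ]

    def reset(self) -> List[np.ndarray]:
        return [space.sample() for space in self.observation_space]

    def step(self, action_n: List[np.ndarray]) -> Tuple[List, List, List, dict]:
        # Placeholder dynamics: a real environment would update a shared world state.
        obs_n = [space.sample() for space in self.observation_space]
        reward_n = [0.0] * self.n_agents
        done_n = [False] * self.n_agents
        return obs_n, reward_n, done_n, {}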
If you are reimplementing MADDPG, you could also use the implementation of the multi-agent particle environments provided by Ryan Lowe (the first author of the MADDPG paper) himself, available as openai/multiagent-particle-envs on GitHub.
Of course, reimplementing the environments yourself won't hurt either.
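For reference, constructing one of the particle environments follows the recipe in the repository's make_env.py, roughly like this (assuming the repo is installed so that the multiagent package is importable; simple_spread is one of the bundled scenarios):

from multiagent.environment import MultiAgentEnv
import multiagent.scenarios as scenarios

# Load a scenario script and build the world it describes.
scenario = scenarios.load("simple_spread.py").Scenario()
world = scenario.make_world()

# Wire the scenario callbacks into the environment.
env = MultiAgentEnv(world, scenario.reset_world, scenario.reward, scenario.observation)

obs_n = env.reset()  # one observation per agent, returned as a list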
Continuous Action Spaces
In the linked implementation of the multi-agent particle environments, you can switch the action space from discrete to continuous by setting the discrete_action_space flag in multiagent/environment.py to False.
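The flag lives in MultiAgentEnv.__init__, so the edit is a one-line change along these lines (paraphrased, not a verbatim copy of the file):

# multiagent/environment.py, inside MultiAgentEnv.__init__:
self.discrete_action_space = True  # set to False to get continuous (Box) action spaces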
However, having tried this before, I can tell you that it tends to result in worse MADDPG performance.
