
I have created a custom space that extends OpenAI Gym's gym.Space. I need this space because I need an action space whose values sum to a fixed value. Using it, I can scale up the output and meet my requirement.

import numpy as np
import gym
from gym.spaces import Space

class ProbabilityBox(Space):
    """
        Values add up to 1 and each value lies between 0 and 1
    """
    def __init__(self, size=None):
        assert isinstance(size, int) and size > 0
        self.size = size
        super().__init__((size,), np.float64)

    def sample(self):
        return np.around(np.random.dirichlet(np.ones(self.size), size=1), decimals=2)[0]

    def contains(self, x):
        if not isinstance(x, (list, tuple, np.ndarray)):
            return False
        x = np.asarray(x)
        # Compare with a tolerance: sample() rounds to 2 decimals,
        # so the sum may not be exactly 1.
        if not np.isclose(np.sum(x), 1, atol=0.05):
            return False
        return bool(np.all((x >= 0) & (x <= 1)))

    def __repr__(self):
        return f"ProbabilityBox({self.size})"

    def __eq__(self, other):
        return isinstance(other, ProbabilityBox) and self.size == other.size

I am using this space as the action space of a custom environment. I am unable to train the agent with stable-baselines3 because it does not support custom spaces.

  1. Is there an alternate way to model this scenario so that I can work with stable-baselines3?
  2. What other libraries/frameworks can I use to train an RL agent that supports custom spaces?

2 Answers


Stable-Baselines3 does support custom environments. See the docs.


stable-baselines3 does not support action spaces other than Discrete / MultiDiscrete / Box, and there is really no need for a custom action space: your action(s) are fully determined by the output of your neural network, which is consequently either a natural/real number or a vector of them, so the three classes above fully cover your case.
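Following that idea, one way to keep a standard Box action space and still get actions that sum to 1 is to normalize the raw action inside the environment's step(). A minimal sketch (to_probabilities is a hypothetical helper name; it maps any real-valued vector onto the probability simplex via softmax):

    import numpy as np

    def to_probabilities(raw_action):
        """Map an unbounded action vector to a vector in [0, 1] summing to 1 (softmax)."""
        z = np.asarray(raw_action, dtype=np.float64)
        z = z - z.max()          # shift for numerical stability; result is unchanged
        e = np.exp(z)
        return e / e.sum()

The environment would then declare, e.g., gym.spaces.Box(low=-1, high=1, shape=(n,)) as its action space and call to_probabilities on every incoming action, so the agent never needs a custom space.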
