Questions tagged [openai-gym]

OpenAI Gym is a platform for reinforcement learning research that aims to provide a general-intelligence benchmark with a wide variety of environments.

1033 questions
9 votes · 3 answers

How should OpenAI environments (gyms) use env.seed(0)?

I've created a very simple OpenAI gym environment (banana-gym) and wonder if / how I should implement env.seed(0). See https://github.com/openai/gym/issues/250#issuecomment-234126816 for an example.
Martin Thoma · 124,992 · 159 · 614 · 958
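The pattern discussed in that issue thread is to seed a per-env RNG via gym.utils.seeding. A minimal sketch, assuming the pre-0.26 Gym API (BananaEnv's internals here are illustrative, not the actual banana-gym code):

```python
import gym
from gym.utils import seeding

class BananaEnv(gym.Env):
    """Illustrative env showing the classic seeding pattern."""

    def __init__(self):
        self.np_random = None
        self.seed()

    def seed(self, seed=None):
        # seeding.np_random returns a seeded RNG plus the seed actually used,
        # so env.seed(0) makes every source of env randomness reproducible.
        self.np_random, seed = seeding.np_random(seed)
        return [seed]

    def step(self, action):
        # Draw all randomness from self.np_random, never the global numpy RNG.
        reward = float(self.np_random.uniform(0, 1))
        return 0, reward, False, {}

    def reset(self):
        return 0
```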
9 votes · 2 answers

Observations meaning - OpenAI Gym

I want to know the specification of the observations of CartPole-v0 in OpenAI Gym (https://gym.openai.com/). For example, the following code outputs an observation. One observation looks like [-0.061586 -0.75893141 0.05793238 1.15547541]. I want to…
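For reference, the four components are cart position, cart velocity, pole angle (radians), and pole angular velocity. A small sketch that prints them with labels, assuming the old reset()-returns-observation API:

```python
import gym

env = gym.make("CartPole-v0")
obs = env.reset()  # older Gym: reset() returns just the observation

# CartPole-v0 observations are [cart position, cart velocity,
# pole angle (radians), pole angular velocity].
labels = ["cart position", "cart velocity", "pole angle", "pole angular velocity"]
for name, value in zip(labels, obs):
    print(f"{name}: {value:.4f}")
env.close()
```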
8 votes · 1 answer

Stable Baselines3 RuntimeError: mat1 and mat2 must have the same dtype

I am trying to implement SAC with a custom environment in Stable Baselines3, and I keep getting the error in the title. The error occurs with any off-policy algorithm, not just SAC. Traceback: File "\src\main.py", line 70, in…
Theo Michail · 157 · 1 · 1 · 11
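A common cause of this error is a custom env that declares float32 spaces but returns float64 arrays (numpy's default), which later surfaces as a torch dtype mismatch inside the SB3 networks. A minimal sketch of keeping the two consistent (MyEnv is illustrative):

```python
import numpy as np
import gym
from gym import spaces

class MyEnv(gym.Env):
    """Illustrative env: declared dtype and returned arrays must match."""

    def __init__(self):
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

    def reset(self):
        # astype(np.float32) avoids numpy's default float64, which would
        # reach SB3's torch layers as the wrong dtype.
        return np.zeros(4, dtype=np.float32)

    def step(self, action):
        obs = np.random.uniform(-1, 1, size=4).astype(np.float32)
        return obs, 0.0, False, {}
```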
8 votes · 3 answers

OpenAI Gym - AttributeError: module 'contextlib' has no attribute 'nullcontext'

I'm running into this error when trying to run a command from a Docker container on Google Compute Engine. Here's the stacktrace: Traceback (most recent call last): File "train.py", line 16, in from stable_baselines.ppo1 import…
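contextlib.nullcontext was added in Python 3.7, so this error usually means the container image ships Python 3.6 or older. Besides upgrading the image's Python, a small backport shim is one workaround; a sketch, to be run before importing the affected library:

```python
import contextlib

# Patch in a minimal nullcontext on interpreters older than Python 3.7.
if not hasattr(contextlib, "nullcontext"):
    @contextlib.contextmanager
    def _nullcontext(enter_result=None):
        yield enter_result
    contextlib.nullcontext = _nullcontext
```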
8 votes · 2 answers

rllib use custom registered environments

The RLlib docs provide some information about how to create and train a custom environment. There is some information about registering that environment, but I guess it needs to work differently than Gym registration. I'm testing this out working with…
KindaTechy · 1,041 · 9 · 25
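RLlib does use its own registry rather than Gym's: you hand a creator function to ray.tune.registry.register_env and then reference the registered name in the trainer config. A minimal sketch (MyEnv is illustrative):

```python
import gym
from ray.tune.registry import register_env

class MyEnv(gym.Env):
    def __init__(self, env_config):
        # env_config is the dict RLlib forwards from the trainer config.
        self.action_space = gym.spaces.Discrete(2)
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(4,))

    def reset(self):
        return self.observation_space.sample()

    def step(self, action):
        return self.observation_space.sample(), 0.0, False, {}

# Register under a name, then use "env": "my_env" in the trainer config.
register_env("my_env", lambda config: MyEnv(config))
```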
8 votes · 5 answers

How to make an openai-gym environment start from a specific state, not the one from `env.reset()`?

While trying to implement an RL agent in an openai-gym environment, I found that all agents seem to be trained from the same initial state, env.reset(), i.e. import gym env =…
Hu Xixi · 1,799 · 2 · 21 · 29
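For the classic-control envs, one common workaround is to overwrite the state attribute after reset(); this works because those envs keep their state as a plain attribute, but it is version-dependent and not part of the official API:

```python
import numpy as np
import gym

env = gym.make("CartPole-v0")
env.reset()

# CartPole's state is (cart position, cart velocity, pole angle,
# pole angular velocity); overwrite it to start from a chosen state.
env.unwrapped.state = np.array([0.1, 0.0, -0.05, 0.0])
obs, reward, done, info = env.step(env.action_space.sample())
```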
8 votes · 1 answer

openai gym box space configuration

I need an observation space ranging over [0, inf). I'm new to OpenAI gym and not sure what the format should be: from gym import spaces; spaces.Box(np.array(0), np.array(np.inf)) # Box(); spaces.Box(0, np.inf, shape=(1,)) # Box(1,)
Schalton · 2,867 · 2 · 32 · 44
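A sketch of the usual spelling: scalar bounds broadcast over shape, and np.inf is a valid bound:

```python
import numpy as np
from gym import spaces

# A 1-D observation in [0, inf).
obs_space = spaces.Box(low=0.0, high=np.inf, shape=(1,), dtype=np.float32)
print(obs_space)  # prints as Box(1,) in older gym versions
print(obs_space.contains(np.array([3.5], dtype=np.float32)))  # True
```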
8 votes · 2 answers

Is there a way to disable video rendering in OpenAI gym while still recording it?

Is there a way to disable video rendering in OpenAI gym while still recording it? When I use the atari environments and the Monitor wrapper, the default behavior is to not render the video (the video is still recorded and saved to disk). However in…
niko · 1,128 · 1 · 11 · 25
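With the old Monitor wrapper, no on-screen window should open unless you call env.render() yourself; videos are captured from rgb_array frames behind the scenes. A sketch, assuming the pre-0.21 gym.wrappers.Monitor API and installed Atari dependencies:

```python
import gym
from gym import wrappers

env = gym.make("SpaceInvaders-v0")
# video_callable picks which episodes get a video file; here, all of them.
env = wrappers.Monitor(env, "./recording", force=True,
                       video_callable=lambda episode_id: True)
obs = env.reset()
done = False
while not done:
    # No env.render() call, so nothing is drawn on screen;
    # the video is still written to ./recording.
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()
```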
8 votes · 5 answers

OpenAI gym: How to get pixels in CartPole-v0

I would like to access the raw pixels in the OpenAI gym CartPole-v0 environment without opening a render window. How do I do this? Example code: import gym env = gym.make("CartPole-v0") env.reset() img = env.render(mode='rgb_array', close=True) #…
Toke Faurby · 5,788 · 9 · 41 · 62
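In most gym versions env.render(mode='rgb_array') returns the frame as a numpy array; whether a window also opens depends on the version and rendering backend, and headless machines may need a virtual display such as xvfb. A sketch:

```python
import gym

env = gym.make("CartPole-v0")
env.reset()

# Returns the frame as an (H, W, 3) uint8 array. Depending on the gym
# version, classic-control envs may still open a pyglet window here.
img = env.render(mode="rgb_array")
print(img.shape, img.dtype)
env.close()
```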
7 votes · 1 answer

ImportError: cannot import name 'rendering' from 'gym.envs.classic_control'

I'm working with RL agents and was trying to replicate the findings of this paper, wherein they make a custom parkour environment based on OpenAI Gym. However, when trying to render this environment I run into this error. import numpy as np import…
Manu Dwivedi · 87 · 1 · 4
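The pyglet-based rendering module was removed in newer Gym releases (around 0.22), so the usual workaround is pinning an older version or porting the viewer code to pygame. A guarded-import sketch:

```python
try:
    from gym.envs.classic_control import rendering
except ImportError:
    # 'rendering' no longer ships with newer Gym. Workarounds: pin an older
    # release (e.g. pip install "gym==0.21.0") or port the viewer to pygame.
    rendering = None
```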
7 votes · 1 answer

What is the action_space for?

I'm making a custom environment in OpenAI Gym and really don't understand what action_space is for, or what I should put in it. To be precise, I don't know what action_space is; I haven't used it in any code. And I didn't find anything on…
Denis Boyko · 91 · 1 · 1 · 5
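action_space declares the set of valid actions, so that agents, wrappers, and env.action_space.sample() know what an action looks like. A minimal sketch (MyEnv is illustrative):

```python
import numpy as np
import gym
from gym import spaces

class MyEnv(gym.Env):
    def __init__(self):
        # Three discrete actions: 0, 1, or 2.
        self.action_space = spaces.Discrete(3)
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(4,),
                                            dtype=np.float32)

    def step(self, action):
        # Agents are expected to send actions the space considers valid.
        assert self.action_space.contains(action)
        return np.zeros(4, dtype=np.float32), 0.0, False, {}

    def reset(self):
        return np.zeros(4, dtype=np.float32)

env = MyEnv()
print(env.action_space.sample())  # e.g. 1
```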
7 votes · 1 answer

'OSError: [WinError 126] The specified module could not be found' when using OpenAI Gym-Atari on Windows 10

I am just trying to execute this simple trial code: import gym env = gym.make('SpaceInvaders-v0') env.reset() for _ in range(1000): env.step(env.action_space.sample()) env.render('human') env.close() And I am getting an error that…
7 votes · 1 answer

Gym (OpenAI) environment action space depends on the actual state

I'm using the gym toolkit to create my own env and keras-rl to use my env within an agent. The problem is that my action space changes; it depends on the actual state. For example, I have 46 possible actions, but given a certain state only 7 are…
davide · 91 · 7
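Gym assumes a fixed action_space, so the usual workaround is to declare the full 46-action space and handle illegal actions explicitly, e.g. as penalized no-ops, or by masking them in the agent. A sketch; valid_actions() is a hypothetical helper, not part of the Gym API:

```python
import numpy as np
import gym
from gym import spaces

class MaskedEnv(gym.Env):
    """Illustrative env: fixed Discrete(46) space, per-state legality check."""

    def __init__(self):
        self.action_space = spaces.Discrete(46)
        self.observation_space = spaces.Box(0.0, 1.0, shape=(4,), dtype=np.float32)
        self._obs = np.zeros(4, dtype=np.float32)

    def valid_actions(self):
        # Hypothetical: derive the legal subset from the current state.
        return [0, 3, 7, 12, 20, 31, 45]

    def step(self, action):
        if action not in self.valid_actions():
            # Penalized no-op: the agent learns to avoid illegal actions.
            return self._obs, -1.0, False, {}
        return self._obs, 0.0, False, {}

    def reset(self):
        return self._obs
```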
7 votes · 2 answers

Is there a way to slow down the game environment with OpenAI's gym?

When I render an environment with gym, it plays the game so fast that I can't see what is going on. It shouldn't be a problem with the code, because I've tried a lot of different ones.
sprmbng · 71 · 2
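The usual trick is simply to sleep between rendered frames. A sketch, assuming the old 4-tuple step API:

```python
import time
import gym

env = gym.make("CartPole-v0")
env.reset()
for _ in range(200):
    env.render()
    time.sleep(0.05)  # ~20 frames per second; tune to taste
    _, _, done, _ = env.step(env.action_space.sample())
    if done:
        env.reset()
env.close()
```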
7 votes · 1 answer

python OpenAI gym monitor creates json files in the recording directory

I am implementing value iteration on the gym CartPole-v0 environment and would like to record the video of the agent's actions in a video file. I have been trying to implement this using the Monitor wrapper but it generates json files instead of a…
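The JSON files are Monitor's stats and manifest output; the actual .mp4 videos are written only when ffmpeg is available on the PATH, so installing ffmpeg is usually the missing piece. A sketch, assuming the old Monitor wrapper:

```python
import gym
from gym import wrappers

env = gym.make("CartPole-v0")
# Monitor always writes JSON stats/manifest files; videos appear alongside
# them only if ffmpeg can be found. force=True overwrites old recordings.
env = wrappers.Monitor(env, "./recording", force=True,
                       video_callable=lambda episode_id: True)
env.reset()
done = False
while not done:
    _, _, done, _ = env.step(env.action_space.sample())
env.close()
```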