OpenAI Gym is a platform for reinforcement learning research that aims to provide a general-intelligence benchmark with a wide variety of environments.
Questions tagged [openai-gym]
1033 questions
7 votes, 1 answer
Difficulties with AI-Gym Python graphics in Jupyter notebooks
I am trying to get AI-Gym demos to display in Jupyter notebooks. I get good results for the Atari demo Breakout-v0 and a difficult error message for the cart-pole demo CartPole-v0. Both work fine outside notebooks. The following are the minimal…

Reb.Cabin
- 5,426
- 3
- 35
- 64
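A common workaround for notebook rendering is to ask the environment for RGB frames and redraw them with matplotlib instead of letting gym open a native window. A minimal sketch, assuming a classic gym release whose render() accepts mode='rgb_array':

import gym
import matplotlib.pyplot as plt
from IPython import display

env = gym.make('CartPole-v0')
env.reset()
img = plt.imshow(env.render(mode='rgb_array'))       # set up the figure once
for _ in range(100):
    img.set_data(env.render(mode='rgb_array'))       # redraw the current frame
    display.display(plt.gcf())
    display.clear_output(wait=True)
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        env.reset()
env.close()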
7 votes, 0 answers
How can I properly run OpenAI gym with nvidia-docker and see the environments
So I'm trying to run OpenAI gym in a docker container, but it looks like this:
Notice the pong window has a weird render issue where it's repeating things and the colors are off. Here is space invaders:
NOTE FOR "NOT A PROGRAMMING ISSUE" PEOPLE:…

AwokeKnowing
- 7,728
- 9
- 36
- 47
7 votes, 2 answers
ffmpeg is not being detected by Spyder
Running almost any code from OpenAI Gym in Spyder under Anaconda (for instance this code: https://gym.openai.com/evaluations/eval_y5dnhk0ZSMqlqJKBz5vJQw),
I run into the following error message:
DependencyNotInstalled: Found neither the ffmpeg nor…

Massyanya
- 2,844
- 8
- 28
- 37
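The DependencyNotInstalled error comes from gym's video recorder (the Monitor wrapper), which shells out to ffmpeg or avconv; installing ffmpeg into the same Anaconda environment that Spyder runs in (for example conda install -c conda-forge ffmpeg) is the usual fix. A minimal sketch of the wrapper that performs the check, assuming an older gym release that still ships gym.wrappers.Monitor:

import gym
from gym import wrappers

env = gym.make('CartPole-v0')
# Monitor writes .mp4 recordings; it raises DependencyNotInstalled when neither
# ffmpeg nor avconv can be found on the PATH of the running Python process.
env = wrappers.Monitor(env, './recordings', force=True)

obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()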
6 votes, 0 answers
How to create an OpenAI Gym observation space using a grid environment
I have built an environment in Tkinter; how do I create an observation space using this environment? I could not understand how to use the grid coordinates in an array to make the observation space: self.observation_space = spaces.Box(np.array([]), np.array([]),…

zoraiz ali
- 77
- 5
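The np.array arguments to spaces.Box are the element-wise lower and upper bounds of whatever observation vector the environment returns, so for a grid world they just have to cover the coordinate range. A minimal sketch, assuming a hypothetical 5x5 grid where the observation is the agent's (row, col) position:

import numpy as np
from gym import spaces

GRID_SIZE = 5   # hypothetical grid dimensions

# low/high are element-wise bounds on the 2-element observation vector
observation_space = spaces.Box(
    low=np.array([0, 0]),
    high=np.array([GRID_SIZE - 1, GRID_SIZE - 1]),
    dtype=np.float32,
)
print(observation_space.sample())   # a random point inside the bounds

If the positions are strictly integer cells, spaces.MultiDiscrete([GRID_SIZE, GRID_SIZE]) is an alternative worth considering.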
6 votes, 2 answers
OpenAI Gym environment cannot be loaded in Google Colab
I'm trying to train a DQN in google colab so that I can test the performance of the TPU. Unfortunately, I get the following error:
import gym
env = gym.make('LunarLander-v2')
AttributeError: module 'gym.envs.box2d' has no attribute 'LunarLander'
I…

spadel
- 998
- 2
- 16
- 40
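That AttributeError usually means the Box2D bindings failed to import, so gym.envs.box2d ends up with no environments in it; installing the physics backend in the Colab runtime before calling gym.make is the usual fix. A sketch, assuming the missing piece is the box2d-py wheel (or the gym[box2d] extra):

# In a Colab cell, install the Box2D backend first, e.g.
#   !pip install box2d-py          # or: !pip install gym[box2d]
# then restart the runtime so the newly installed module is picked up.
import gym

env = gym.make('LunarLander-v2')
obs = env.reset()
print(env.observation_space.shape)   # (8,)
print(env.action_space.n)            # 4 discrete actions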
6 votes, 1 answer
How to check the actions available in OpenAI gym environment?
When using OpenAI gym, after importing the library with import gym, the action space can be checked with env.action_space. But this gives only the size of the action space. I would like to know what kind of actions each element of the action space…

user12394113
- 381
- 3
- 13
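env.action_space only describes the shape of the space; what each index means is environment specific. Atari environments expose human-readable names on the unwrapped env, while for classic-control environments the mapping is documented in the environment's source. A short sketch:

import gym

env = gym.make('Breakout-v0')
print(env.action_space)                      # Discrete(4): only the size
print(env.unwrapped.get_action_meanings())   # ['NOOP', 'FIRE', 'RIGHT', 'LEFT']

# Classic-control envs have no such helper; e.g. the CartPole-v0 source
# documents 0 = push cart left, 1 = push cart right.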
6 votes, 1 answer
Start OpenAI gym on arbitrary initial state
Does anybody know of any OpenAI Gym environments where we can set the initial state of the game? For example, I found that MountainCarContinuous-v0 can do such a thing, so that we can select the point at which the car starts. However, I am looking for another more…

Student
- 63
- 4
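There is no official reset-to-state API, but for the classic-control environments a common workaround is to call reset() first and then overwrite the state attribute on the unwrapped environment. A sketch of that pattern, assuming the environment stores its state this way (as MountainCarContinuous-v0 does):

import gym
import numpy as np

env = gym.make('MountainCarContinuous-v0')
env.reset()                                    # run the normal bookkeeping first
env.unwrapped.state = np.array([-0.4, 0.0])    # [position, velocity] of our choosing
obs, reward, done, info = env.step(np.array([0.0]))
print(obs)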
6 votes, 1 answer
Python Reinforcement Learning - Tuple Observation Space
I've created a custom openai gym environment with a discrete action space and a somewhat complicated state space. The state space has been defined as a Tuple because it combines some dimensions which are continuous and others which are…

Jeff
- 316
- 2
- 9
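spaces.Tuple is the intended container for such mixed states: each component is itself a space, and sample() and contains() work component-wise. A minimal sketch with hypothetical dimensions; note that many off-the-shelf agents only accept Box or Discrete observations, so Tuple states are often flattened into a single Box before training:

import numpy as np
from gym import spaces

observation_space = spaces.Tuple((
    spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32),  # continuous part
    spaces.Discrete(4),                                            # categorical part
))
print(observation_space.sample())   # e.g. (array([ 0.1, -0.7,  0.3], dtype=float32), 2)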
6 votes, 1 answer
What does spaces.Discrete mean in OpenAI Gym
I am trying to learn the Monte Carlo (MC) method applied to blackjack using OpenAI Gym, and I do not understand these lines:
def __init__(self, natural=False):
    self.action_space = spaces.Discrete(2)
    self.observation_space = spaces.Tuple((
    …

doob
- 77
- 1
- 1
- 5
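spaces.Discrete(n) is simply the set of integers {0, 1, ..., n-1}. In gym's blackjack environment the two actions are 0 = stick and 1 = hit, and the Tuple observation combines the player's sum, the dealer's showing card, and a usable-ace flag. A short sketch:

from gym import spaces

action_space = spaces.Discrete(2)   # {0, 1}: 0 = stick, 1 = hit
print(action_space.n)               # 2
print(action_space.sample())        # a random legal action, 0 or 1

observation_space = spaces.Tuple((
    spaces.Discrete(32),   # player's current sum
    spaces.Discrete(11),   # dealer's showing card
    spaces.Discrete(2),    # usable ace: 0 or 1
))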
6 votes, 2 answers
How can I start the environment from a custom initial state for Mountain Car?
I want to start the continuous Mountain Car environment of OpenAI Gym from a custom initial point. The OpenAI Gym does not provide any method to do that. I looked into the code of the environment and found out that there is an attribute state which…

Mr. Nobody
- 185
- 11
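Besides writing to that state attribute directly, another option is to subclass the environment and override reset() so the custom start point survives wrappers. A sketch, assuming the old gym module layout for the continuous Mountain Car:

import numpy as np
from gym.envs.classic_control.continuous_mountain_car import Continuous_MountainCarEnv

class CustomStartMountainCar(Continuous_MountainCarEnv):
    """Same dynamics, but reset() places the car at a caller-chosen state."""

    def __init__(self, start_state=(-0.5, 0.0)):    # (position, velocity)
        super().__init__()
        self._start_state = np.array(start_state, dtype=np.float32)

    def reset(self):
        self.state = self._start_state.copy()
        return np.array(self.state)

env = CustomStartMountainCar(start_state=(-0.3, 0.0))
obs = env.reset()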
6 votes, 1 answer
"Malformed environment" failure when registering an OpenAI Gym environment
On a Linux PC, I am attempting to create a custom OpenAI Gym environment. I can get through all of the steps from a blog write-up on medium.com, including the pip install -e ., but I get an error at the final step of making the environment, env =…

bbartling
- 3,288
- 9
- 43
- 88
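A "malformed environment" error at registration time usually means the id string does not match gym's required <Name>-v<number> pattern. A registration sketch with hypothetical names (my_envs and MyGridEnv stand in for whatever the blog post's package defines):

import gym
from gym.envs.registration import register

register(
    id='MyGrid-v0',                   # must end in -v<N>; anything else is rejected
                                      # as a malformed environment ID
    entry_point='my_envs:MyGridEnv',  # '<importable module>:<env class>'
    max_episode_steps=200,
)

env = gym.make('MyGrid-v0')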
6 votes, 1 answer
What does gym.make('CartPole-v0') return and how does it work?
I know env=gym.make('CartPole-v0') is of type gym.wrappers.time_limit.TimeLimit
And I also know env is an "instance" of the class in cartpole.py. My question is how, just by giving the name 'CartPole-v0', I get access to the cartpole.py class.…

Diego Orellana
- 994
- 1
- 9
- 20
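gym.make() looks the string up in a global registry that gym/envs/__init__.py fills with register() calls; the registered spec stores an entry_point such as 'gym.envs.classic_control:CartPoleEnv', which make() imports, instantiates, and wraps in TimeLimit. A small sketch of how to see this from the returned object:

import gym

env = gym.make('CartPole-v0')
print(type(env))                     # gym.wrappers.time_limit.TimeLimit
print(type(env.unwrapped))           # gym.envs.classic_control.cartpole.CartPoleEnv
print(env.spec.id)                   # 'CartPole-v0'
print(env.spec.max_episode_steps)    # 200, enforced by the TimeLimit wrapper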
6 votes, 1 answer
Why does the CartPole-v0 reset after 200 steps?
I was working on CartPole-v0 provided by openai gym. I noticed that my program always resets after 200 steps. If I sum all the rewards from an episode, where the maximum reward is 1.0 for each timestep, I never get more than 200. I was wondering if…

Abel
- 77
- 1
- 4
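The 200-step cap is not part of the cart-pole physics; it is the max_episode_steps recorded in the environment's spec and enforced by the TimeLimit wrapper that gym.make() adds. Two common ways around it, sketched below (CartPoleLong-v0 is a hypothetical ID):

import gym
from gym.envs.registration import register

env = gym.make('CartPole-v0')
print(env.spec.max_episode_steps)    # 200: TimeLimit ends the episode here

# Option 1: use the unwrapped environment, which has no step limit.
raw_env = env.unwrapped

# Option 2: register a copy of the environment with a larger limit.
register(
    id='CartPoleLong-v0',
    entry_point='gym.envs.classic_control:CartPoleEnv',
    max_episode_steps=1000,
)
long_env = gym.make('CartPoleLong-v0')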
6 votes, 4 answers
How to interpret the observations of RAM environments in OpenAI gym?
In some OpenAI gym environments, there is a "ram" version. For example: Breakout-v0 and Breakout-ram-v0.
Using Breakout-ram-v0, each observation is an array of length 128.
Question: How can I transform an observation of Breakout-v0 (which is a 160…

Victor
- 2,521
- 2
- 11
- 8
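The -ram-v0 observation is the Atari 2600's 128 bytes of console RAM rather than a derived feature vector, so there is no documented per-byte meaning and no simple pixel-to-RAM transformation; which byte holds, say, the ball's position has to be reverse engineered per game. The same RAM is also reachable from the pixel version through the emulator handle. A short sketch, assuming the atari-py backend:

import gym

ram_env = gym.make('Breakout-ram-v0')
print(ram_env.observation_space.shape)    # (128,): the console's RAM bytes

pixel_env = gym.make('Breakout-v0')
print(pixel_env.observation_space.shape)  # (210, 160, 3): an RGB screen image
print(pixel_env.unwrapped.ale.getRAM().shape)   # (128,): same RAM, read directly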
6 votes, 2 answers
Is there a way to implement an OpenAI's environment, where the action space changes at each step?
Is there a way to implement an OpenAI's environment, where the action space changes at each step?

Abhishek Bhatia
- 9,404
- 26
- 87
- 142
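Gym itself assumes a fixed action_space, so the usual pattern is to declare the space at its maximum size and report which actions are currently legal at each step (for example through the info dict) rather than mutating action_space on the fly; reassigning self.action_space inside step() technically works, but most agent libraries never re-read it. A minimal sketch of such a custom environment (the state logic is purely illustrative):

import gym
from gym import spaces

class VaryingActionsEnv(gym.Env):
    """Toy env whose set of legal actions shrinks as the state grows."""

    MAX_ACTIONS = 5

    def __init__(self):
        self.action_space = spaces.Discrete(self.MAX_ACTIONS)  # superset of all actions
        self.observation_space = spaces.Discrete(10)
        self._state = 0

    def reset(self):
        self._state = 0
        return self._state

    def step(self, action):
        valid = self._valid_actions()
        assert action in valid, f"illegal action {action}, currently legal: {valid}"
        self._state = min(self._state + action, 9)
        done = self._state == 9
        info = {'valid_actions': self._valid_actions()}   # agents read the mask here
        return self._state, 1.0, done, info

    def _valid_actions(self):
        # at least actions {0, 1} stay legal so the episode can always terminate
        return list(range(max(2, self.MAX_ACTIONS - self._state)))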