OpenAI Gym is a platform for reinforcement learning research that aims to provide a general-intelligence benchmark with a wide variety of environments.
Questions tagged [openai-gym]
1033 questions
4
votes
1 answer
What are the Discrete and Box datatypes used by OpenAI's Gym?
They both seem like matrices/arrays.
I'm not much of a Python guy; are these generic datatypes used within Python, or specific to Gym?
I'm reading through the API and am still confused about what these actually are.
For example (from the…

Tobiq
- 2,489
- 19
- 38
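On the question above: Discrete and Box are not built-in Python types; they are Gym's own space classes (gym.spaces.Discrete and gym.spaces.Box) describing valid actions and observations. Below is a minimal, stdlib-only sketch of their semantics; the real classes are NumPy-backed and support arbitrary shapes and dtypes, so treat these class bodies as illustrative, not as Gym's implementation.

```python
import random

class Discrete:
    """Sketch of gym.spaces.Discrete: a finite set of integer actions {0, ..., n-1}."""
    def __init__(self, n):
        self.n = n
    def sample(self):
        return random.randrange(self.n)
    def contains(self, x):
        return isinstance(x, int) and 0 <= x < self.n

class Box:
    """Sketch of gym.spaces.Box: an axis-aligned box of real-valued vectors.
    The real class takes NumPy arrays for low/high and supports any shape."""
    def __init__(self, low, high, shape):
        self.low, self.high, self.shape = low, high, shape
    def sample(self):
        return [random.uniform(self.low, self.high) for _ in range(self.shape[0])]
    def contains(self, x):
        return len(x) == self.shape[0] and all(self.low <= v <= self.high for v in x)

action_space = Discrete(4)           # e.g. up/down/left/right
obs_space = Box(-1.0, 1.0, (3,))     # e.g. a 3-dimensional continuous observation
assert action_space.contains(action_space.sample())
assert obs_space.contains(obs_space.sample())
```

In real Gym code these appear as `env.action_space` and `env.observation_space`, and `space.sample()` draws a random valid element — handy for a random-agent baseline.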
4
votes
0 answers
Python pyglet RGB image is just black
I want to display a numpy matrix as an image in pyglet. I know that I have to use the ImageData interface for that, but when I do, the image that is shown is just plain black. The same thing happens if I load an image with the Image interface; the…

Jan B.
- 77
- 1
- 7
4
votes
0 answers
What is the correct way to pass training data into a custom openai-gym environment?
I am creating a custom gym environment, similar to this trading one or this soccer one. The custom environment is being set up to train a PPO reinforcement learning model using stable-baselines.
My issue is, the time it takes between batch updates…

PyRsquared
- 6,970
- 11
- 50
- 86
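For the custom-environment question above, one common pattern (a dependency-free sketch — the class name, attributes, and reward rule here are hypothetical, and the gym.Env base class is omitted) is to hand the dataset to the environment's constructor once, then have reset/step index into it, so nothing is reloaded between batch updates:

```python
class TradingEnv:
    """Sketch of a data-backed custom environment. The dataset is passed to
    the constructor once; episodes then index into it with a cursor."""
    def __init__(self, prices):
        self.prices = prices        # e.g. a list/array of price ticks
        self.t = 0
    def reset(self):
        self.t = 0
        return self.prices[self.t]  # initial observation
    def step(self, action):
        self.t += 1
        done = self.t >= len(self.prices) - 1
        obs = self.prices[self.t]
        # Toy reward: position (0 or 1) times the price change this step.
        reward = float(action) * (self.prices[self.t] - self.prices[self.t - 1])
        return obs, reward, done, {}

env = TradingEnv([100.0, 101.0, 99.5, 102.0])
obs = env.reset()
total, done = 0.0, False
while not done:
    obs, r, done, info = env.step(1)   # always "long" in this toy rollout
    total += r
```

A real version would subclass gym.Env and declare action_space/observation_space, but the constructor-injection idea is the same.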
4
votes
0 answers
Random seeding in OpenAI Gym
I have a question about seeding in OpenAI Gym and using it in custom environments.
Let's take the lunar lander environment for example, the default seeding function is:
def seed(self, seed=None):
self.np_random, seed =…

PySeeker
- 818
- 8
- 12
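The excerpt above is cut off, but the classic Gym pattern it refers to is `self.np_random, seed = gym.utils.seeding.np_random(seed)`: the env keeps its own RNG, and all of its randomness flows through that RNG. Here is a dependency-free sketch of the same idea, with a stdlib random.Random standing in for the NumPy RNG (class name and the reset behavior are illustrative, not LunarLander's):

```python
import random

class MyEnv:
    """Sketch of the Gym seeding pattern: one per-environment RNG,
    seeded via seed(), used for every stochastic decision."""
    def __init__(self):
        self.rng = random.Random()
    def seed(self, seed=None):
        if seed is None:
            seed = random.randrange(2**31)
        self.rng.seed(seed)      # all stochasticity should flow through self.rng
        return [seed]            # Gym's seed() returns the list of seeds used
    def reset(self):
        # e.g. a randomized start state, drawn from the seeded RNG
        return self.rng.uniform(-0.05, 0.05)

env = MyEnv()
env.seed(42)
a = env.reset()
env.seed(42)
b = env.reset()
assert a == b   # same seed, same initial state
```

The payoff for a custom env is reproducibility: seeding before each experiment makes rollouts repeatable, which matters when debugging training.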
4
votes
1 answer
Cartpole-v0 loss increasing using DQN
Hi, I'm trying to train a DQN to solve Gym's CartPole problem.
For some reason the loss looks like this (orange line). Can y'all take a look at my code and help with this? I've played around with the hyperparameters a decent bit, so I don't think…

Alex
- 159
- 3
- 16
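For context on the question above: the DQN loss is the squared error between Q(s, a) and a one-step TD target computed from a separate, periodically synced target network, and a rising loss often traces back to that target moving every step. This is a sketch of just the target computation (plain Python, not the poster's code; the Q-values are stand-in numbers):

```python
GAMMA = 0.99

def td_target(reward, done, next_q_values, gamma=GAMMA):
    """One-step TD target used in the DQN loss:
    r + gamma * max_a' Q_target(s', a').
    next_q_values should come from the frozen target network, and
    bootstrapping is cut off at terminal states."""
    if done:
        return reward
    return reward + gamma * max(next_q_values)

# e.g. CartPole: reward is +1 per step until the pole falls
terminal_target = td_target(1.0, True, [0.3, 0.7])      # just the reward
bootstrap_target = td_target(1.0, False, [0.3, 0.7])    # 1 + 0.99 * 0.7
```

Note that the loss in DQN is not the quantity being maximized (that's the return), so it need not decrease monotonically; but unbounded growth usually points at the target network syncing too often or the `done` cutoff being missed.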
4
votes
3 answers
Getting "AttributeError: 'ImageData' object has no attribute 'data'" in headless gym Jupyter Python 2.7
I am trying to run gym on a headless server and render it in Jupyter. Python version 2.7.
I have started the jupyter using xvfb-run -a -s "-screen 0 1400x900x24" jupyter notebook
Below is the Jupyter cell that I run.
import matplotlib.pyplot as…

Loganathan
- 903
- 2
- 10
- 23
4
votes
1 answer
OpenAI gym render OSError
I am trying to learn Q-learning using OpenAI's Gym module, but when I try to render my environment, I get the following error:
OSError Traceback (most recent call last)
in…

Vinay Bharadhwaj
- 165
- 1
- 17
4
votes
1 answer
Continuous DDPG doesn't seem to converge on a two-dimensional spatial search problem ("Hunt the Thimble")
I attempted to use continuous action-space DDPG in order to solve the following control problem. The goal is to walk towards an initially unknown position within a bordered, two-dimensional area by being told how far one is from the target position…

a_guest
- 34,165
- 12
- 64
- 118
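One frequent culprit in tasks like the one above, where the agent is only told its distance to the target, is a reward signal with no useful gradient. A common remedy (illustrative shaping only — this is not the poster's setup, and the function and coordinates here are hypothetical) is to reward the *decrease* in distance per step rather than the raw distance, so DDPG gets dense feedback from the first transition:

```python
import math

def shaped_reward(prev_pos, pos, target):
    """Dense reward for a 'hunt the thimble' style task: the decrease in
    Euclidean distance to the target this step. Positive when the agent
    moves toward the target, negative when it moves away."""
    d_prev = math.dist(prev_pos, target)
    d_now = math.dist(pos, target)
    return d_prev - d_now

# A step that moves toward the target earns a positive reward.
r = shaped_reward((0.0, 0.0), (0.3, 0.4), (3.0, 4.0))
```

Potential-based shaping of this form leaves the optimal policy unchanged while making the landscape far easier for an actor-critic method to climb.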
4
votes
1 answer
How do I apply Q-learning to an OpenAI Gym environment where multiple actions are taken at each time step?
I have successfully used Q-learning to solve some classic reinforcement learning environments from OpenAI Gym (i.e. Taxi, CartPole). These environments allow for a single action to be taken at each time step. However, I cannot find a way to solve…

Pierre
- 41
- 3
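One common workaround for the question above (not the only one, and the sub-action names here are made up for illustration): tabular Q-learning needs a single index per action, so enumerate the Cartesian product of the simultaneous sub-actions and treat each tuple as one composite action.

```python
from itertools import product

# Hypothetical sub-action spaces: a throttle choice and a steering choice
# that must both be chosen at every time step.
throttle = [0, 1, 2]
steering = [-1, 0, 1]

# Each tuple in the product becomes a single composite action, so a
# Q-table indexed by (state, composite_index) works unchanged.
composite_actions = list(product(throttle, steering))

# Action selection picks an index, then decodes it back into the
# tuple of simultaneous sub-actions to send to the environment.
idx = 4
a_throttle, a_steer = composite_actions[idx]
```

The cost is that the table grows multiplicatively with the number of sub-actions, which is exactly why factored or policy-gradient methods get recommended once the product gets large.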
4
votes
1 answer
How do I modify the gym's environment CarRacing-v0?
I was wondering if anyone knows whether there is a tutorial or any information about how to modify the CarRacing-v0 environment from OpenAI Gym, more exactly how to create different roads; I haven't found anything about it.
What I want to do is to create…

Mike W
- 1,303
- 1
- 21
- 31
4
votes
2 answers
How can the FrozenLake OpenAI-Gym environment be solved with no intermediate rewards?
I'm looking at the FrozenLake environments in openai-gym. In both of them, there are no rewards, not even negative rewards, until the agent reaches the goal. Even if the agent falls through the ice, there is no negative reward -- although the…

RussAbbott
- 2,660
- 4
- 24
- 37
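On the sparse-reward question above: Q-learning handles FrozenLake's zero-everywhere reward because the discounted bootstrap r + γ·max Q(s′) leaks the terminal reward backwards through the table, one visited transition at a time. A self-contained toy demonstration on a 5-state corridor (this is a sketch of the mechanism, not FrozenLake itself; states, actions, and hyperparameters are invented for the example):

```python
import random

# Toy 5-state corridor with FrozenLake-style sparse reward: 0 everywhere,
# +1 only on reaching the goal (state 4). Actions: 0 = left, 1 = right.
N, GOAL, GAMMA, ALPHA = 5, 4, 0.9, 0.5
Q = [[0.0, 0.0] for _ in range(N)]

rng = random.Random(0)
for _ in range(500):
    s = 0
    while s != GOAL:
        a = rng.randrange(2)                       # explore uniformly
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0
        target = r if s2 == GOAL else GAMma * max(Q[s2]) if False else (r if s2 == GOAL else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

# After training, even the start state prefers stepping toward the goal:
# the goal reward has propagated back as GAMMA**3 ≈ 0.729 at Q[0][1].
best_action_at_start = max(range(2), key=lambda a: Q[0][a])
```

So "no intermediate rewards" slows learning (the first useful update only happens on a transition into the goal) but does not prevent it; reward shaping or a falling-through-the-ice penalty just speeds up the propagation.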
4
votes
3 answers
No module named 'atari_py' after installation
I am currently trying to use the Atari module for gym/openai. I have successfully managed to install the dependency.
Patricks-MacBook-Pro:~ patrickmaynard$ python3.6 -m pip install gym[atari]
Requirement already satisfied: gym[atari] in…

Patrick Maynard
- 314
- 3
- 18
4
votes
0 answers
OpenAI Gym installation error
I've tried to install OpenAI Gym on Windows with pip, but 2 errors were raised.
First I cloned the repository and executed:
git clone https://github.com/openai/gym.git
cd gym
pip install -e .
Up to here, all good. I can test the first environments.
But…

Salvador Vigo
- 397
- 4
- 16
4
votes
2 answers
How to solve "Env not found" error in OpenAI Gym?
I am using gym version '0.9.7', mujoco_py version 1.50.1.41, and Python 3.6.1 (Anaconda 4.4.0), installed on a Mac.
When trying:
import gym
env = gym.make('Humanoid-v1')
I am getting the following error:
Traceback (most recent call last):
File…

avithecatese
- 63
- 1
- 2
- 5
4
votes
0 answers
Explaining environments in Roboschool Half-Cheetah
I have some questions regarding the roboschool Half-Cheetah.
I see that the observation space for Half-Cheetah is 26. Can anyone tell me what each value is for? I only counted 18. (Also, some of the values seem to remain 0 for all timesteps.)
In…

Claire
- 41
- 1
- 5