OpenAI Gym is a platform for reinforcement learning research that aims to provide a general-intelligence benchmark with a wide variety of environments.
Questions tagged [openai-gym]
1033 questions
4
votes
2 answers
Low GPU utilisation when running TensorFlow
I've been doing Deep Reinforcement Learning using TensorFlow and OpenAI Gym. My problem is low GPU utilisation. Googling this issue, I understood that it's wrong to expect much GPU utilisation when training small networks (e.g. for training MNIST).…

Nilesh PS
- 356
- 3
- 8
4
votes
1 answer
Why do keras-rl examples always choose linear activation in the output layer?
I'm a complete newbie to Reinforcement Learning, and I have a question about the choice of activation function for the output layer of the keras-rl agents. In all the examples provided by keras-rl…

uruz7_arx8
- 43
- 4
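A hedged note on why this comes up: keras-rl's DQN-style examples predict one Q-value per action, and Q-values are unbounded, so the output layer is linear rather than softmax or sigmoid. A minimal sketch of such a model (the 4-dimensional observation and 2 actions are assumptions matching a CartPole-style task):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

nb_actions = 2        # assumed: environment with two discrete actions
obs_shape = (4,)      # assumed: 4-dimensional observation vector

model = Sequential([
    Flatten(input_shape=(1,) + obs_shape),   # keras-rl feeds a window of observations
    Dense(16, activation="relu"),
    Dense(16, activation="relu"),
    Dense(nb_actions, activation="linear"),  # one unbounded Q-value per action
])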
4
votes
3 answers
OpenAI Gym: How to get a complete list of ATARI environments
I have installed OpenAI Gym and the ATARI environments. I know that I can find all the ATARI games in the documentation, but is there a way to do this in Python, without printing any other environments (e.g. NOT the classic control environments)?

Toke Faurby
- 5,788
- 9
- 41
- 62
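A sketch of one way to do this in older gym releases, where the registry exposes .all() and every spec records its entry point; in newer gym/Gymnasium the registry is a plain dict keyed by id, so the filter changes slightly:
from gym import envs

# Keep only environments whose entry point lives in gym's atari module,
# e.g. 'gym.envs.atari:AtariEnv'; classic control, Box2D, etc. are filtered out.
atari_ids = [
    spec.id
    for spec in envs.registry.all()
    if "atari" in str(spec.entry_point).lower()
]
print("\n".join(sorted(atari_ids)))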
4
votes
1 answer
Create Browser Environment in OpenAI Universe
How do I create a new environment in OpenAI Universe that uses my website to perform actions?
I tried with the Dusk game, and it works well.

Nandakumar
- 1,071
- 2
- 11
- 30
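For reference, the Dusk game the question mentions is wired up roughly like the (now archived) Universe quick-start sketched below; wrapping your own website means building a custom VNC/Docker remote, which this sketch does not cover and the project no longer supports:
import gym
import universe  # importing universe registers its environments with gym

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)           # launch one local Docker-based remote
observation_n = env.reset()

while True:
    # hold the up-arrow key in every sub-environment
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()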
4
votes
1 answer
OpenAI Gym: Setting is_slippery=False in FrozenLake-v0
In OpenAI Gym, I want to make FrozenLake-v0 work as a deterministic problem, so I need to set the variable is_slippery=False.
How can I set it to False while initializing the environment?
Reference to variable in official code

Prabhat Doongarwal
- 176
- 3
- 15
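Two common ways to get a deterministic FrozenLake, sketched below: pass the constructor kwarg straight through gym.make, or register a separate non-slippery variant (the id FrozenLakeNotSlippery-v0 is just an example name):
import gym
from gym.envs.registration import register

# Option 1: gym.make forwards extra kwargs to the environment constructor.
env = gym.make('FrozenLake-v0', is_slippery=False)

# Option 2: register a dedicated deterministic variant under a new id.
register(
    id='FrozenLakeNotSlippery-v0',
    entry_point='gym.envs.toy_text:FrozenLakeEnv',
    kwargs={'map_name': '4x4', 'is_slippery': False},
)
env = gym.make('FrozenLakeNotSlippery-v0')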
3
votes
0 answers
Stable-Baselines3 and PettingZoo
I am trying to understand how to train agents in a PettingZoo environment using the single-agent PPO algorithm implemented in stable-baselines3.
I'm following this tutorial where the agents act in a cooperative environment and they are all trained…

Onil90
- 171
- 1
- 8
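A hedged sketch of the usual parameter-sharing recipe (based on the Pistonball tutorial the question likely follows): SuperSuit converts a parallel PettingZoo environment into a vectorized env that SB3's PPO accepts. The exact wrapper version suffixes are assumptions and may differ between releases:
import supersuit as ss
from stable_baselines3 import PPO
from pettingzoo.butterfly import pistonball_v6

env = pistonball_v6.parallel_env()
env = ss.color_reduction_v0(env, mode="B")          # grayscale (blue channel)
env = ss.resize_v1(env, x_size=84, y_size=84)
env = ss.frame_stack_v1(env, 3)
env = ss.pettingzoo_env_to_vec_env_v1(env)          # each agent becomes one vec-env copy
env = ss.concat_vec_envs_v1(env, 4, num_cpus=1, base_class="stable_baselines3")

model = PPO("CnnPolicy", env, verbose=1)            # one shared policy for all agents
model.learn(total_timesteps=100_000)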
3
votes
1 answer
"ValueError: setting an array element with a sequence" when trying to train model with OpenAI Gym
I'm trying to train an RL agent to play the Car Racing environment with OpenAI Gym and have been using the following code:
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3.common.evaluation…

rjuri
- 133
- 7
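Not a diagnosis of that ValueError, just a minimal sketch of the usual CarRacing + SB3 setup for comparison (the env id version is an assumption): the environment is wrapped in a DummyVecEnv via a lambda, and PPO gets a CNN policy because the observations are images:
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv

env = DummyVecEnv([lambda: gym.make("CarRacing-v0")])  # env id version may differ
model = PPO("CnnPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)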
3
votes
2 answers
gym_super_mario_bros (7.3.0) - ValueError: not enough values to unpack (expected 5, got 4)
I'm running Python 3 (3.8.10) and am attempting a tutorial with the gym_super_mario_bros (7.3.0) and nes_py libraries. I followed various tutorials' code and tried on multiple computers but get an error. I have tried to adjust some of the parameters…

evenspaghetti
- 33
- 3
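This particular mismatch usually means the tutorial targets the new five-value step API (obs, reward, terminated, truncated, info) while nes_py / gym_super_mario_bros still returns the old four-tuple. A hedged sketch that unpacks four values:
import gym_super_mario_bros
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT
from nes_py.wrappers import JoypadSpace

env = gym_super_mario_bros.make("SuperMarioBros-v0")
env = JoypadSpace(env, SIMPLE_MOVEMENT)

done = True
for _ in range(1000):
    if done:
        state = env.reset()
    # old gym API: four return values, not five
    state, reward, done, info = env.step(env.action_space.sample())
env.close()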
3
votes
2 answers
Dict Observation Space for Stable Baselines3 Not Working
I've created a minimal reproducible example below, this can be run in a new Google Colab notebook for ease. Once the first install finishes, just Runtime > Restart and Run All for it to take effect.
I've made a simple roulette game environment below…

wildcat89
- 1,159
- 16
- 47
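A hedged sketch of the usual requirement: SB3 handles gym.spaces.Dict observations only with the "MultiInputPolicy" policy class. The space names and roulette-flavoured placeholders below are assumptions, not the asker's actual environment:
import numpy as np
import gym
from gym import spaces
from stable_baselines3 import PPO

class RouletteEnv(gym.Env):
    def __init__(self):
        super().__init__()
        self.action_space = spaces.Discrete(37)
        self.observation_space = spaces.Dict({
            "bankroll": spaces.Box(low=0.0, high=np.inf, shape=(1,), dtype=np.float32),
            "last_spin": spaces.Discrete(37),
        })

    def reset(self):
        return {"bankroll": np.array([100.0], dtype=np.float32), "last_spin": 0}

    def step(self, action):
        obs = {"bankroll": np.array([100.0], dtype=np.float32), "last_spin": int(action)}
        return obs, 0.0, True, {}

model = PPO("MultiInputPolicy", RouletteEnv(), verbose=0)  # "MlpPolicy" would reject the Dict space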
3
votes
3 answers
AssertionError: Something went wrong with pygame. This should never happen. (when importing gym)
I tried to import gym as follows:
import gym
env = gym.make("Taxi-v3")
env.reset()
env.render()
Then the interpreter says that pygame is missing. So I installed pygame, reran the code, and got the…

user42493
- 813
- 4
- 14
- 34
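A hedged sketch, not a guaranteed fix: the toy-text extras pin a pygame build that gym was tested against, and from gym 0.26 onward the render mode is chosen at construction time rather than inside render():
# pip install "gym[toy_text]"   # pulls in a compatible pygame build
import gym

env = gym.make("Taxi-v3", render_mode="human")  # render_mode kwarg exists from gym 0.26 onward
env.reset()
env.render()
env.close()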
3
votes
0 answers
Sprite not being properly drawn to where rect is
I created a simple bullet-hell style game in pygame, with the goal of creating a deep reinforcement learning agent to learn the game. I got the game to work in pygame alone using keyboard controls, and I am now working…

jkcarney
- 31
- 2
3
votes
1 answer
Error while defining observation space in gym custom environment
I am working on a reinforcement learning algorithm; I am very new to this and trying to get the hang of things.
Player1Env looks upon a 7x6 Connect4 playing grid. I am initializing the class as follows:
def __init__(self):
super(Player1Env,…

Helusio
- 31
- 1
- 4
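A hedged sketch of how the observation space for a 7x6 Connect 4 board is often declared: a Box of integer cell states (0 empty, 1 and 2 for the two players). The (rows, columns) shape convention and dtype are assumptions:
import numpy as np
import gym
from gym import spaces

class Player1Env(gym.Env):
    def __init__(self):
        super(Player1Env, self).__init__()
        self.action_space = spaces.Discrete(7)  # one action per column to drop a piece into
        self.observation_space = spaces.Box(low=0, high=2, shape=(6, 7), dtype=np.int8)
        self.board = np.zeros((6, 7), dtype=np.int8)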
3
votes
0 answers
REINFORCE for CartPole: Training Unstable
I am implementing REINFORCE for CartPole-v0. However, the training process is very unstable. I have not implemented 'early stopping' for the environment and allow training to continue for a fixed (high) number of episodes. After a few thousand…

204
- 433
- 1
- 5
- 19
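One common stabilizer, sketched under the assumption of an episodic list of rewards: compute the discounted return-to-go for each step and normalize it before forming the policy-gradient loss, which acts as a crude baseline:
import numpy as np

def discounted_returns(rewards, gamma=0.99, eps=1e-8):
    """Return-to-go G_t for each step, normalized to zero mean and unit variance."""
    returns = np.zeros(len(rewards), dtype=np.float32)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return (returns - returns.mean()) / (returns.std() + eps)

# loss = -sum(log_prob_t * G_t) over the episode, using the normalized G_t above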
3
votes
1 answer
How to install mujoco-py on Windows?
I tried running the following code to test the HalfCheetah-v2 environment:
import gym
env = gym.make('HalfCheetah-v2')
But this gives me the following error:
ModuleNotFoundError: No module named 'mujoco_py'
During handling of the above exception,…

mac179
- 1,540
- 1
- 14
- 24
3
votes
0 answers
Learning rate scheduler in DQN within stable_baselines3
I'm experimenting with reinforcement learning using gym and stable-baselines3, particularly the DQN implementation of stable-baselines3 for MountainCar (https://gym.openai.com/envs/MountainCar-v0/).
I'm trying to implement a learning rate…
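For reference, stable-baselines3 accepts learning_rate either as a float or as a callable of the remaining training progress (1.0 at the start, 0.0 at the end), which gives a linear schedule without touching the optimizer directly; a minimal sketch:
from stable_baselines3 import DQN

def linear_schedule(initial_lr):
    def schedule(progress_remaining):
        # progress_remaining goes from 1.0 down to 0.0 over training
        return initial_lr * progress_remaining
    return schedule

model = DQN(
    "MlpPolicy",
    "MountainCar-v0",                     # SB3 also accepts an env id string
    learning_rate=linear_schedule(1e-3),
    verbose=1,
)
model.learn(total_timesteps=100_000)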