OpenAI Gym is a platform for reinforcement learning research that aims to provide a general-intelligence benchmark with a wide variety of environments.
Questions tagged [openai-gym]
1033 questions
5
votes
1 answer
Why is multiprocessing in Stable Baselines 3 slower?
I took the multiprocessing example for Stable Baselines 3 and everything was fine.
https://colab.research.google.com/github/Stable-Baselines-Team/rl-colab-notebooks/blob/sb3/multiprocessing_rl.ipynb#scrollTo=pUWGZp3i9wyf
Multiprocessed training took…
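A common cause (a sketch of the general effect, not Stable Baselines 3 itself): when each environment step is very cheap, the cost of shipping work to worker processes and shipping results back can exceed the computation, so the parallel version loses. The stand-in function below is hypothetical; it uses only the standard library.

```python
import multiprocessing as mp
import time

def cheap_step(x):
    # Stands in for a very cheap environment step (e.g. CartPole).
    return x * 2

def run_serial(n):
    return [cheap_step(i) for i in range(n)]

def run_parallel(n, workers=2):
    # Each call is pickled, sent to a worker process, and the result sent
    # back; for trivial work this IPC overhead dominates the computation.
    with mp.Pool(workers) as pool:
        return pool.map(cheap_step, range(n))

if __name__ == "__main__":
    t0 = time.perf_counter()
    serial = run_serial(20_000)
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    parallel = run_parallel(20_000)
    t_parallel = time.perf_counter() - t0

    assert serial == parallel
    print(f"serial: {t_serial:.3f}s, parallel: {t_parallel:.3f}s")
```

The same trade-off is why Stable Baselines 3 offers both DummyVecEnv (in-process) and SubprocVecEnv (one process per env): the latter only pays off when stepping the environment is expensive.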

Danilov Vladimir
- 51
- 3
5
votes
1 answer
ImportError: cannot import name 'Monitor' from 'gym.wrappers'
I have just created a new environment with a gym installation. I am just getting started with Atari games but am getting an import error for the code below:
import gym
env = gym.make('FrozenLake-v1')
videosDir = './RL_videos'
env =…
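This import error typically appears on newer gym releases, where the Monitor wrapper was removed and RecordVideo replaced it. A version-tolerant import sketch (assuming gym is installed; the alias name is mine):

```python
import gym

try:
    # Present in older gym releases.
    from gym.wrappers import Monitor
except ImportError:
    # Newer releases removed Monitor; RecordVideo is its replacement, e.g.:
    #   env = RecordVideo(gym.make('FrozenLake-v1'), video_folder='./RL_videos')
    from gym.wrappers import RecordVideo as Monitor
```

Note that RecordVideo takes a `video_folder` argument rather than Monitor's positional directory, so the call site needs adjusting too.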

Malgo
- 1,871
- 1
- 17
- 30
5
votes
0 answers
Alternatives of Stable Baselines3
Can you suggest some alternatives to Stable Baselines3 that I can use to train my agent in reinforcement learning?
P.S. I'm using the gym MiniGrid environment, so please suggest libraries that work with it.

Kunal Rawat
- 51
- 2
5
votes
2 answers
How to copy gym environment?
Info: I am using OpenAI Gym to create RL environments but need multiple copies of an environment for something I am doing. I do not want to do anything like [gym.make(...) for i in range(2)] to make a new environment.
Question: Given one gym env…
user12128336
5
votes
1 answer
How to solve UserWarning: Using a target size (torch.Size([])) that is different to the input size (torch.Size([1]))?
I am trying to run code from a book I purchased about reinforcement learning in PyTorch.
The code should work according to the book, but for me the model doesn't converge and the reward remains negative. I also get the following user…
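A common trigger for that warning is passing a 0-d scalar target to a loss whose input is shape `[1]`; broadcasting still produces a number, but the shapes should be aligned explicitly. A minimal sketch of the mismatch and the fix (the tensor values here are illustrative):

```python
import torch
import torch.nn as nn

loss_fn = nn.MSELoss()
pred = torch.tensor([0.5])   # shape torch.Size([1])
target = torch.tensor(1.0)   # shape torch.Size([]) -- 0-d scalar, triggers the warning

# Align the shapes explicitly instead of relying on broadcasting:
loss = loss_fn(pred, target.unsqueeze(0))   # target is now torch.Size([1])
```

Equivalently, `pred.squeeze(0)` would match the prediction to the scalar target; either way the two arguments end up the same shape and the warning disappears.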

N.W.
- 353
- 1
- 5
- 16
5
votes
2 answers
How to change certain values in a torch tensor based on an index in another torch tensor?
This is an issue I'm running into while converting DQN to Double DQN for the CartPole problem. I'm getting close to figuring it out.
tensor([0.1205, 0.1207, 0.1197, 0.1195, 0.1204, 0.1205, 0.1208, 0.1199, 0.1206,
0.1199, 0.1204, 0.1205, 0.1199,…
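For overwriting entries of one tensor at positions given by an index tensor, advanced indexing assignment is usually the simplest route (`scatter_` is the in-place alternative). A small sketch with made-up values:

```python
import torch

q_values = torch.tensor([0.1205, 0.1207, 0.1197, 0.1195])
idx = torch.tensor([1, 3])            # positions to overwrite
targets = torch.tensor([0.5, 0.25])   # new values for those positions

q_values[idx] = targets               # advanced indexing assignment
# equivalently: q_values.scatter_(0, idx, targets)
```

In a Double DQN update the same pattern writes the bootstrapped targets into the Q-value tensor only at the indices of the actions actually taken.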

DmiSH
- 85
- 1
- 1
- 4
5
votes
2 answers
Getting error: module 'gym' has no attribute 'make'
I am trying to run a basic program available in the official OpenAI Gym documentation:
import gym
env = gym.make("CartPole-v1")
observation = env.reset()
for _ in range(1000):
    env.render()
    action = env.action_space.sample()  # your…
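A frequent cause of `module 'gym' has no attribute 'make'` is a local file named `gym.py` (or a folder `gym/`) in the project directory shadowing the installed package. Checking which file the module actually resolved to reveals this; the sketch below uses the stdlib `json` package as a stand-in, and the helper name is mine:

```python
import importlib

def module_path(name):
    """Return the file a module resolved to, to detect local shadowing."""
    mod = importlib.import_module(name)
    return getattr(mod, "__file__", "<built-in>")

# For the question above, check module_path("gym"): if it points into your
# own project (e.g. a local gym.py next to app.py), rename that file.
print(module_path("json"))   # stdlib example: prints the real json package path
```

If the printed path is inside the project rather than `site-packages`, renaming the shadowing file and deleting its `__pycache__` fixes the error.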

Vardan Agarwal
- 2,023
- 2
- 15
- 27
5
votes
1 answer
Simulation of suicide burn in openai-gym's LunarLander
I want to simulate a suicide burn to learn and understand rocket landing. OpenAI Gym already has a LunarLander environment which is used for training reinforcement learning agents. I am using this environment to simulate a suicide burn in Python. I have…

Eka
- 14,170
- 38
- 128
- 212
5
votes
1 answer
Missing package to enable rendering OpenAI Gym in Colab
I'm attempting to render OpenAI Gym environments in Colab via a Mac using the StarAI code referenced in previous questions on this topic. However, it fails. The key error (at least the first error) is shown in full below, but the import part seems…

gblauer
- 147
- 2
- 6
5
votes
4 answers
How to fix an environment error in OpenAI Gym?
code:
import gym
env = gym.make('Breakout-v0')
I get an error:
Traceback (most recent call last):
File "C:/Users/danie/Downloads/Programming/Python/Programming/Pycharm/app.py", line 40, in
gym.make("Breakout-v0")
File…

daniel
- 73
- 1
- 6
5
votes
1 answer
OpenAI Gym custom environment: Discrete observation space with real values
I would like to create a custom OpenAI Gym environment that has a discrete state space, but with float values. To be more precise, it should be a range of values with a 0.25 step:
10.0, 10.25, 10.5, 10.75, 11.0, ..., 19.75, 20.0
Is there a way to do this…
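One common approach (a sketch, not the only way): declare a `Discrete(n)` space of indices and map each index to its float value inside the environment. The constants and helper names below are mine, matching the 10.0–20.0 range with 0.25 steps from the question:

```python
LOW, HIGH, STEP = 10.0, 20.0, 0.25
N_STATES = int(round((HIGH - LOW) / STEP)) + 1   # 41 states: 10.0, 10.25, ..., 20.0

def index_to_value(i):
    """Map a Discrete index to its float state value."""
    return LOW + i * STEP

def value_to_index(v):
    """Map a float state value back to its Discrete index."""
    return int(round((v - LOW) / STEP))

# Inside the env this would be declared as:
#   self.observation_space = gym.spaces.Discrete(N_STATES)
```

Agents then see plain integer observations, while the environment's dynamics use `index_to_value` whenever the real-valued state is needed.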

sesli
- 53
- 1
- 3
5
votes
2 answers
Specify rendering window size of OpenAi Gym
Calling env.render() always renders a window filling the whole screen.
env = gym.make('FetchPickAndPlace-v1')
env.reset()
for i in range(1000):
    env.render()

Moritz Blum
- 93
- 7
5
votes
3 answers
Tensorflow: Different results with the same random seed
I'm running a reinforcement learning program in a gym environment (BipedalWalker-v2) implemented in TensorFlow. I've set the random seeds of the environment, TensorFlow, and NumPy manually as…

Maybe
- 2,129
- 5
- 25
- 45
5
votes
2 answers
Setting up OpenAI Gym on Windows 10
I'm trying to set up OpenAI's gym on Windows 10, so that I can do machine learning with Atari games.
On PyCharm I've successfully installed gym using Settings > Project Interpreter. But when I try to set up a breakout environment (or any other Atari…

Paul K
- 51
- 1
- 3
5
votes
4 answers
Failed building wheel for mujoco-py with OpenAI Gym
I followed the installation instructions for OpenAI Gym, but the full install gives the error "Failed to build wheel for mujoco-py"
pip install gym and import gym work fine on my laptop and import mujoco_py works too, but I'm still getting a "failed…

abhimanyu bahree
- 83
- 1
- 7