OpenAI Gym is a platform for reinforcement learning research that aims to provide a general-intelligence benchmark with a wide variety of environments.
Questions tagged [openai-gym]
1033 questions
3
votes
5 answers
Exception: ROM is missing for ms_pacman, see https://github.com/openai/atari-py#roms for instructions
I am totally new to OpenAI Gym. I have just installed gym and am trying to create an environment with
env = gym.make('MsPacman-v0'), but I get the following error:
---------------------------------------------------------------------------
Exception …

theansaricode
- 132
- 1
- 3
- 16
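The workflow described at the linked atari-py page is to download the ROM archive separately and import it into atari-py; a minimal sketch of that workflow (the ROM folder path is a placeholder):

# After downloading and extracting the ROM archive per the atari-py README,
# import the ROMs into atari-py (run once, from a shell):
#     python -m atari_py.import_roms /path/to/extracted/ROMS
#
# The environment should then build without the missing-ROM exception:
import gym

env = gym.make('MsPacman-v0')
obs = env.reset()
print(env.action_space)   # Discrete(9) for MsPacman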
3
votes
0 answers
ImportError: DLL load failed while importing _multiarray_umath: The specified module could not be found
I am creating a new environment using Anaconda on Windows with some packages:
conda create -n myenv
conda activate myenv
conda install python
conda install cvxopt
conda install gym
conda install networkx
conda install pandas
conda install…

kosa
- 262
- 4
- 17
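A minimal diagnostic sketch for this class of error, assuming the usual cause (a numpy whose compiled extension does not match the interpreter, often from mixing conda channels or conda and pip installs):

import sys
print(sys.executable)            # confirm you are in the 'myenv' interpreter

try:
    import numpy
    print(numpy.__version__, numpy.__file__)   # which numpy is loading?
except ImportError as e:
    # A commonly suggested remedy (assumption: a channel mismatch caused
    # the broken DLL) is to reinstall numpy from a single channel, e.g.
    #     conda install -c conda-forge numpy
    print("numpy failed to import:", e)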
3
votes
0 answers
Deployment of a DeepRL model trained on a custom OpenAI-GYM environment
I developed a custom OpenAI Gym environment and trained a CDQN model on it. Now I am trying to figure out how I can test it, not in my gym environment but in production (using real-world observations). Do you have any resources that can help…

BAKYAC
- 155
- 2
- 12
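One common pattern, sketched here with stub functions (read_sensors, apply_action, and the Q-network below are all hypothetical stand-ins): at deployment only the observation-to-action mapping is needed, so the gym environment can be replaced by real I/O as long as the preprocessing matches what the env did during training.

import numpy as np

N_ACTIONS = 4   # assumption: matches the custom env's Discrete action space

def read_sensors():
    return np.random.uniform(-1, 1, size=8)    # stand-in for real readings

def apply_action(action):
    print("executing action", action)          # stand-in for an actuator

def model_predict(obs_batch):
    return np.random.randn(len(obs_batch), N_ACTIONS)   # stub Q-network

def preprocess(raw):
    # must replicate exactly the preprocessing the custom env applied
    # during training (scaling, stacking, dtype)
    return np.asarray(raw, dtype=np.float32)

for _ in range(3):                    # production code would loop forever
    obs = preprocess(read_sensors())
    q = model_predict(obs[None, :])   # batch dimension of 1
    action = int(np.argmax(q[0]))     # act greedily at deploy time
    apply_action(action)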
3
votes
1 answer
Multiple-Actions in one step, Reinforcement learning
I am trying to write a custom OpenAI Gym environment in which the agent takes two actions in each step, one discrete and one continuous. I am using Ray RLlib with the SAC algorithm, as it supports both discrete and…

JoCode
- 31
- 2
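On the gym side, a hybrid action can be expressed as a Tuple space; a minimal sketch (this shows only the environment contract, not a claim about which RLlib algorithms accept Tuple actions):

import gym
from gym import spaces
import numpy as np

class HybridActionEnv(gym.Env):
    def __init__(self):
        self.action_space = spaces.Tuple((
            spaces.Discrete(3),                               # discrete choice
            spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32),
        ))
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(4,),
                                            dtype=np.float32)

    def reset(self):
        return np.zeros(4, dtype=np.float32)

    def step(self, action):
        discrete_part, continuous_part = action   # both arrive together
        reward = float(discrete_part) + float(continuous_part[0])
        return np.zeros(4, dtype=np.float32), reward, True, {}

env = HybridActionEnv()
obs = env.reset()
print(env.step(env.action_space.sample()))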
3
votes
1 answer
How to build a DQN that outputs 1 discrete and 1 continuous value as a pair?
I am building a DQN for an OpenAI Gym environment. My observation space is only 1 discrete value, but my actions are:
self.action_space = (Discrete(3), Box(-100, 100, (1,)))
ex: [1,56], [0,24], [2,-78]...
My current neural network is:
model =…

Vincent Roye
- 2,751
- 7
- 33
- 53
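One way to structure such a network is a shared trunk with two output heads; a Keras sketch under that assumption (note that a plain DQN loss only covers the discrete head, so the continuous head would need its own training signal, e.g. an actor-critic style loss):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inputs = layers.Input(shape=(1,))          # single-value observation
x = layers.Dense(64, activation="relu")(inputs)
x = layers.Dense(64, activation="relu")(x)

q_discrete = layers.Dense(3, name="q_values")(x)              # Discrete(3)
continuous = layers.Dense(1, activation="tanh")(x)            # in [-1, 1]
continuous = layers.Lambda(lambda t: 100.0 * t,
                           name="box_out")(continuous)        # in [-100, 100]

model = keras.Model(inputs, [q_discrete, continuous])
model.summary()

obs = np.array([[0.0]], dtype=np.float32)
q, c = model.predict(obs, verbose=0)
print(int(np.argmax(q[0])), float(c[0, 0]))   # a pair like [1, 56]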
3
votes
1 answer
ValueError: not enough values to unpack (expected 2, got 1) custom environment
I have an environment with a custom architecture like this:
class environment(gym.Env):
    metadata = {'render.modes': ['human']}
    ACTION = ['buy', 'do not buy']

    def __init__(self, df):
        pass
    …

Sunshine
- 181
- 1
- 3
- 15
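This ValueError usually surfaces when reset() or step() returns a different number of values than the caller unpacks. A minimal sketch of the contract classic gym expects (the class body and spaces here are illustrative, since the question's code is truncated):

import gym
from gym import spaces
import numpy as np

class BuyEnv(gym.Env):
    metadata = {'render.modes': ['human']}

    def __init__(self, df):
        super().__init__()
        self.df = df
        self.action_space = spaces.Discrete(2)    # 'buy', 'do not buy'
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(1,),
                                            dtype=np.float32)

    def reset(self):
        return np.zeros(1, dtype=np.float32)      # classic gym: obs only

    def step(self, action):
        obs = np.zeros(1, dtype=np.float32)
        return obs, 0.0, True, {}                 # obs, reward, done, info

env = BuyEnv(df=None)
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())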
3
votes
1 answer
Understanding and Evaluating different methods in Reinforcement Learning
I have been trying to implement reinforcement learning algorithms in Python using different variants like Q-learning, Deep Q-Networks (DQN), Double DQN, and Dueling Double DQN. Consider a cart-pole example; to evaluate the performance of each of these…

mkpisk
- 152
- 1
- 9
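A common way to compare such variants is to record per-episode rewards for each agent and plot moving averages on shared axes; a sketch with synthetic curves standing in for real training runs:

import numpy as np
import matplotlib.pyplot as plt

def moving_average(x, window=50):
    # smooths noisy per-episode rewards for a readable comparison
    return np.convolve(x, np.ones(window) / window, mode='valid')

rng = np.random.default_rng(0)
runs = {name: np.cumsum(rng.uniform(0, 1, 500)) * scale
        for name, scale in [('Q-learning', 0.6), ('DQN', 0.8),
                            ('Double DQN', 0.9), ('Dueling Double DQN', 1.0)]}

for name, rewards in runs.items():
    plt.plot(moving_average(rewards), label=name)
plt.xlabel('episode')
plt.ylabel('reward (moving average)')
plt.legend()
plt.show()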
3
votes
2 answers
Record OpenAI gym Video with Monitor
I want to record a video of my rollouts in OpenAI's Gym. I use the Monitor class, but other solutions are also appreciated. This is a minimal example I created that runs without exceptions or warnings:
import gym
from gym.wrappers import Monitor
env…

don-joe
- 600
- 1
- 4
- 12
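For reference, a minimal Monitor recording sketch (assumes ffmpeg is available; a video_callable that returns True forces every episode to be recorded rather than the default schedule):

import gym
from gym.wrappers import Monitor

env = Monitor(gym.make('CartPole-v1'), './video',
              video_callable=lambda episode_id: True, force=True)

obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()   # finalizes the .mp4 files in ./video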
3
votes
1 answer
Simple DQN too slow to train
I have been trying to solve the OpenAI lunar lander game with a DQN taken from this paper:
https://arxiv.org/pdf/2006.04938v2.pdf
The issue is that it takes 12 hours to train 50 episodes, so something must be wrong.
import os
import random
import…

Marc
- 16,170
- 20
- 76
- 119
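A frequent cause of multi-hour DQN training in Keras is calling predict()/fit() once per transition instead of once per batch. A sketch of the batched replay update, where model and target_model are assumed to be compiled Keras networks and batch a sampled list of (state, action, reward, next_state, done) tuples:

import numpy as np

def train_on_batch(model, target_model, batch, gamma=0.99):
    states = np.array([t[0] for t in batch])
    actions = np.array([t[1] for t in batch])
    rewards = np.array([t[2] for t in batch])
    next_states = np.array([t[3] for t in batch])
    dones = np.array([t[4] for t in batch], dtype=np.float32)

    # two batched forward passes instead of 2 * len(batch) single ones
    q = model.predict(states, verbose=0)
    q_next = target_model.predict(next_states, verbose=0)

    # standard DQN target; (1 - done) zeroes the bootstrap at terminals
    targets = rewards + gamma * (1.0 - dones) * q_next.max(axis=1)
    q[np.arange(len(batch)), actions] = targets
    model.fit(states, q, epochs=1, verbose=0)   # one batched gradient step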
3
votes
0 answers
Unable to find render using OpenGL
I am using a Mac and am trying to render an environment from OpenAI's Gym:
import gym
env= gym.make('CartPole-v1')
img = env.render()
ImportError: Can't find framework /System/Library/Frameworks/OpenGL.framework.
During handling of the above…

CarterB
- 502
- 1
- 3
- 13
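A commonly suggested remedy, offered here as an assumption to verify: older pyglet releases look up /System/Library/Frameworks/OpenGL.framework at a path that newer macOS versions removed, so upgrading pyglet often resolves this.

# Frequently suggested fix (assumption to verify against your setup):
#     pip install --upgrade pyglet
# After upgrading, the usual render loop should open a window:
import gym

env = gym.make('CartPole-v1')
env.reset()
for _ in range(100):
    env.render()
    _, _, done, _ = env.step(env.action_space.sample())
    if done:
        env.reset()
env.close()

Note also that env.render() with the default mode='human' returns None; to get the image array the question's img = suggests, call env.render(mode='rgb_array').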
3
votes
0 answers
OpenAI Gym Observation Space with Discrete and Box values
I'm trying to create a custom environment for OpenAI Gym.
My observation space will have some values such as the following:
readings: 10x -1 to 1 continuous
count: 0 to 1000 discrete
on/off: 0 or 1 discrete
From the docs it seems I can create…

blissweb
- 3,037
- 3
- 22
- 33
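These mixed readings map naturally onto a Dict space; a sketch with illustrative key names:

import numpy as np
from gym import spaces

observation_space = spaces.Dict({
    'readings': spaces.Box(low=-1.0, high=1.0, shape=(10,),
                           dtype=np.float32),   # 10x continuous in [-1, 1]
    'count':    spaces.Discrete(1001),          # 0..1000 inclusive
    'on_off':   spaces.Discrete(2),             # 0 or 1
})

sample = observation_space.sample()
print(sample)
print(observation_space.contains(sample))   # True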
3
votes
0 answers
assert observation is not None AssertionError when creating observation space for custom environment
EDIT: Fixed it eventually; the solution is at the bottom of the question.
I want to create a custom environment to play a game. The agent plays using a screen grab of the game as input, and a DQN outputs either 'jump' or 'don't jump'. I have tried a few ways of creating…

Otto Hodne-Tandberg
- 31
- 3
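Since the question's own solution is truncated here, a generic sketch of a screen-grab observation space (the dimensions are placeholders); a dtype or shape mismatch between the Box and the arrays reset()/step() actually return is a frequent cause of this AssertionError:

import numpy as np
from gym import spaces

H, W = 84, 84
observation_space = spaces.Box(low=0, high=255, shape=(H, W, 3),
                               dtype=np.uint8)   # raw RGB screen grab

frame = np.zeros((H, W, 3), dtype=np.uint8)     # what reset() should return
assert observation_space.contains(frame)        # passes only if dtype/shape match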
3
votes
2 answers
OpenAI Gym: Walk through all possible actions in an action space
I want to build a brute-force approach that tests all actions in a Gym action space before selecting the best one. Is there any simple, straightforward way to get all possible actions?
Specifically, my action space is
import gym
action_space =…

stefanbschneider
- 5,460
- 8
- 50
- 88
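For a Discrete space the full action set is simply range(action_space.n); a sketch, including a hypothetical one-step brute-force sweep that assumes the environment can be deep-copied (classic-control envs generally can, as long as no render window is open):

import copy
import gym

env = gym.make('CartPole-v1')
print(list(range(env.action_space.n)))   # [0, 1]

# score each action by its immediate reward from a copy of the state
env.reset()
scores = {}
for a in range(env.action_space.n):
    sim = copy.deepcopy(env)       # independent copy, real env untouched
    _, reward, _, _ = sim.step(a)
    scores[a] = reward
best = max(scores, key=scores.get)
print(best, scores)

Box spaces are continuous, so exhaustive enumeration only makes sense after discretizing them.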
3
votes
3 answers
How to check out actions available in OpenAI gym environment?
It seems like the list of actions for OpenAI Gym environments is not available to check, even in the documentation. For example, let's say you want to play Atari Breakout. The available actions will be right, left, up, and…

Ji Hwan Park
- 71
- 2
- 6
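For Atari environments the underlying ALE does expose the action names via get_action_meanings():

import gym

env = gym.make('Breakout-v0')
print(env.action_space)                       # Discrete(4)
print(env.unwrapped.get_action_meanings())    # ['NOOP', 'FIRE', 'RIGHT', 'LEFT']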
3
votes
1 answer
How to restore previous state to gym environment
I'm trying to implement MCTS on OpenAI's Atari gym environments, which requires the ability to plan: acting in the environment and restoring it to a previous state. I read that this can be done with the RAM version of the games:
recording the…

toxin9
- 81
- 7
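For the ALE-backed Atari environments, planning-style save/restore is available through clone_full_state()/restore_full_state() on the unwrapped env (the full variant also captures the emulator's RNG, making rollouts exactly reproducible); a sketch:

import gym

env = gym.make('MsPacman-ram-v0')
env.reset()

snapshot = env.unwrapped.clone_full_state()   # save the emulator state

obs1, r1, done1, _ = env.step(1)              # explore one branch

env.unwrapped.restore_full_state(snapshot)    # rewind to the snapshot
obs2, r2, done2, _ = env.step(1)              # same action, same outcome

assert (obs1 == obs2).all() and r1 == r2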