Questions tagged [openai-gym]

OpenAI Gym is a platform for reinforcement learning research that aims to provide a general-intelligence benchmark with a wide variety of environments.

1033 questions
5
votes
1 answer

python openAI retro module

I am trying to use the retro module in Jupyter notebooks. I seemed to install it with !pip install retro, and the download/install went through fine. But when I try to import retro I get an error: ` Traceback (most recent call last): File…
bbartling
  • 3,288
  • 9
  • 43
  • 88
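A minimal sketch of how the Retro library is typically installed and imported, assuming the intent was OpenAI's Gym Retro rather than the unrelated "retro" PyPI package (the game name below is the free ROM bundled with the library):

    # Gym Retro is published on PyPI as "gym-retro" but imported as "retro";
    # installing the unrelated "retro" package is a common cause of this error.
    #   pip install gym-retro
    import retro

    env = retro.make(game="Airstriker-Genesis")  # non-commercial ROM shipped with gym-retro
    obs = env.reset()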
5
votes
1 answer

OpenAI Gym runs in real time instead of as fast as possible

OK, so there must be some option in OpenAI Gym that allows it to run as fast as possible? I have a Linux environment that does exactly this (runs as fast as possible), but when I run the exact same setup on Windows, it only runs in real time. The…
stoplime
  • 81
  • 1
  • 5
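A minimal sketch of a stepping loop that avoids per-frame rendering, assuming the classic (pre-0.26) Gym API and that the real-time pacing comes from drawing every frame rather than from the simulation itself:

    import gym

    env = gym.make("CartPole-v0")
    obs = env.reset()
    done = False
    while not done:
        # Skipping env.render() lets the environment step as fast as the CPU
        # allows; rendering each frame can tie the loop to wall-clock speed.
        obs, reward, done, info = env.step(env.action_space.sample())
    env.close()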
5
votes
1 answer

OpenAI gym's breakout-v0 "pauses"

While training in the OpenAI gym environment I get the impression that the environment sometimes "stops": for many frames in a row no ball is visible or spawns. Is this an error in the gym environment? Is this something that is part of the game…
rmeertens
  • 4,383
  • 3
  • 17
  • 42
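A small illustration of one possible explanation, assuming the apparent pause is Breakout waiting for the FIRE action before serving a new ball:

    import gym

    env = gym.make("Breakout-v0")
    obs = env.reset()
    # In the default Atari action set, action 1 is FIRE; until it is taken the
    # ball is not served, which can look like the environment has "stopped".
    obs, reward, done, info = env.step(1)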
5
votes
2 answers

OpenAI gym: when is reset required?

Although I can manage to get the examples and my own code to run, I am more curious about the real semantics/expectations behind the OpenAI Gym API, in particular Env.reset(). When is reset expected/required? At the end of each episode? Or only after…
Juan Leni
  • 6,982
  • 5
  • 55
  • 87
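A minimal sketch of the conventional usage, assuming the classic Gym API: reset() is called once before each episode begins, never mid-episode:

    import gym

    env = gym.make("CartPole-v0")
    for episode in range(10):
        # One reset per episode; stepping past done=True without resetting
        # is outside the API contract.
        obs = env.reset()
        done = False
        while not done:
            obs, reward, done, info = env.step(env.action_space.sample())
    env.close()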
4
votes
0 answers

Erroneous and inconsistent output from env.render() in openai gym Taxi-v3 in Google Colab

I am trying to set up the OpenAI gym environment for the Taxi-v3 application in Google Colab, using the following code: from IPython.display import clear_output import gym env = gym.make("Taxi-v3", render_mode = 'ansi').env #env =…
Calcutta
  • 1,021
  • 3
  • 16
  • 36
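A minimal sketch assuming a recent Gym release (0.26+), where render_mode="ansi" makes render() return a string that has to be printed explicitly:

    import gym

    env = gym.make("Taxi-v3", render_mode="ansi")
    obs, info = env.reset()
    # In "ansi" mode render() returns the board as text; it does not print it.
    print(env.render())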
4
votes
2 answers

How to use continuous values in the action space of a gym environment?

I am trying to make a custom gym environment with five actions, all of which can take continuous values. To implement this, I have used the following action_space format: self.action_space =…
Ravish Jha
  • 481
  • 3
  • 25
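A minimal sketch of a continuous action space with five components, using gym.spaces.Box (the bounds here are illustrative placeholders):

    import numpy as np
    from gym import spaces

    # Five continuous actions, each bounded between -1 and 1.
    action_space = spaces.Box(low=-1.0, high=1.0, shape=(5,), dtype=np.float32)
    print(action_space.sample())  # array of 5 floats in [-1, 1]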
4
votes
1 answer

Stable-Baselines3: logging reward with custom gym

I have this custom callback to log the reward in my custom vectorized environment, but the reward always appears in the console as [0] and is not logged to TensorBoard at all: class TensorboardCallback(BaseCallback): """ Custom callback for…
Mario
  • 13,941
  • 20
  • 54
  • 110
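A sketch of a reward-logging callback for Stable-Baselines3, assuming a vectorized environment; the metric name "custom/step_reward" is just an illustrative choice:

    from stable_baselines3.common.callbacks import BaseCallback

    class TensorboardCallback(BaseCallback):
        """Log the per-step reward of the first sub-environment to TensorBoard."""

        def _on_step(self) -> bool:
            # self.locals["rewards"] holds one reward per sub-environment
            # during rollout collection.
            rewards = self.locals.get("rewards")
            if rewards is not None:
                self.logger.record("custom/step_reward", float(rewards[0]))
            return True

The callback is then passed to model.learn(..., callback=TensorboardCallback()) on a model created with tensorboard_log set; values recorded via self.logger only reach TensorBoard when that log directory is configured.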
4
votes
2 answers

ModuleNotFoundError: No module named 'stable_baselines3'

I'm trying to learn reinforcement learning, coding in a Jupyter notebook. But when I try to import stable baselines I get an error, even though I've installed and upgraded it several times. I'm attaching the screenshots as well.…
Anandakrishnan
  • 347
  • 1
  • 4
  • 18
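One common cause in Jupyter is installing into a different Python interpreter than the one running the kernel; a sketch of a workaround, assuming it is run in a notebook cell:

    # Install into the exact interpreter the notebook kernel is using,
    # then restart the kernel before importing.
    import sys
    !{sys.executable} -m pip install stable-baselines3

    import stable_baselines3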
4
votes
0 answers

Make OpenAI gym Monitor only store a video, but not render in real time

I am training an agent for the CartPole environment of OpenAI gym, storing a video, and then trying to display it (in a Jupyter notebook). I start with: env = gym.make('CartPole-v0') env = wrappers.Monitor(env, "./gym-results",…
user56834
  • 244
  • 4
  • 19
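A sketch of the Monitor wrapper recording every episode to disk, assuming an older Gym release where gym.wrappers.Monitor is still available; recording does not require calling env.render() in the training loop, so nothing has to be drawn in real time:

    import gym
    from gym import wrappers

    env = gym.make("CartPole-v0")
    # force=True overwrites an existing results directory;
    # video_callable decides which episodes get recorded.
    env = wrappers.Monitor(env, "./gym-results", force=True,
                           video_callable=lambda episode_id: True)

    obs = env.reset()
    done = False
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())
    env.close()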
4
votes
1 answer

Headless servers: OpenAI Gym rendering error while using Ray

While using Ray for distributed computation, all the servers are headless (no display). Therefore, I am using xvfb-run -s "-screen 0 1400x900x24" to create a screen. I am getting the error pyglet.canvas.xlib.NoSuchDisplayException: Cannot connect to "None". Without…
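One commonly suggested workaround, assuming the pyvirtualdisplay package and Xvfb are available on the workers, is to start a virtual display inside each Ray worker before the environment tries to open a window; a minimal sketch:

    # Start an in-process virtual X display so pyglet has something to
    # connect to on a headless server.
    from pyvirtualdisplay import Display

    display = Display(visible=0, size=(1400, 900))
    display.start()

    import gym
    env = gym.make("CartPole-v0")
    env.render()  # should no longer raise NoSuchDisplayException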
4
votes
0 answers

How to get the dimension of OpenAI Gym spaces.Tuple to use in a DQN when building a neural network with Keras

I built a custom environment with OpenAI Gym spaces.Tuple because my observation is made up of: hour (0-23), day (1-7), and month (1-12), which are discrete; four continuous numbers, which come from a CSV file; and an array of shape (4*24), which is also…
Yuchen
  • 81
  • 5
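A sketch of one way to get a single input dimension out of a Tuple space, assuming an observation layout like the one described above and using Gym's flattening utilities:

    import numpy as np
    from gym import spaces
    from gym.spaces.utils import flatdim, flatten

    # Illustrative Tuple space: hour, day, month (discrete), four continuous
    # readings, and a 4x24 array.
    obs_space = spaces.Tuple((
        spaces.Discrete(24),
        spaces.Discrete(7),
        spaces.Discrete(12),
        spaces.Box(low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32),
        spaces.Box(low=-np.inf, high=np.inf, shape=(4, 24), dtype=np.float32),
    ))

    # flatdim gives the length of the flattened observation vector, which can
    # serve as the input size of a Keras network; flatten converts one sample.
    input_dim = flatdim(obs_space)
    sample_vec = flatten(obs_space, obs_space.sample())
    print(input_dim, sample_vec.shape)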
4
votes
4 answers

RL problem on Colab: 'gym.envs.box2d' has no attribute 'LunarLander'

What can I do in Colab to work with the env "LunarLander-v2" from OpenAI Gym? I have installed BOX2D and box2d-py, but it always returns the same error: AttributeError: module 'gym.envs.box2d' has no attribute 'LunarLander'. This passage in my local…
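A sketch of the install steps commonly suggested for this on Colab (assuming a fresh runtime; building the compiled Box2D dependency sometimes needs swig, and the runtime may have to be restarted afterwards):

    # Run in a Colab cell, then restart the runtime if prompted.
    !pip install swig
    !pip install gym[box2d] box2d-py

    # After the install (and restart, if needed):
    import gym
    env = gym.make("LunarLander-v2")
    obs = env.reset()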
4
votes
1 answer

How to create a live matplotlib.pyplot plot in google colab?

Unfortunately it is not possible to create live plots in a Google Colab notebook using %matplotlib notebook like it is in an offline Jupyter notebook on my PC. I found two similar questions answering how to achieve this for Plotly plots (link_1,…
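A sketch of one workaround that needs no extra libraries: redraw the figure on each iteration and clear the previous output (names and data here are placeholders):

    import time
    import matplotlib.pyplot as plt
    from IPython.display import clear_output

    rewards = []
    for step in range(50):
        rewards.append(step ** 0.5)      # placeholder data
        clear_output(wait=True)          # drop the previously drawn frame
        plt.plot(rewards)
        plt.xlabel("step")
        plt.ylabel("reward")
        plt.show()
        time.sleep(0.1)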
4
votes
1 answer

How to show episode in rendered openAI gym environment

If we look at the previews of the environments, they show the episode number increasing in the bottom-right corner of the animation: https://gym.openai.com/envs/CartPole-v1/. Is there a command to explicitly show that?
Steven
  • 41
  • 1
  • 3
4
votes
1 answer

How to solve the 'AttributeError' in gym when Colab is using a GPU

Here is my code: !pip install box2d-py==2.3.8 import gym env = gym.make('CarRacing-v0') The error message is: AttributeError: module 'gym.envs.box2d' has no attribute 'CarRacing' (screenshot). But the same code is fine when Colab is using the CPU…
tobby liu
  • 41
  • 3