Questions tagged [keras-rl]

keras-rl is a Reinforcement Learning library based on Keras

The code can be found at github.com/matthiasplappert/keras-rl.

81 questions
1
vote
1 answer

Tensorflow, OpenAI Gym, Keras-rl performance issue on basic reinforcement learning example

I'm doing reinforcement learning and I'm having trouble with performance. The situation (no custom code): I loaded a Google Deep Learning VM (https://console.cloud.google.com/marketplace/details/click-to-deploy-images/deeplearning) on Google Cloud.…
1
vote
1 answer

Keras Reinforcement Learning: How to pass reward to the model

import numpy as np import gym from gym import wrappers # added from keras.models import Sequential from keras.layers import Dense, Activation, Flatten from keras.optimizers import Adam from rl.agents.dqn import DQNAgent from rl.policy import…
leppy
  • 49
  • 4
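For context, the imports in this question come from keras-rl's standard DQN setup. A minimal sketch of that setup follows (assuming gym, standalone Keras, and keras-rl, with CartPole as a stand-in environment); the point is that the reward is never passed to the Keras model directly: the environment's step() returns it, and DQNAgent.fit() consumes it when building the Q-learning targets.

```python
# Minimal sketch (not the asker's full code): in keras-rl the reward is never
# fed to the Keras model; env.step() returns it and DQNAgent.fit() uses it
# internally to build the training targets.
import gym
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory

env = gym.make('CartPole-v0')
nb_actions = env.action_space.n

model = Sequential([
    Flatten(input_shape=(1,) + env.observation_space.shape),  # window_length=1
    Dense(16, activation='relu'),
    Dense(nb_actions, activation='linear'),
])

memory = SequentialMemory(limit=50000, window_length=1)
policy = BoltzmannQPolicy()
dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory,
               nb_steps_warmup=10, target_model_update=1e-2, policy=policy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])

# fit() runs env.step() internally; the reward it returns is what the agent learns from
dqn.fit(env, nb_steps=10000, visualize=False, verbose=1)
```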
1
vote
1 answer

InvalidArgumentError while using Keras backend function

I am using a Keras backend function to compute the gradient in a reinforcement learning setup; the code snippet follows. For this code I am getting the error shown below as well. What could be the reason for it? X =…
thetna
  • 6,903
  • 26
  • 79
  • 113
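A minimal sketch of the backend-function gradient pattern the question describes (TF1-style graph mode with standalone Keras; under TF2, eager execution must be disabled for K.gradients to work). The names here are illustrative rather than the asker's; a common cause of InvalidArgumentError is a placeholder used by the loss that is missing from the compiled function's input list.

```python
# Illustrative sketch of the K.function gradient pattern (graph mode).
# Every placeholder used by `loss` must appear in the inputs list, otherwise
# TensorFlow raises InvalidArgumentError when the function is called.
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(2, activation='softmax', input_shape=(4,))])

action_ph = K.placeholder(shape=(None, 2))      # one-hot actions
advantage_ph = K.placeholder(shape=(None,))     # per-step advantages

log_prob = K.log(K.sum(model.output * action_ph, axis=1) + 1e-8)
loss = -K.mean(log_prob * advantage_ph)

grads = K.gradients(loss, model.trainable_weights)

compute_grads = K.function([model.input, action_ph, advantage_ph], grads)

g = compute_grads([np.random.rand(8, 4),
                   np.eye(2)[np.random.randint(0, 2, 8)],
                   np.random.rand(8)])
```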
1
vote
0 answers

ImportError: cannot import name pywrap_dlopen_global_flags

I get the following error when trying to use TensorFlow (newest version as of the date of posting) on a MacBook Pro (CPU only) dual-booting Ubuntu 16.04 LTS, in a virtualenv created with --no-site-packages, with Keras, keras-rl, and Python 2.7. ... Using…
1
vote
2 answers

What does the EpisodeParameterMemory of keras-rl do?

I have found the keras-rl/examples/cem_cartpole.py example and I would like to understand it, but I can't find documentation. What does the line memory = EpisodeParameterMemory(limit=1000, window_length=1) do? What is the limit and what is the…
Martin Thoma
  • 124,992
  • 159
  • 614
  • 958
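The cem_cartpole.py example the question refers to pairs EpisodeParameterMemory with the cross-entropy-method agent. Roughly, it stores per-episode parameter samples and total rewards rather than per-step transitions: limit bounds how many recent entries are kept, and window_length plays the same observation-stacking role as in SequentialMemory (1 means only the current observation is used). The sketch below closely follows that example; the hyperparameter values are the example's, not prescriptive.

```python
# Sketch following keras-rl's examples/cem_cartpole.py.
import gym
from keras.models import Sequential
from keras.layers import Dense, Flatten
from rl.agents.cem import CEMAgent
from rl.memory import EpisodeParameterMemory

env = gym.make('CartPole-v0')
nb_actions = env.action_space.n

model = Sequential([
    Flatten(input_shape=(1,) + env.observation_space.shape),
    Dense(nb_actions, activation='softmax'),
])

# limit: how many recent entries the buffer keeps;
# window_length: how many past observations are stacked into one state.
memory = EpisodeParameterMemory(limit=1000, window_length=1)

cem = CEMAgent(model=model, nb_actions=nb_actions, memory=memory,
               batch_size=50, nb_steps_warmup=2000, train_interval=50,
               elite_frac=0.05)
cem.compile()
cem.fit(env, nb_steps=100000, visualize=False, verbose=1)
```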
0
votes
0 answers

Tensorboard not working with Keras-rl, why?

When I try to run TensorBoard with keras-rl (DQNAgent): tb_callback = TensorBoard('/home/jose/TED/MLU_minimization/logs', update_freq=1) dqn.fit(env, nb_steps=5000000, visualize=False, verbose=1, nb_max_episode_steps=None, log_interval=10000,…
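keras-rl's Agent.fit() does accept a callbacks list, but the stock Keras TensorBoard callback often fails there because keras-rl drives the callbacks itself rather than through model.fit(). One alternative, sketched below with dqn and env as defined in the question and an illustrative log path, is keras-rl's own FileLogger.

```python
# Sketch: dqn and env are as in the question. FileLogger is driven correctly by
# keras-rl's own callback loop and writes training metrics to a JSON file.
from rl.callbacks import FileLogger

log_cb = FileLogger('dqn_log.json', interval=100)   # illustrative path and interval

dqn.fit(env, nb_steps=5000000, visualize=False, verbose=1,
        nb_max_episode_steps=None, log_interval=10000,
        callbacks=[log_cb])
```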
0
votes
1 answer

Keras model is compiled with training_v1.py instead of training.py

This is the minimal example to reproduce the problem: from keras.models import Sequential from keras.layers import Dense, Flatten, LeakyReLU from keras.regularizers import l1 from rl.agents.dqn import DQNAgent reg = l1(1e-5) relu_alpha =…
Luca
  • 169
  • 8
0
votes
0 answers

DQN with LSTM layers in Keras-rl2, understanding input_shape

I'm working on a DQN model that trains on a CustomEnv from OpenAI Gymnasium. My observation space has just one dimension, with shape (8,), and that's going to be the input of my neural network. I first used a model with fully connected Dense layers like so: def…
Aldair CB
  • 135
  • 1
  • 1
  • 6
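With keras-rl/keras-rl2 the agent feeds the network batches of shape (batch, window_length) + observation_shape, so for an (8,) observation an LSTM needs window_length > 1 in the memory and a matching input_shape. A sketch, assuming keras-rl2 with tf.keras (layer sizes and window length are illustrative):

```python
# Sketch: each sample the agent passes in has shape (window_length, obs_dim),
# so the LSTM's input_shape must mirror the memory's window_length.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from rl.memory import SequentialMemory

WINDOW_LENGTH = 4          # number of stacked past observations
OBS_DIM = 8                # the env's observation shape is (8,)

def build_model(nb_actions):
    return Sequential([
        LSTM(64, input_shape=(WINDOW_LENGTH, OBS_DIM)),
        Dense(64, activation='relu'),
        Dense(nb_actions, activation='linear'),
    ])

memory = SequentialMemory(limit=50000, window_length=WINDOW_LENGTH)
```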
0
votes
1 answer

How to use masking in keras-rl with DQNAgent?

I'm working on a project where I want to train an agent to find optimal routes in a road network (graph). I built the custom Env with OpenAI Gym, and I'm building the model and training the agent with Keras and Keras-rl respectively. The problem is…
Aldair CB
  • 135
  • 1
  • 1
  • 6
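keras-rl's DQNAgent has no built-in action masking, so one common workaround is a custom Policy that pushes the Q-values of invalid actions to minus infinity before selecting. The sketch below assumes the custom Env exposes a valid_action_mask() helper (a hypothetical name; it would return a boolean array over actions, e.g. the currently reachable edges of the road network):

```python
# Workaround sketch: mask invalid actions inside a custom keras-rl Policy.
# env.valid_action_mask() is a hypothetical helper the custom Env would provide.
import numpy as np
from rl.policy import Policy

class MaskedGreedyQPolicy(Policy):
    def __init__(self, env):
        super().__init__()
        self.env = env

    def select_action(self, q_values):
        mask = np.asarray(self.env.valid_action_mask(), dtype=bool)  # hypothetical
        masked_q = np.where(mask, q_values, -np.inf)  # invalid actions can never win
        return int(np.argmax(masked_q))

# usage sketch: DQNAgent(..., policy=MaskedGreedyQPolicy(env))
```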
0
votes
0 answers

I have a problem with the keras-rl2 DQNAgent model: it adds another dimension to my states for some reason and I get a ValueError

For the last day I have been trying to deal with an error I get in the DQNAgent fit function. I get the following error: ValueError: Error when checking input: expected dense_input to have 2 dimensions, but got array with shape (1, 3, 4) in dqnagent.fit…
kfir
  • 1
  • 2
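The extra dimension is keras-rl's window_length axis: the agent prepends it to every observation, so a model whose first layer expects the raw observation shape sees one dimension more than it was built for. The usual fix, sketched below with CartPole standing in for the custom environment, is to declare the input as (window_length,) + observation_shape and flatten it:

```python
# Sketch of the usual fix: keras-rl feeds (window_length,) + observation_shape
# per sample, so the first layer must expect that extra leading axis.
import gym
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from rl.memory import SequentialMemory

WINDOW_LENGTH = 1
env = gym.make('CartPole-v1')   # stands in for the asker's custom environment

model = Sequential([
    Flatten(input_shape=(WINDOW_LENGTH,) + env.observation_space.shape),
    Dense(24, activation='relu'),
    Dense(env.action_space.n, activation='linear'),
])

memory = SequentialMemory(limit=50000, window_length=WINDOW_LENGTH)
```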
0
votes
0 answers

Using keras RL to build an agent to play Space Invaders, running into "AttributeError: 'int' object has no attribute 'shape'"

Just as the title says, I keep running into an error when following a tutorial to make a reinforcement learning agent using keras RL. The code is below: import gym import random import numpy as np from tensorflow.keras.models import…
0
votes
1 answer

AttributeError: 'Sequential' object has no attribute '_compile_time_distribution_strategy'

I'm trying to train an agent using TensorFlow and Keras-rl2 to play a gym environment called CartPole-v1, and I'm using Google Colaboratory. This is my implementation: !pip install gym[classic_control] !pip install keras-rl2 import…
0
votes
1 answer

keras-rl2: DQN agent training issue on Taxi-v3

I am trying to use the keras-rl2 DQNAgent to solve the taxi problem in OpenAI Gym. For a quick refresher, please find it in the Gym documentation, thank you! https://www.gymlibrary.dev/environments/toy_text/taxi/ Here is my process: 0. Open the Taxi-v3…
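Taxi-v3 observations are single integers (Discrete(500)), which plain Dense layers handle poorly when fed as raw floats. A common approach with DQN, sketched below under keras-rl2/tf.keras with illustrative layer sizes, is to embed the state index; with window_length=1 the agent passes inputs of shape (batch, 1), which matches Embedding's input_length=1.

```python
# Sketch: embed the discrete Taxi-v3 state index instead of feeding it as a float.
import gym
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Reshape, Dense

env = gym.make('Taxi-v3')
nb_actions = env.action_space.n          # 6
nb_states = env.observation_space.n      # 500

model = Sequential([
    Embedding(nb_states, 10, input_length=1),  # (batch, 1) -> (batch, 1, 10)
    Reshape((10,)),
    Dense(50, activation='relu'),
    Dense(nb_actions, activation='linear'),
])
```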
0
votes
1 answer

What is the best way to model an environment to force an agent to select `x out of n` choices?

I have an RL problem where I want the agent to make a selection of x out of an array of size n. I.e. if I have [0, 1, 2, 3, 4, 5] then n = 6, and if x = 3 a valid action could be [2, 3, 5]. Right now what I have tried is to have n scores: output n continuous…
Olli
  • 906
  • 10
  • 25
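Since keras-rl's DQNAgent only handles Discrete action spaces, one way to model "pick x of n" is to enumerate all C(n, x) subsets once and expose each as a separate discrete action. This is only practical while C(n, x) stays small (C(6, 3) = 20). The sketch below is a toy, self-contained example of that encoding; the environment name, observation, and reward are illustrative.

```python
# Toy sketch: each Discrete action decodes to one subset of x out of n items.
from itertools import combinations
import numpy as np
import gym
from gym import spaces

class PickXOfNEnv(gym.Env):
    def __init__(self, n=6, x=3):
        super().__init__()
        self.subsets = list(combinations(range(n), x))        # all C(n, x) selections
        self.action_space = spaces.Discrete(len(self.subsets))
        self.observation_space = spaces.Box(low=0.0, high=1.0,
                                            shape=(n,), dtype=np.float32)

    def decode(self, action):
        # map the integer the agent picked back to a concrete selection, e.g. [2, 3, 5]
        return list(self.subsets[action])

    def reset(self):
        self.state = self.observation_space.sample()
        return self.state

    def step(self, action):
        chosen = self.decode(action)
        reward = float(sum(self.state[i] for i in chosen))    # toy reward: sum of chosen scores
        return self.state, reward, True, {}

if __name__ == "__main__":
    env = PickXOfNEnv()
    env.reset()
    print(env.decode(env.action_space.sample()))               # e.g. [1, 4, 5]
```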
0
votes
0 answers

KerasRl ValueError: Error when checking input: expected input_3 to have 3 dimensions, but got array with shape (1, 1, 9, 9)

I made an env with Gym for a Sudoku puzzle and I want to train an AI on it using KerasRL (I've removed the step, reset, and render methods of the environment so as not to have too much code for Stack Overflow). I use a Flatten and 3 Dense layers for my model…