Questions tagged [keras-rl]

keras-rl is a Reinforcement Learning library based on Keras

The code can be found at github.com/matthiasplappert/keras-rl.

81 questions
2
votes
1 answer

TypeError: 'module' object is not callable Tensorboard in Keras

I am implementing an RL agent with a policy gradient method. I define a dense network for the actor and another dense network for the critic. For example, my critic network is: state_input = Input(shape=(self.num_states,)) x = Dense(self.hidden_size,…
Mitra
  • 157
  • 2
  • 12
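A minimal sketch of the setup described above, assuming the TypeError comes from using the tensorboard module where the TensorBoard callback class is needed; num_states and hidden_size are stand-ins for the question's attributes:

    from keras.layers import Input, Dense
    from keras.models import Model
    from keras.callbacks import TensorBoard  # the class, not the `tensorboard` module

    num_states, hidden_size = 4, 64          # illustrative sizes

    state_input = Input(shape=(num_states,))
    x = Dense(hidden_size, activation='relu')(state_input)
    value = Dense(1, activation='linear')(x)
    critic = Model(inputs=state_input, outputs=value)
    critic.compile(optimizer='adam', loss='mse')

    tb = TensorBoard(log_dir='./logs')       # later passed to fit(..., callbacks=[tb])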
2
votes
1 answer

Questions About Deep Q-Learning

I have read several materials about deep Q-learning and I'm not sure I understand it completely. From what I learned, it seems that deep Q-learning computes the Q-values faster by using a NN to perform a regression rather than storing them in a table,…
mad
  • 2,677
  • 8
  • 35
  • 78
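A minimal sketch of the idea in the excerpt: instead of storing Q-values in a table, a small network regresses Q(s, a) for every action at once, and the greedy action is the argmax over that output. The state and action sizes here are illustrative:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    n_states, n_actions = 4, 2

    q_net = Sequential([
        Dense(32, activation='relu', input_shape=(n_states,)),
        Dense(32, activation='relu'),
        Dense(n_actions, activation='linear'),   # one Q-value per action
    ])
    q_net.compile(optimizer='adam', loss='mse')

    state = np.random.rand(1, n_states)
    q_values = q_net.predict(state)              # shape (1, n_actions)
    greedy_action = int(np.argmax(q_values))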
2
votes
1 answer

Processor class at keras-rl changes shapes

Well, I'm trying to feed a list of 10 integers as input to a keras-rl model, but since I am using a new OpenAI Gym environment I need to set up my processor class accordingly. My processor class looks like this: class RecoProcessor(Processor): …
Angelo
  • 575
  • 3
  • 18
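A minimal Processor sketch, assuming the observation arrives as a list of 10 integers as in the question; the reshaping itself is illustrative, not the asker's actual code:

    import numpy as np
    from rl.core import Processor

    class RecoProcessor(Processor):
        def process_observation(self, observation):
            # flatten the incoming list of 10 integers into a 1-D float array
            return np.asarray(observation, dtype=np.float32).reshape(10,)

        def process_state_batch(self, batch):
            # keras-rl stacks window_length observations; with window_length=1 this
            # collapses (batch_size, 1, 10) to (batch_size, 10)
            return np.asarray(batch, dtype=np.float32).reshape(len(batch), -1)

        def process_reward(self, reward):
            return np.clip(reward, -1.0, 1.0)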
2
votes
0 answers

keras_rl DQN agent - all policies' select_action() func returns a value of 0 or 1

I am trying to set up a reinforcement learning project using Gym & keras_rl. Description: Given numbers in a range (100, 200), I want the agent to alert me when a number is close to the limits, let's say between 0%-10% and 90%-100% of the…
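A minimal environment sketch for the setup described above, assuming a two-action design (0 = do nothing, 1 = alert); with nb_actions=2, keras-rl policies returning only 0 or 1 from select_action() is the expected range. The class name and reward shaping are hypothetical:

    import gym
    import numpy as np
    from gym import spaces

    class LimitAlertEnv(gym.Env):
        def __init__(self, low=100, high=200):
            self.low, self.high = low, high
            self.action_space = spaces.Discrete(2)   # 0: ignore, 1: alert
            self.observation_space = spaces.Box(low, high, shape=(1,), dtype=np.float32)
            self.value = low

        def reset(self):
            self.value = np.random.uniform(self.low, self.high)
            return np.array([self.value], dtype=np.float32)

        def step(self, action):
            frac = (self.value - self.low) / (self.high - self.low)
            near_limit = frac <= 0.1 or frac >= 0.9
            reward = 1.0 if bool(action) == near_limit else -1.0
            self.value = np.random.uniform(self.low, self.high)
            return np.array([self.value], dtype=np.float32), reward, False, {}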
2
votes
1 answer

ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5

I am trying to pass an RGB image from a simulator into my custom neural network. At the source of the RGB generation (simulator), the dimension of the RGB image is (3,144,256). This is how I construct the neural network: rgb_model = Sequential() rgb =…
user10193823
  • 41
  • 1
  • 4
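A minimal sketch of one common cause: keras-rl feeds the network batches of shape (batch, window_length, 3, 144, 256), i.e. ndim=5, while Conv2D expects ndim=4. With window_length=1, a Reshape layer can absorb the extra axis. The (3, 144, 256) shape comes from the question; everything else is illustrative:

    from keras.models import Sequential
    from keras.layers import Reshape, Conv2D, Flatten, Dense

    window_length = 1
    obs_shape = (3, 144, 256)    # channels-first RGB frame from the simulator

    rgb_model = Sequential()
    rgb_model.add(Reshape(obs_shape, input_shape=(window_length,) + obs_shape))
    rgb_model.add(Conv2D(32, (3, 3), activation='relu', data_format='channels_first'))
    rgb_model.add(Flatten())
    rgb_model.add(Dense(4, activation='linear'))   # nb_actions is illustrative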
2
votes
1 answer

Keras - weights initialized as nans

I am trying to create a neural network for policy-based RL. I have written the class to build the network and generate actions as below: class Oracle(object): def __init__(self, input_dim, output_dim, hidden_dims=None): if hidden_dims is None: …
shunyo
  • 1,277
  • 15
  • 32
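A minimal sketch in the spirit of the Oracle class from the question, with an explicit initializer; freshly built Dense layers should never contain NaNs, so checking the weights right after construction helps locate where they first appear. All sizes are illustrative:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    def build_policy(input_dim, output_dim, hidden_dims=None):
        hidden_dims = hidden_dims or [32, 32]
        model = Sequential()
        model.add(Dense(hidden_dims[0], activation='relu',
                        kernel_initializer='glorot_uniform', input_shape=(input_dim,)))
        for units in hidden_dims[1:]:
            model.add(Dense(units, activation='relu', kernel_initializer='glorot_uniform'))
        model.add(Dense(output_dim, activation='softmax'))
        return model

    policy = build_policy(4, 2)
    assert all(np.isfinite(w).all() for w in policy.get_weights())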
2
votes
1 answer

Inverting Gradients in Keras

I'm trying to port the BoundingLayer function from this file to the DDPG.py agent in keras-rl but I'm having some trouble with the implementation. I modified the get_gradients(loss, params) method in DDPG.py to add this: action_bounds = [-30,…
siang
  • 23
  • 3
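A minimal sketch of the inverting-gradients rule (Hausknecht & Stone) that the question is porting: when a gradient would push an action further past its bound, it is rescaled by the remaining distance to that bound. The [-30, 30] bounds come from the question; the function below uses the gradient-ascent convention and leaves the wiring into DDPG's get_gradients() out:

    import tensorflow as tf

    def invert_gradients(dq_da, actions, p_min=-30.0, p_max=30.0):
        width = p_max - p_min
        upper = (p_max - actions) / width   # applied when the gradient pushes the action up
        lower = (actions - p_min) / width   # applied when the gradient pushes the action down
        return tf.where(dq_da > 0, dq_da * upper, dq_da * lower)

keras-rl's get_gradients() works on a loss that is minimized, so the sign test flips there.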
2
votes
3 answers

Importing keras-rl package into conda environment

I've installed the keras-rl package on my computer, using their instructions: git clone https://github.com/matthiasplappert/keras-rl.git cd keras-rl python setup.py install So my conda environment sees this package; however, when I am trying to import…
Massyanya
  • 2,844
  • 8
  • 28
  • 37
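A quick diagnostic sketch: a frequent cause of "installed but cannot import" is that python setup.py install ran under a different interpreter than the conda environment. Running this inside the activated environment shows which interpreter and which copy of the package are actually being used (keras-rl is imported as rl):

    import sys
    print(sys.executable)     # should point inside the conda environment

    import rl
    from rl.agents.dqn import DQNAgent
    print(rl.__file__)        # where the import is actually resolved from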
1
vote
0 answers

Errors when trying to save and load custom Tensorflow model

I want to save a custom model. I based my solution on https://www.tensorflow.org/guide/saved_model?hl=fr#specifying_signatures_during_export. I have a model class that inherits from tf.Module and I put the @tf.function decorator on the…
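A minimal sketch following the TensorFlow guide linked in the question: a tf.Module whose call is wrapped in @tf.function with an explicit input_signature, exported and reloaded with tf.saved_model. The shapes, the variable, and the export path are illustrative:

    import tensorflow as tf

    class MyModule(tf.Module):
        def __init__(self):
            super().__init__()
            self.w = tf.Variable(tf.random.normal([4, 2]), name='w')

        @tf.function(input_signature=[tf.TensorSpec(shape=[None, 4], dtype=tf.float32)])
        def __call__(self, x):
            return tf.matmul(x, self.w)

    module = MyModule()
    tf.saved_model.save(module, '/tmp/my_module',
                        signatures={'serving_default': module.__call__})
    restored = tf.saved_model.load('/tmp/my_module')
    out = restored(tf.zeros([1, 4], dtype=tf.float32))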
1
vote
1 answer

How to fix "cannot import name '__version__' from 'tensorflow.keras'"?

Trying to import DQNAgent like this from rl.agents.dqn import DQNAgent I get the following error: cannot import name '__version__' from 'tensorflow.keras' The installed versions are: Tensorflow: 2.13.0, Keras: 2.13.1, Keras-rl2: 1.0.5 I am using…
scopchanov
  • 7,966
  • 10
  • 40
  • 68
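A hedged workaround sketch for the version mismatch described above: keras-rl2 1.0.5 reads tensorflow.keras.__version__, which TensorFlow 2.13 no longer exposes. Restoring that attribute before importing rl is one commonly reported workaround; pinning tensorflow/keras to 2.12.x is the alternative:

    import tensorflow as tf
    tf.keras.__version__ = tf.__version__   # re-expose the attribute keras-rl2 expects

    from rl.agents.dqn import DQNAgent      # should now import without the error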
1
vote
1 answer

Understanding action & observation spaces in gym for custom environments and agents

I am currently trying to learn about reinforcement learning (RL). I am quite new to the field, and I apologize for the wall of text. I have encountered many examples of RL using TensorFlow, Keras, Keras-rl, stable-baselines3, PyTorch, gym, etc.…
AliG
  • 73
  • 6
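A minimal custom-environment sketch illustrating the two attributes the question asks about: action_space describes what the agent may do, observation_space describes what it gets to see. The environment name, bounds, and reward are all hypothetical:

    import gym
    import numpy as np
    from gym import spaces

    class TempControlEnv(gym.Env):
        def __init__(self):
            self.action_space = spaces.Discrete(3)    # 0: lower, 1: hold, 2: raise
            self.observation_space = spaces.Box(low=0.0, high=100.0,
                                                shape=(1,), dtype=np.float32)
            self.state = np.array([50.0], dtype=np.float32)

        def reset(self):
            self.state = np.array([50.0], dtype=np.float32)
            return self.state

        def step(self, action):
            self.state = self.state + (action - 1)    # map {0,1,2} to -1/0/+1
            reward = -abs(float(self.state[0]) - 50.0)
            return self.state, reward, False, {}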
1
vote
1 answer

Why are the mean_q and mae for keras-rl2 DQN agent logged as NaN

Copied the code over from https://github.com/keras-rl/keras-rl/blob/master/examples/dqn_atari.py, but only the rewards and number of steps are logged and the error metrics are all NaN. memory = SequentialMemory(limit=1000000,…
Dukey
  • 11
  • 2
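A minimal sketch of the relevant wiring from the dqn_atari.py example, assuming the NaNs simply mean no training step has happened yet: mean_q and mae only get real values once training starts, i.e. after nb_steps_warmup, so during warmup they are logged as NaN. The tiny dense model and the reduced warmup below are illustrative only:

    from keras.models import Sequential
    from keras.layers import Flatten, Dense
    from keras.optimizers import Adam
    from rl.agents.dqn import DQNAgent
    from rl.memory import SequentialMemory
    from rl.policy import EpsGreedyQPolicy

    nb_actions = 4
    model = Sequential([
        Flatten(input_shape=(4, 84, 84)),    # window_length=4 stacked frames
        Dense(64, activation='relu'),
        Dense(nb_actions, activation='linear'),
    ])

    memory = SequentialMemory(limit=1000000, window_length=4)
    dqn = DQNAgent(model=model, nb_actions=nb_actions, policy=EpsGreedyQPolicy(),
                   memory=memory, nb_steps_warmup=1000,   # the Atari example uses 50000
                   gamma=.99, train_interval=4, delta_clip=1.)
    dqn.compile(Adam(lr=.00025), metrics=['mae'])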
1
vote
2 answers

KerasRL : Value Error: Tensor must be from same graph as Tensor

I am trying to build an RL model to play the Atari Pinball game while following Nicholas Renotte's video. However, when I try to build the final KerasRL model I get the following error: ValueError: Tensor("dense/kernel/Read/ReadVariableOp:0",…
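A hedged sketch of one common remedy: this error usually means the Keras model and the agent's tensors ended up in different TensorFlow graphs, which is easy to do when a notebook cell is re-run. Clearing the session and rebuilding the model immediately before constructing the agent keeps everything in one graph; the dense model below is only a stand-in for the tutorial's convolutional network:

    from tensorflow.keras import backend as K
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Flatten, Dense

    K.clear_session()                # drop any stale graph from earlier cells

    actions = 9                      # illustrative action count
    model = Sequential([
        Flatten(input_shape=(3, 210, 160, 3)),   # window_length=3 stacked RGB frames
        Dense(64, activation='relu'),
        Dense(actions, activation='linear'),
    ])
    # build the DQNAgent from this freshly created model in the same cell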
1
vote
1 answer

Training DQN Agent with Multidiscrete action space in gym

I would like to train a DQN agent with Keras-rl. My environment has both multi-discrete action and observation spaces. I am adapting the code from this video: https://www.youtube.com/watch?v=bD6V3rcr_54&t=5s. I am sharing my code below: class…
mercury24
  • 53
  • 1
  • 9
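A minimal sketch of one common workaround, since keras-rl's DQNAgent expects a Discrete action space: a MultiDiscrete space such as [3, 4] can be flattened into Discrete(12) by a wrapper that unflattens each chosen action. The [3, 4] sizes are illustrative, not taken from the question:

    import gym
    import numpy as np
    from gym import spaces

    class FlattenMultiDiscrete(gym.ActionWrapper):
        def __init__(self, env):
            super().__init__(env)
            self.nvec = env.action_space.nvec                 # e.g. array([3, 4])
            self.action_space = spaces.Discrete(int(np.prod(self.nvec)))

        def action(self, act):
            # map the single integer back onto the multi-discrete vector
            return np.array(np.unravel_index(act, self.nvec))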
1
vote
2 answers

Why is DQNAgent.fit adding extra dimensions to my input data?

I'm using one of keras-rl's deep Q-learning agents: DQNAgent. When I pass my environment into DQNAgent.fit, I receive the following error: 3 dqn.fit(env, nb_steps=50000, visualize=False,…
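A minimal sketch of the usual cause: keras-rl batches observations as (batch, window_length) + observation_shape, so with the default window_length=1 the model must expect one extra leading axis. A Flatten layer with input_shape=(1,) + env.observation_space.shape absorbs it; CartPole is used here purely as an illustrative environment:

    import gym
    from keras.models import Sequential
    from keras.layers import Flatten, Dense

    env = gym.make('CartPole-v1')
    nb_actions = env.action_space.n

    model = Sequential([
        Flatten(input_shape=(1,) + env.observation_space.shape),
        Dense(24, activation='relu'),
        Dense(nb_actions, activation='linear'),
    ])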