Questions tagged [keras-rl]

keras-rl is a Reinforcement Learning library based on Keras

The code can be found at github.com/matthiasplappert/keras-rl.

81 questions
1 vote, 1 answer

FailedPreconditionError while using the DDPG RL algorithm in Python with Keras and keras-rl2

I am training a DDPG agent on a custom environment that I wrote using OpenAI Gym. I am getting an error while training the model. When I searched for a solution on the web, I found that some people who faced a similar issue were able to resolve it by…
1 vote, 0 answers

Model output "Tensor("activation_9/activation_9/Identity:0", shape=(?, 6), dtype=float32)" has invalid shape

I am getting this error when trying to build a DQN model: ValueError Traceback (most recent call last) in () 1 # TODO - Select the…
1 vote, 0 answers

keras-rl NotImplementedError from an overridden class method

I've been working on an RL agent to do the Taxi problem in OpenAI Gym. I picked the DQNAgent from keras-rl and am following along with the example here: https://tiewkh.github.io/blog/deepqlearning-openaitaxi/ import gym from gym import wrappers,…
Sledge • 1,245 • 1 • 23 • 47
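
For reference, a minimal sketch of the DQNAgent setup that the linked tutorial builds on, assuming the original keras-rl with standalone Keras; layer sizes, step counts, and the Taxi-v3 id are illustrative and may need adjusting to your Gym version:

```python
import gym
from keras.models import Sequential
from keras.layers import Dense, Embedding, Reshape
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import EpsGreedyQPolicy
from rl.memory import SequentialMemory

env = gym.make("Taxi-v3")          # Taxi-v2 on older Gym releases
nb_actions = env.action_space.n

# Taxi observations are single discrete integers, so an Embedding layer maps
# each state id to a dense vector before the Q-value head.
model = Sequential([
    Embedding(env.observation_space.n, 10, input_length=1),
    Reshape((10,)),
    Dense(50, activation="relu"),
    Dense(nb_actions, activation="linear"),
])

dqn = DQNAgent(model=model, nb_actions=nb_actions,
               memory=SequentialMemory(limit=50000, window_length=1),
               policy=EpsGreedyQPolicy(), nb_steps_warmup=500,
               target_model_update=1e-2)
dqn.compile(Adam(lr=1e-3), metrics=["mae"])
dqn.fit(env, nb_steps=100000, verbose=1)
```
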
1 vote, 1 answer

NumPy error in file "mtrand.pyx" while fitting a Keras model

I am using keras-rl2 1.0.4, tensorflow 2.4.1, numpy 1.19.5, and gym 0.18.0 for training a DQN model for a reinforcement learning project. My action space contains 60 discrete values: self.action_space = Discrete(60) and I am getting this…
Vincent Roye • 2,751 • 7 • 33 • 53
1 vote, 1 answer

How to control the learning rate with KerasR in R

To fit a classification model in R, I have been using library(KerasR). To control the learning rate, KerasR says to use compile(optimizer=Adam(lr = 0.001, beta_1 = 0.9, beta_2 = 0.999, epsilon = 1e-08, decay = 0, clipnorm = -1, clipvalue = -1), loss =…
iHermes • 314 • 3 • 12
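
For reference, a hedged Python Keras sketch of the same learning-rate control; the kerasR compile(optimizer=Adam(...)) call quoted in the excerpt wraps these same Adam parameters. The model architecture and loss here are illustrative only:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# Toy classifier purely to have something to compile.
model = Sequential([
    Dense(16, activation="relu", input_shape=(4,)),
    Dense(3, activation="softmax"),
])

# lr is the learning rate; beta_1/beta_2 and epsilon are the usual Adam
# hyperparameters; decay applies a per-update learning-rate decay.
model.compile(optimizer=Adam(lr=0.001, beta_1=0.9, beta_2=0.999,
                             epsilon=1e-08, decay=0.0),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```
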
1 vote, 0 answers

How to drive keras-rl training from an external application?

I'm trying to use keras-rl to train and use an AI for a game that's written in C++ with Python bindings. This is my first time using keras-rl, and I'm finding its expectations at odds with the way the game AI interface is implemented. As far as I…
Uri Granta • 1,814 • 14 • 25
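
A hedged sketch of the usual pattern for this situation: keras-rl drives the loop itself through agent.fit(env), so the external game is wrapped behind the gym.Env interface and reset()/step() call into the bindings. `game_bindings` and its methods are hypothetical stand-ins for the real C++ bindings:

```python
import numpy as np
import gym
from gym import spaces
import game_bindings  # hypothetical module exposing the C++ game's Python bindings

class ExternalGameEnv(gym.Env):
    """Adapter that lets keras-rl drive an externally implemented game."""

    def __init__(self):
        self.action_space = spaces.Discrete(4)                      # illustrative
        self.observation_space = spaces.Box(low=0.0, high=1.0,
                                            shape=(16,), dtype=np.float32)
        self.game = game_bindings.Game()                            # hypothetical

    def reset(self):
        self.game.new_episode()                                     # hypothetical
        return np.asarray(self.game.state(), dtype=np.float32)

    def step(self, action):
        reward, done = self.game.apply_action(int(action))          # hypothetical
        obs = np.asarray(self.game.state(), dtype=np.float32)
        return obs, reward, done, {}
```
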
1 vote, 0 answers

Unsuccessful attempt at adding a new function to keras-rl's core.py

Assume that I simply copy and paste the test function in core.py and change the new function's name to test2, so that I now have two identical functions, test and test2, in core.py. Then, in one of the DQN examples, say dqn_cartpole.py, I…
Soheil • 31 • 3
1 vote, 0 answers

Using an environment outside Gym (OpenAI)

I have a bunch of questions regarding the usage of this library ('keras-rl') outside the "Gym" environment. I understand that there are very few users of this library, so I may accept a better alternative library. I am…
neel g • 1,138 • 1 • 11 • 25
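
A minimal sketch of the interface keras-rl actually exercises during fit(): an object exposing reset() and step(action) that returns the Gym-style (observation, reward, done, info) tuple is generally enough, and registering the environment with Gym is not required. All names and values below are illustrative:

```python
import numpy as np

class MinimalEnv:
    """Bare-bones environment that duck-types the parts keras-rl relies on."""

    def __init__(self, episode_length=200):
        self.episode_length = episode_length
        self.t = 0

    def reset(self):
        self.t = 0
        return np.zeros(4, dtype=np.float32)        # initial observation

    def step(self, action):
        self.t += 1
        obs = np.random.rand(4).astype(np.float32)  # next observation
        reward = float(action == 1)                 # toy reward signal
        done = self.t >= self.episode_length
        return obs, reward, done, {}                # gym-style 4-tuple

    def render(self, mode="human"):                 # only used with visualize=True
        pass

    def close(self):
        pass
```
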
1 vote, 1 answer

Is it possible to train with TensorFlow 1 using float16?

Currently I train a Keras model on TensorFlow with the default setting, float32. After training, the network is quantized: the weights are cast to float16. This improves performance by ~3x while keeping the same accuracy. I was trying to train from the start using…
YoavEtzioni • 85 • 10
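
A hedged sketch of forcing float16 end to end in Keras on TensorFlow 1 via the backend's default dtype. Full-float16 training is often numerically unstable without loss scaling, which is usually why people keep float32 for training and only cast the weights afterwards; the tiny model below is illustrative only:

```python
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense

K.set_floatx("float16")   # all subsequently created weights/activations use float16
K.set_epsilon(1e-4)       # larger fuzz factor to reduce float16 underflow issues

model = Sequential([
    Dense(64, activation="relu", input_shape=(10,)),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```
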
1 vote, 0 answers

How to access/manipulate elements of tensor in keras model?

I want to test a new network structure which requires changing some of the elements of a tensor in a keras model. If I could find a way to convert/copy the tensor to a numpy array and then later transform it back into a tensor, then I should be able…
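
A hedged sketch of the usual alternative to a NumPy round trip: manipulate the elements symbolically with backend ops inside a Lambda layer. The specific operation here (zeroing the first element of every feature vector) is purely illustrative:

```python
import keras.backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model

def zero_first_element(x):
    # Build a mask of the same shape as x with 0 in the first column and 1
    # elsewhere, then apply it elementwise.
    mask = K.concatenate([K.zeros_like(x[:, :1]), K.ones_like(x[:, 1:])], axis=-1)
    return x * mask

inp = Input(shape=(8,))
h = Dense(8, activation="relu")(inp)
h = Lambda(zero_first_element)(h)   # element manipulation stays inside the graph
out = Dense(1)(h)
model = Model(inp, out)
```
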
1 vote, 0 answers

My DQN doesn't work well: reward doesn't change, loss continues to increase

I'm trying to train Gradius with gym-retro and the DQNAgent from keras-rl, but it doesn't work well: the reward doesn't increase and the loss continues to increase. I can't understand what is wrong. A part of the output is…
Itsme • 11 • 2
1 vote, 1 answer

keras-rl - DQN model update

I am reading through the DQN implementation in keras-rl's /rl/agents/dqn.py and see that in the compile() step essentially three Keras models are instantiated: self.model: provides Q-value predictions; self.trainable_model: same as self.model but has…
tenticon • 2,639 • 4 • 32 • 76
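
For reference, a minimal sketch (CartPole is only a stand-in) showing where those three models can be inspected after compile():

```python
import gym
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.memory import SequentialMemory
from rl.policy import BoltzmannQPolicy

env = gym.make("CartPole-v1")
nb_actions = env.action_space.n

model = Sequential([
    Flatten(input_shape=(1,) + env.observation_space.shape),
    Dense(16, activation="relu"),
    Dense(nb_actions, activation="linear"),
])

dqn = DQNAgent(model=model, nb_actions=nb_actions,
               memory=SequentialMemory(limit=10000, window_length=1),
               policy=BoltzmannQPolicy(), target_model_update=1e-2)
dqn.compile(Adam(lr=1e-3), metrics=["mae"])

# compile() leaves the agent holding three Keras models:
print(dqn.model)            # online Q-network used for action selection
print(dqn.target_model)     # soft-updated copy that supplies the Q-targets
print(dqn.trainable_model)  # wrapper actually trained with the masked TD loss
```
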
1 vote, 0 answers

Keras random model

Is there a way of getting an object of the Keras Model class that selects a class randomly? Truly randomly each time, not just by blocking training and evaluating with the network's initialization weights. I need to pass a Model to the library…
Angelo • 575 • 3 • 18
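
A hedged sketch of one way to get such a model: route the output through a Lambda layer that draws fresh random values on every forward pass, so predictions differ on each call instead of being fixed by the initial weights. Shapes and the class count are illustrative:

```python
import keras.backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model

n_classes = 4

def random_probs(x):
    # Ignore the incoming values: sample fresh uniform noise of the same shape
    # on every forward pass and normalise it into a probability distribution.
    return K.softmax(K.random_uniform(K.shape(x)))

inp = Input(shape=(8,))
logits = Dense(n_classes)(inp)      # only used to fix the output shape
out = Lambda(random_probs)(logits)  # random class probabilities on each call
model = Model(inp, out)
```
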
1 vote, 0 answers

Measuring episode rewards when using an epsilon-greedy policy with linear annealing on epsilon

Is there a standard practice or a tool in Keras that will give an estimate of the episode rewards that is decorrelated from epsilon during training? In training the following DQN network, I can measure the episode rewards over time during training,…
Chubbs • 171 • 4
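
A hedged sketch of one common workaround in keras-rl, assuming a dqn agent and env configured as in the earlier DQN sketches: interleave training with dqn.test() runs, which use the agent's test_policy (GreedyQPolicy by default) and therefore report episode rewards that are not confounded by the annealed epsilon:

```python
import numpy as np

for stage in range(10):
    # Train with the annealed epsilon-greedy behaviour policy.
    dqn.fit(env, nb_steps=10000, verbose=0)
    # Evaluate greedily; the returned history collects per-episode rewards.
    history = dqn.test(env, nb_episodes=5, visualize=False)
    print("stage", stage,
          "mean greedy episode reward:",
          np.mean(history.history["episode_reward"]))
```
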
1 vote, 0 answers

OpenAI Gym custom environment: action_space and observation_space how-to

I am trying to implement a custom OpenAI Gym environment. Both the action space and the observation space contain a combination of a list of values and discrete spaces. Did I model it correctly? For example: self.action_space = spaces.Tuple(( …
HNN • 39 • 7
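
A hedged sketch of declaring such mixed spaces with gym.spaces.Tuple, combining a Box for the list of continuous values with Discrete spaces; all bounds and sizes are illustrative. Note that most keras-rl agents expect a flat Discrete or Box action space, so a Processor or a flattening step is often needed on top of this:

```python
import numpy as np
import gym
from gym import spaces

class MixedSpaceEnv(gym.Env):
    def __init__(self):
        self.action_space = spaces.Tuple((
            spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32),  # list of values
            spaces.Discrete(4),                                            # discrete choice
        ))
        self.observation_space = spaces.Tuple((
            spaces.Box(low=0.0, high=1.0, shape=(5,), dtype=np.float32),
            spaces.Discrete(2),
        ))

    def reset(self):
        return self.observation_space.sample()

    def step(self, action):
        obs = self.observation_space.sample()   # placeholder dynamics
        return obs, 0.0, False, {}
```
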