Questions tagged [keras-rl]

keras-rl is a reinforcement learning library built on Keras.

The code can be found at github.com/matthiasplappert/keras-rl.

81 questions
0
votes
0 answers

Can I use tf.Session() in an environment where only Keras is used?

Thanks for reading my question. I was using Keras to develop my reinforcement learning agent based on keras-rl. But I want to upgrade my agent with some updates from the OpenAI Baselines code for better action exploration. But the code used…
verystrongjoe
  • 3,831
  • 9
  • 35
  • 66
0
votes
2 answers

Where can I find the implemented DQfDAgent?

I'm trying to use that object as this blog uses in its code, but when I do from rl.agents.dqn import DQfDAgent it returns an error: ImportError: cannot import name 'DQfDAgent'. I've done a dir(rl.agents.dqn) and there is no DQfDAgent object, so…
Angelo
  • 575
  • 3
  • 18
0
votes
0 answers

Reinforcement learning: why does the training accuracy drop after restarting the training?

I have developed a small reinforcement learning exercise. The problem is that the training accuracy drops enormously after restarting the training, which I don't really understand. The environment: I use keras-rl, a simple neural model,…
pittnerf
  • 739
  • 1
  • 6
  • 17
0
votes
1 answer

How do I call the LSTM function?

I am just getting started with an LSTM time series forecasting example. I'm getting the error below at the last step and am not sure what I am missing. Any help would be greatly appreciated! ERROR: NameError: name 'to_list' is not defined def…
RSingh
  • 51
  • 1
  • 6
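For context, LSTM forecasting examples like the one in this question usually start by windowing a 1-D series into the 3-D shape an LSTM layer expects. A minimal sketch of that preparation step (the function name and window size are my own, not from the question):

```python
import numpy as np

def make_windows(series, window):
    """Slice a 1-D series into overlapping (samples, window, 1) inputs
    and next-step targets - the input shape an LSTM layer expects."""
    series = np.asarray(series, dtype=float)
    # Each sample is `window` consecutive values; the target is the next value.
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y  # trailing axis = one feature per timestep

X, y = make_windows(range(10), window=3)
print(X.shape, y.shape)  # (7, 3, 1) (7,)
```

The resulting X can be fed directly to a model whose first layer is, e.g., LSTM(units, input_shape=(window, 1)).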
0
votes
0 answers

Scale actor network output to the action space bounds in Keras Rl

I am trying to implement DDPG with keras-rl and have the following actor network. actor = Sequential() actor.add(Flatten(input_shape=(1,) +…
CS101
  • 444
  • 1
  • 6
  • 21
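One common approach to this question's problem is to end the actor with a tanh activation (output in [-1, 1]) and then linearly rescale into the action bounds; in Keras the rescaling can live in a Lambda layer after the final Dense. A sketch of the rescaling itself, with hypothetical bounds (this is a generic technique, not keras-rl's own API):

```python
import numpy as np

# Hypothetical action-space bounds for the example
LOW, HIGH = -2.0, 2.0

def scale_action(raw):
    """Map a tanh output in [-1, 1] linearly onto [LOW, HIGH].

    In a Keras actor this expression would typically be the body of a
    Lambda layer placed after a Dense(..., activation='tanh') output.
    """
    raw = np.asarray(raw, dtype=float)
    return LOW + (raw + 1.0) * 0.5 * (HIGH - LOW)

print(scale_action([-1.0, 0.0, 1.0]))  # endpoints and midpoint of the range
```

tanh keeps the raw output bounded, so the affine map guarantees the final action always lies inside [LOW, HIGH].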
0
votes
1 answer

Keras-RL episodes returning same values after fitting model

So I have created a custom environment using OpenAI Gym. I'm closely following the keras-rl examples of the DQNAgent for the CartPole example which leads to the following implementation: nb_actions = env.action_space.n # Option 1 : Simple…
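A frequent cause of identical episode returns with a custom environment is a reset() that does not actually re-randomize the state, so every episode replays the same trajectory. A minimal Gym-style environment sketch illustrating a reset that varies the start state (the class and its dynamics are hypothetical, not from the question):

```python
import random

class ToyEnv:
    """Minimal Gym-style environment sketch (hypothetical)."""

    def __init__(self):
        self.n_actions = 2
        self.state = 0

    def reset(self):
        # Re-randomize the start state so episodes differ from one another
        self.state = random.randint(0, 9)
        return self.state

    def step(self, action):
        self.state = (self.state + action) % 10
        reward = 1.0 if self.state == 0 else 0.0
        done = self.state == 0
        return self.state, reward, done, {}
```

If reset() always returned the same state (and the policy is greedy), every episode would produce the same return, which matches the symptom described.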