I am training a DDPG agent on a custom environment that I wrote using OpenAI Gym, and I am getting an error while training the model.
When I searched for a solution on the web, I found that some people who faced a similar issue were able to resolve it by…
I am getting this error when trying to build a DQN model:
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
      1 # TODO - Select the…
I've been working on an RL agent to solve the Taxi problem in OpenAI Gym.
I picked the DQNAgent from keras-rl and I am following along with the example here:
https://tiewkh.github.io/blog/deepqlearning-openaitaxi/
import gym
from gym import wrappers,…
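For reference, a minimal sketch of the agent setup along the lines of that tutorial, assuming Taxi-v3 and keras-rl's DQNAgent; the layer sizes below are illustrative assumptions, not the tutorial's exact values:

import gym
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, Reshape
from tensorflow.keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import EpsGreedyQPolicy
from rl.memory import SequentialMemory

env = gym.make("Taxi-v3")

# Taxi observations are a single discrete state id, so an Embedding layer
# maps the id to a dense vector before the Q-value head.
model = Sequential([
    Embedding(env.observation_space.n, 10, input_length=1),
    Reshape((10,)),
    Dense(50, activation="relu"),
    Dense(env.action_space.n, activation="linear"),
])

dqn = DQNAgent(model=model, nb_actions=env.action_space.n,
               memory=SequentialMemory(limit=50000, window_length=1),
               nb_steps_warmup=500, policy=EpsGreedyQPolicy())
dqn.compile(Adam(lr=1e-3), metrics=["mae"])
dqn.fit(env, nb_steps=100000, verbose=1)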
I am using:
keras-rl2 : 1.0.4
tensorflow : 2.4.1
numpy : 1.19.5
gym : 0.18.0
This is for training a DQN model for a reinforcement learning project.
My action space contains 60 discrete values:
self.action_space = Discrete(60)
and I am getting this…
To fit a classification model in R, I have been using library(KerasR). To control the learning rate, KerasR says to use:
compile(optimizer=Adam(lr = 0.001, beta_1 = 0.9, beta_2 = 0.999, epsilon = 1e-08, decay = 0, clipnorm = -1, clipvalue = -1), loss =…
I'm trying to use keras-rl to train and use an AI for a game that's written in C++ with Python bindings. This is my first time using keras-rl, and I'm finding its expectations at odds with the way the game AI interface is implemented.
As far as I…
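The usual way to bridge that kind of gap is to wrap the game's Python bindings in a gym.Env subclass so keras-rl can drive it. A rough sketch follows; every game.* method here is a hypothetical stand-in for whatever the real bindings expose:

import gym
import numpy as np
from gym import spaces

class GameEnv(gym.Env):
    """Adapter exposing the C++ game through the gym.Env interface."""

    def __init__(self, game):
        self.game = game  # the C++ engine's Python binding (hypothetical API below)
        self.action_space = spaces.Discrete(game.num_actions())
        self.observation_space = spaces.Box(
            low=0.0, high=1.0, shape=(game.state_size(),), dtype=np.float32)

    def reset(self):
        self.game.restart()
        return np.asarray(self.game.state(), dtype=np.float32)

    def step(self, action):
        reward = self.game.apply(int(action))  # advance the game by one action
        obs = np.asarray(self.game.state(), dtype=np.float32)
        return obs, reward, self.game.is_over(), {}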
Assume that I simply copy and paste the test function in core.py and change the new function's name to test2. Now I have two identical functions, test and test2, in core.py.
Then, in one of the DQN examples, say dqn_cartpole.py, I…
I have a bunch of questions regarding the usage of this library ('keras-rl') outside the "Gym" environment. I understand that there are very few users of this library, so I would also accept a better alternative library.
I am…
I currently train a Keras model on TensorFlow with the default setting, float32.
After training, the network is quantized: the weights are cast to float16. This improves performance by ~3x while keeping the same accuracy.
I was trying to train from the start using…
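If the goal is to train in float16 from the start rather than quantize afterwards, one option (an assumption about the intent, using the mixed-precision API available since TF 2.4) looks like this:

import tensorflow as tf
from tensorflow.keras import mixed_precision

# Compute in float16 while keeping float32 master weights (TF 2.4+).
mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
    # Keep the final softmax in float32 for numerical stability.
    tf.keras.layers.Dense(10, activation="softmax", dtype="float32"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")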
I want to test a new network structure which requires changing some of the elements of a tensor in a keras model. If I could find a way to convert/copy the tensor to a numpy array and then later transform it back into a tensor, then I should be able…
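In eager TF 2.x that round trip is straightforward; a minimal sketch (inside a compiled graph you would instead use tensor ops such as tf.tensor_scatter_nd_update):

import tensorflow as tf

t = tf.constant([[1.0, 2.0], [3.0, 4.0]])

a = t.numpy()                  # copy the tensor out to a numpy array
a[0, 1] = 99.0                 # edit elements freely in numpy
t2 = tf.convert_to_tensor(a)   # turn the edited array back into a tensor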
I'm trying to train Gradius with gym-retro and the DQNAgent from keras-rl, but it doesn't work well: the reward doesn't increase and the loss keeps increasing. I can't understand what is wrong.
A part of output is…
I am reading through the DQN implementation in keras-rl/rl/agents/dqn.py and see that in the compile() step essentially three Keras models are instantiated:
self.model : provides Q-value predictions
self.trainable_model : same as self.model but has…
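For intuition, here is the online/target pairing in plain tf.keras that mirrors what compile() maintains between self.model and its target copy. This is a simplified sketch, not keras-rl's actual code, and the layer sizes are made up:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2, activation="linear"),
])

# The target network is a structural clone of the online network,
# and a "hard update" copies the online weights across periodically.
target_model = tf.keras.models.clone_model(model)
target_model.set_weights(model.get_weights())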
Is there a way of getting an object of the Keras Model class that selects a class randomly? Truly randomly each time, not just by blocking training and evaluating with the network's initialization weights.
I need to pass a Model to the library…
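One way to get such a model, sketched under the assumption that uniform random logits are acceptable (NUM_CLASSES is a placeholder for whatever the library expects):

import tensorflow as tf

NUM_CLASSES = 6  # placeholder: number of classes/actions the library expects

inp = tf.keras.Input(shape=(4,))
# The Lambda ignores its input and draws fresh uniform logits on every call,
# so predictions stay random instead of being fixed by the initial weights.
out = tf.keras.layers.Lambda(
    lambda x: tf.random.uniform((tf.shape(x)[0], NUM_CLASSES)))(inp)
random_model = tf.keras.Model(inp, out)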
Is there a standard practice or a tool in Keras that will give an estimate of the episode rewards that is decorrelated from epsilon during training?
In training the following DQN network, I can measure the episode rewards over time during training,…
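There is no built-in Keras tool for this as far as I know, but one common workaround is to alternate training with greedy evaluation episodes; a sketch assuming a keras-rl DQNAgent named dqn (dqn.test uses the greedy policy by default, so these rewards do not depend on epsilon):

for round_ in range(10):
    dqn.fit(env, nb_steps=10000, verbose=0)
    # Greedy rollouts: episode rewards here are decorrelated from exploration.
    history = dqn.test(env, nb_episodes=5, visualize=False)
    print(round_, history.history["episode_reward"])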
I am trying to implement a custom OpenAI Gym environment. Both the action space and the observation space contain a combination of a list of values and discrete spaces.
Did I model it correctly?
For example:
self.action_space = spaces.Tuple((
…
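For comparison, a small self-contained example of a mixed Tuple space; the specific sub-spaces here are illustrative, not the question's actual ones:

import numpy as np
from gym import spaces

action_space = spaces.Tuple((
    spaces.Discrete(3),                                            # a discrete choice
    spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32),  # a list of values
))
print(action_space.sample())                          # e.g. (1, array([...], dtype=float32))
print(action_space.contains(action_space.sample()))   # validity check: True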