Questions tagged [keras-rl]

keras-rl is a Reinforcement Learning library based on Keras

The code can be found at github.com/matthiasplappert/keras-rl.

81 questions
0 votes · 1 answer

Keras-RL2 and TensorFlow 1/2 Incompatibility

I am getting: tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: Using a symbolic `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function. Error while…
user10767584
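This error typically comes from pairing the wrong library with the installed TensorFlow major version: keras-rl was written against TF 1.x graph execution, while keras-rl2 is the TF 2.x port. A minimal sketch of that pairing rule (helper name is hypothetical, not part of either library):

```python
# Hypothetical helper: pick the compatible package for a TensorFlow major version.
# keras-rl targets TF 1.x graph mode; keras-rl2 targets TF 2.x eager execution.
def compatible_package(tf_major_version):
    return "keras-rl2" if tf_major_version >= 2 else "keras-rl"
```

So a project on TF 2.x should depend on keras-rl2 rather than trying to run keras-rl's graph-mode code under eager execution.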
0 votes · 0 answers

to_categorical function in KerasR not working

I am using the to_categorical function in KerasR and it throws this error message in RStudio: mnist_y <- to_categorical(mnist_y, 10) Collecting package metadata (current_repodata.json): ...working... failed CondaSSLError: OpenSSL appears…
gerluc
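The CondaSSLError above is a conda/OpenSSL environment problem rather than a bug in to_categorical itself. For reference, to_categorical one-hot-encodes integer class labels; a pure-Python sketch of the same transformation (not the R implementation):

```python
# Pure-Python analogue of to_categorical: integer labels -> one-hot rows.
def one_hot(labels, num_classes):
    return [[1 if i == label else 0 for i in range(num_classes)]
            for label in labels]
```

For example, one_hot([0, 2], 3) yields [[1, 0, 0], [0, 0, 1]].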
0 votes · 0 answers

Continuous action space with discrete steps

How can I make a custom environment with a continuous action space but with a specific step size? For example: self.action_space = Box(low=np.array([0,0,0]), high=np.array([+1,+1,+1]), dtype=np.float32) gives a continuous action space with 3 actions. So…
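One common approach (a sketch, assuming the agent emits continuous values): keep the Box space as declared and snap each incoming action onto the desired grid inside the environment's step(). The helper below is hypothetical, not a Gym API:

```python
def snap_to_step(value, low=0.0, high=1.0, step=0.1):
    # Round a continuous action onto a fixed grid, then clip into [low, high].
    snapped = low + round((value - low) / step) * step
    return min(max(snapped, low), high)
```

Applying snap_to_step to each component of the action inside step() gives the agent a continuous interface while the environment only ever sees the discrete grid points.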
0 votes · 0 answers

Cannot compile DQN agent: TypeError: ('Keyword argument not understood:', 'units')

I have this model: poss_in = layers.Input((1,)) poss_lr = layers.Dense(8, activation='relu')(poss_in) hist_in = layers.Input((100,)) hist_lr = layers.Reshape((100, 1))(hist_in) hist_lr = layers.LSTM(32)(hist_lr) hist_lr = layers.Dense(32,…
Ok-src
0 votes · 1 answer

Keras-rl ValueError: "Model has more than one output. DQN expects a model that has a single output"

Is there any way to get around this error? I have a model with a 15x15 input grid, which leads to two outputs. Each output has 15 possible values, which are x or y coordinates. I did this because it is significantly simpler than having 225 separate…
Mercury
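A common workaround (a sketch, not the asker's code) is to keep a single output head of 15×15 = 225 Q-values and map the chosen flat index back to an (x, y) cell:

```python
WIDTH = 15  # assumed grid width from the question

def encode(x, y, width=WIDTH):
    # (x, y) cell -> single flat action index for a one-output DQN head.
    return y * width + x

def decode(index, width=WIDTH):
    # flat action index -> (x, y) cell.
    y, x = divmod(index, width)
    return x, y
```

This keeps keras-rl's single-output contract while still letting the rest of the code reason in coordinates; the 225-way head is larger than two 15-way heads, but it is the form DQN's argmax expects.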
0 votes · 1 answer

Real-time keras-rl DQN predictions

Hello everyone, I followed this tutorial https://www.youtube.com/watch?v=hCeJeq8U0lo&list=PLgNJO2hghbmjlE6cuKMws2ejC54BTAaWV&index=2 to train a DQN agent and everything works: env = gym.make('CartPole-v0') states = env.observation_space.shape[0] actions =…
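After training, real-time prediction reduces to a greedy argmax over the model's Q-values for the current observation (in Keras terms, model.predict on a batched observation followed by argmax). A dependency-free sketch of that selection step:

```python
def greedy_action(q_values):
    # Pick the action index with the highest predicted Q-value.
    return max(range(len(q_values)), key=lambda i: q_values[i])
```

For live use, this would be called each control step on the Q-values the trained network predicts for the latest observation.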
0 votes · 0 answers

ValueError: Input 0 of layer Conv2D_1 is incompatible with the layer: expected ndim=4, found ndim=5. Full shape received: [None, 3, 210, 160, 3]

I want to build a CNN model that takes 3 successive images instead of one, so the input takes the shape (3, height, width, channels=3): from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Dropout, Dense, Flatten,Convolution2D from…
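Conv2D only accepts ndim=4 input (batch, height, width, channels), so a 3-frame stack must either go through Conv3D/TimeDistributed or be folded into the channel axis. A sketch of the shape arithmetic for the channel-folding option (helper name is hypothetical):

```python
def folded_shape(n_frames, height, width, channels):
    # Stack successive frames along the channel axis so Conv2D sees ndim=4 input.
    return (height, width, n_frames * channels)
```

For the question's (3, 210, 160, 3) window this gives an input_shape of (210, 160, 9), which Conv2D accepts directly.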
0 votes · 1 answer

How to use a trained RL model to make a prediction?

I would like to use my trained RL model for a discrete test prediction. This is how the model is built: model = Sequential() model.add(Dense(60, activation='relu', input_shape=states)) model.add(Dense(60, activation='relu',…
Vincent Roye
0 votes · 1 answer

Dueling DQN updates model architecture and causes issues

I create an initial network model with the following architecture. def create_model(env): dropout_prob = 0.8 # aggressive dropout regularization num_units = 256 # number of neurons in the hidden units model = Sequential() …
0 votes · 1 answer

How to install keras-rl in Anaconda

I am starting to work on a reinforcement learning model, but I am blocked at the moment as I have not been able to download one of the essential python packages yet: keras-rl. More specifically, I would like to import the following 3 utilities: from…
0 votes · 3 answers

Keras LSTM layers in Keras-rl

I am trying to implement a DQN agent using Keras-rl. The problem is that when I define my model I need to use an LSTM layer in the architecture: model = Sequential() model.add(Flatten(input_shape=(1, 8000))) model.add(Reshape(target_shape=(200,…
Anto
0 votes · 2 answers

Can't import keras-rl in Jupyter notebooks

I have been trying to import keras-rl into my Jupyter notebook but I get this error every time I try: ModuleNotFoundError: No module named 'rl' How do I stop getting this error?
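keras-rl installs under the top-level module name rl, so "No module named 'rl'" usually means the package was installed into a different Python environment than the one the notebook kernel uses. A quick diagnostic that runs inside the notebook itself (helper name is hypothetical):

```python
import importlib.util
import sys

def keras_rl_available():
    # True only if the 'rl' package is importable from this interpreter.
    return importlib.util.find_spec("rl") is not None

# sys.executable shows which interpreter the kernel runs; the install must
# target that same interpreter, e.g. `%pip install keras-rl` in a cell.
```

If keras_rl_available() is False but pip says the package is installed, the install went to a different interpreter than sys.executable.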
0 votes · 0 answers

Memory error when using keras-rl for reinforcement learning

I use keras-rl and ran its example dqn_cartpole.py successfully. Then I changed env_name to play the Pong game, i.e., env_name = "PongNoFrameskip-v4". Everything looks good; however, the program breaks suddenly with Memory…
LinTIna
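Atari observations are far larger than CartPole's 4-float state, so a replay buffer sized for CartPole can exhaust RAM on Pong; the usual fixes are lowering SequentialMemory's limit and storing compact (uint8) frames. A stdlib sketch of the bounded-buffer idea behind SequentialMemory(limit=...):

```python
from collections import deque

# A deque with maxlen evicts the oldest transition automatically,
# keeping memory use bounded -- the same idea as SequentialMemory(limit=...).
replay_buffer = deque(maxlen=50_000)  # limit chosen for illustration

def store(transition):
    replay_buffer.append(transition)
```

Once the buffer is full, every new transition silently drops the oldest one, so peak memory is fixed by the limit rather than by training length.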
0 votes · 1 answer

Deep Reinforcement Learning (keras-rl) Early stopping

According to these guys (https://nihit.github.io/resources/spaceinvaders.pdf) it is possible to perform early stopping with deep reinforcement learning. I have used it before with deep learning in Keras, but how do I do that in keras-rl? In the same…
mad
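keras-rl exposes a callback mechanism (rl.callbacks.Callback) analogous to Keras's, so patience-based early stopping can be built by tracking episode rewards in on_episode_end and aborting when they stop improving. A framework-free sketch of just the stopping rule (class name is hypothetical):

```python
class EarlyStopper:
    """Signal a stop when reward has not improved for `patience` episodes."""

    def __init__(self, patience=10):
        self.patience = patience
        self.best = float("-inf")
        self.stale = 0

    def should_stop(self, episode_reward):
        # Reset the counter on any new best; otherwise count stale episodes.
        if episode_reward > self.best:
            self.best = episode_reward
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.patience
```

Wired into a keras-rl callback, should_stop would be called once per episode and, when it returns True, the callback would request that training end.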
0 votes · 2 answers

Deep Reinforcement Learning Training Accuracy

I am using a deep reinforcement learning approach to predict time series behavior. I am quite new to this, so my question is more conceptual than programming-related. My colleague has given me the following chart, with training,…