
According to this paper (https://nihit.github.io/resources/spaceinvaders.pdf), it is possible to perform early stopping with deep reinforcement learning. I have used it before with deep learning in Keras, but how do I do it in keras-rl? In the same fit() function, or before passing the model to the agent?

Ashwin Geet D'Sa
mad

1 Answer


It looks like you can just use Keras's own callbacks; if you really want it built into the package, grab it from here and put it in here. Otherwise, I would try:

from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(patience=69)  # epochs without improvement before training stops

# from their example cem_cartpole.py
cem.fit(env, nb_steps=100000, visualize=False, callbacks=[early_stop], verbose=2)
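The stop-on-stagnation check that EarlyStopping performs can be sketched in plain Python, applied to per-episode rewards instead of a loss. This is an illustrative sketch of the logic, not keras-rl API; the function name and reward list are hypothetical:

```python
def should_stop(episode_rewards, patience):
    """Return True if the best episode reward has not improved
    in the last `patience` episodes (illustrative, not keras-rl API)."""
    if len(episode_rewards) <= patience:
        return False
    best_so_far = max(episode_rewards)
    # Index of the most recent episode that achieved the best reward.
    last_best = max(i for i, r in enumerate(episode_rewards) if r == best_so_far)
    # Stop if the best reward occurred more than `patience` episodes ago.
    return len(episode_rewards) - 1 - last_best >= patience
```

Note that rewards are maximized rather than minimized, which is why the real callback would need its monitored quantity and direction set accordingly.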

TheLoneDeranger
  • I get the message: `WARNING:tensorflow:Early stopping conditioned on metric "mae" which is not available. Available metrics are: episode_reward,nb_episode_steps,nb_steps`. How do I define the metric to be used? – Luca May 15 '22 at 16:10