I have an answer to a pretty similar problem here.
Basically, built-in Keras callbacks such as EarlyStopping can only monitor a single metric. However, you can define a custom callback (see the documentation for more info) that accesses the logs at the end of each epoch and performs some operations on them.
Let's say you want to monitor loss and val_loss; you can do something like this:
import tensorflow as tf
from tensorflow import keras

class CombineCallback(tf.keras.callbacks.Callback):
    def __init__(self, **kwargs):
        super(CombineCallback, self).__init__(**kwargs)

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # Write the combined value into the logs so later callbacks can read it
        logs['combine_metric'] = logs['val_loss'] + logs['loss']
Side note: in my opinion the most important thing is to monitor the validation loss. The training loss will of course keep dropping, so observing it on its own is not that meaningful. If you really want to monitor both, I suggest adding a multiplicative factor that gives more weight to the validation loss. In this case:
class CombineCallback(tf.keras.callbacks.Callback):
    def __init__(self, **kwargs):
        super(CombineCallback, self).__init__(**kwargs)

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        factor = 0.8
        # Weighted combination: 80% validation loss, 20% training loss
        logs['combine_metric'] = factor * logs['val_loss'] + (1 - factor) * logs['loss']
Then, if you only want to monitor this new metric during training, you can use it like this:
model.fit(
    ...
    callbacks=[CombineCallback()],
)
Instead, if you also want to stop the training using the new metric, you should combine the new callback with the early stopping callback:
combined_cb = CombineCallback()
early_stopping_cb = keras.callbacks.EarlyStopping(monitor="combine_metric")

model.fit(
    ...
    callbacks=[combined_cb, early_stopping_cb],
)
Be sure to place the CombineCallback before the early stopping callback in the callbacks list: callbacks are run in order, so combine_metric must already be written into the logs by the time EarlyStopping reads them.
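For completeness, here is a minimal end-to-end sketch of the whole setup. The tiny model, the random data, and the extra EarlyStopping arguments (mode, patience) are just placeholder assumptions to make the snippet runnable, not part of the original answer:

import numpy as np
import tensorflow as tf
from tensorflow import keras

class CombineCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        factor = 0.8
        logs['combine_metric'] = factor * logs['val_loss'] + (1 - factor) * logs['loss']

# Placeholder data and model, only to make the example self-contained
x = np.random.rand(256, 10)
y = np.random.rand(256, 1)

model = keras.Sequential([keras.layers.Dense(16, activation="relu"),
                          keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

early_stopping_cb = keras.callbacks.EarlyStopping(
    monitor="combine_metric",  # the key written by CombineCallback
    mode="min",                # lower combined loss is better
    patience=3,
)

model.fit(
    x, y,
    validation_split=0.2,
    epochs=50,
    # CombineCallback must come first so the metric exists when EarlyStopping runs
    callbacks=[CombineCallback(), early_stopping_cb],
)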
Moreover, you can draw more inspiration here.