Learners,
I want to train a neural network with mini-batches using a custom loss function. Every mini-batch contains n new samples and m replay samples; the replay samples are included to avoid catastrophic forgetting.
My loss function looks like this:

loss = mse(new_samples_truth, new_samples_pred) + factor * mse(replay_samples_truth, replay_samples_pred)
As you can see, the loss is a weighted sum of two MSE terms, calculated separately for the new samples and the replay samples. That means whenever I train on a batch, I want to split it into new and replay data points and compute a single scalar loss for the entire batch.
How can I implement this loss function in Keras and use it with train_on_batch? The train_on_batch method seems to evaluate the loss function for every data point in the mini-batch separately. Since my batch mixes new and replay data points, this will not work. So how can I make Keras calculate the loss for the entire batch at once and return only one scalar?
It also seems as if Keras evaluates the loss function for every data point in the batch separately and stores the per-sample losses in an array, whereas I want the loss for the entire batch. Does anybody understand how Keras actually handles the loss calculation for batches?
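To illustrate what I mean, here is a small experiment (a sketch, assuming TensorFlow 2's tf.keras): the built-in MSE loss computes one value per sample and then reduces them to a single scalar, depending on its reduction setting.

import numpy as np
import tensorflow as tf

y_true = np.array([[1.0], [2.0], [3.0], [4.0]])
y_pred = np.array([[1.5], [2.0], [2.0], [5.0]])

# default reduction: per-sample losses are averaged into one scalar
mse_scalar = tf.keras.losses.MeanSquaredError()
print(mse_scalar(y_true, y_pred).numpy())        # 0.5625

# reduction=NONE: Keras returns one loss value per sample instead
mse_per_sample = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)
print(mse_per_sample(y_true, y_pred).numpy())    # [0.25 0.   1.   1.  ]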
Here is my pseudocode:
import pandas as pd
import tensorflow as tf

batch = pd.concat([new_samples, replay_samples])  # new_samples and replay_samples are pd.DataFrames
# len(batch) = 20: the first 10 rows are new samples, the last 10 are replay samples

def my_replay_loss(factor):
    def loss(y_true, y_pred):  # y_true and y_pred come from Keras as tensors
        mse = tf.keras.losses.MeanSquaredError()
        # slice the batch tensors instead of using pandas head()/tail()
        loss_new = mse(y_true[:10], y_pred[:10])
        loss_replay = mse(y_true[10:], y_pred[10:])
        return loss_new + factor * loss_replay
    return loss
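For reference, this is roughly how I intend to wire it up (a sketch; the tiny model, the 5 input features, and factor=0.5 are just placeholder assumptions):

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(5,))])
model.compile(optimizer='adam', loss=my_replay_loss(factor=0.5))

# hypothetical batch: 10 new samples stacked on top of 10 replay samples,
# matching the slicing in the loss function above
batch_x = np.random.rand(20, 5)
batch_y = np.random.rand(20, 1)
loss_value = model.train_on_batch(batch_x, batch_y)  # one scalar loss for the whole batch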