I have a Keras model (Sequential) in Python 3:
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.matthews_correlation = []

    def on_epoch_end(self, epoch, logs={}):
        self.matthews_correlation.append(logs.get('matthews_correlation'))
...
model.compile(loss='mean_squared_error', optimizer='adam',
              metrics=['matthews_correlation'])
history = LossHistory()
model.fit(Xtrain, Ytrain, nb_epoch=10, batch_size=10, callbacks=[history])
scores = model.evaluate(Xtest, Ytest, verbose=1)
...
MCC = matthews_correlation(Ytest, predictions)
model.fit() prints out, presumably because of the metrics=['matthews_correlation'] argument, a progress bar and a Matthews correlation coefficient (MCC) for each epoch. But these values are rather different from what the MCC call at the very end returns. That final MCC is the overall MCC of the prediction and is consistent with sklearn's MCC function (i.e. I trust that value).
1) What are the scores from model.evaluate()? They are totally different from both the final MCC and the per-epoch MCCs.
2) What are the MCCs from the epochs? It looks like this:
Epoch 1/10 580/580 [===========] - 0s - loss: 0.2500 - matthews_correlation: -0.5817
How are they calculated, and why do they differ so much from the final MCC?
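To illustrate why the per-epoch numbers can disagree with the final value: Keras computes a metric batch by batch and reports the running average, and the mean of per-batch MCCs is generally not the MCC over the whole set. A minimal sketch with sklearn (the data and batch split here are made up):

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, size=100)
y_pred = rng.randint(0, 2, size=100)

# MCC computed once over all 100 samples:
global_mcc = matthews_corrcoef(y_true, y_pred)

# Mean of per-batch MCCs (batch_size=10), analogous to the
# running average a batch-wise metric produces:
batch_mccs = [matthews_corrcoef(y_true[i:i + 10], y_pred[i:i + 10])
              for i in range(0, 100, 10)]
mean_batch_mcc = np.mean(batch_mccs)

print(global_mcc, mean_batch_mcc)
```

MCC is a nonlinear function of the whole confusion matrix, so averaging it over small batches does not recover the global value.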
3) Can I somehow add the function matthews_correlation() to on_epoch_end()? Then I could print an independently calculated MCC. I don't know what Keras does implicitly.
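One way to do this, sketched under the assumption that sklearn's matthews_corrcoef is acceptable as the independent implementation: pass the evaluation data into the callback and compute the MCC over the full set at each epoch end, bypassing Keras' batch-wise averaging. The class and attribute names here are my own choice, and the 0.5 threshold assumes a single sigmoid output:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef
import keras

class MCCHistory(keras.callbacks.Callback):
    def __init__(self, X_val, y_val):
        super().__init__()
        self.X_val = X_val
        self.y_val = y_val
        self.mcc_per_epoch = []

    def on_epoch_end(self, epoch, logs=None):
        # Predict on the full set and threshold to class labels:
        y_pred = (self.model.predict(self.X_val) > 0.5).astype(int).ravel()
        mcc = matthews_corrcoef(self.y_val, y_pred)
        self.mcc_per_epoch.append(mcc)
        print(' - mcc (sklearn): %.4f' % mcc)
```

Used as `model.fit(..., callbacks=[MCCHistory(Xtest, Ytest)])`, this prints one full-dataset MCC per epoch, which should match the final sklearn-consistent value when evaluated on the same data.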
Thanks for your help.
Edit: Here is an example of how they record a history of the loss. If I print(history.matthews_correlation), I get a list of the same MCCs that the progress report shows.