
I have run a model for 4 epochs using early stopping.

from tensorflow.keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor='val_loss', mode='min', patience=2, restore_best_weights=True)
history = model.fit(trainX, trainY, validation_data=(testX, testY), epochs=4, callbacks=[early_stopping])

Epoch 1/4
812/812 [==============================] - 68s 13ms/sample - loss: 0.6072 - acc: 0.717 - val_loss: 0.554 - val_acc: 0.7826
Epoch 2/4
812/812 [==============================] - 88s 11ms/sample - loss: 0.5650 - acc: 0.807 - val_loss: 0.527 - val_acc: 0.8157
Epoch 3/4
812/812 [==============================] - 88s 11ms/sample - loss: 0.5456 - acc: 0.830 - val_loss: 0.507 - val_acc: 0.8244
Epoch 4/4
812/812 [==============================] - 51s 9ms/sample - loss: 0.658 - acc: 0.833 - val_loss: 0.449 - val_acc: 0.8110

The highest val_acc corresponds to the third epoch, and is 0.8244. However, the accuracy_score function returns the value from the last epoch, which is 0.8110.

from sklearn.metrics import accuracy_score

yhat = model.predict_classes(testX)
accuracy = accuracy_score(testY, yhat)

Is it possible to specify the epoch when calling predict_classes, in order to get the highest accuracy (in this case, the one corresponding to the third epoch)?

Kyv

1 Answer


It looks like early stopping isn't being triggered because you're only training for 4 epochs and you've set early stopping to trigger when val_loss doesn't decrease over two epochs. If you look at your val_loss for each epoch, you can see it's still decreasing even on the fourth epoch.

So simply put, your model is just running the full four epochs without early stopping ever firing, which is why it ends up with the weights learned in epoch 4 rather than the best in terms of val_acc. Note also that `restore_best_weights=True` only restores the best weights when early stopping actually triggers; if training runs to completion, the final-epoch weights are kept.

To fix this, set monitor='val_acc' (with mode='max', since higher accuracy is better) and run for a few more epochs. val_acc only starts to decrease after epoch 3, so early stopping won't trigger until epoch 5 at the earliest.

Alternatively, you could set patience=1 so training stops after a single epoch without improvement.
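To see why it never fires with your numbers, here is a minimal sketch of the patience logic (a simplified illustration, not the actual Keras implementation) applied to the metrics from your training log:

```python
def early_stopping_epoch(metrics, mode="max", patience=2):
    """Return the 1-based epoch at which patience-style early stopping
    would halt training, or None if it never triggers.
    Simplified sketch: no min_delta, no weight restoration."""
    best = metrics[0]
    wait = 0
    for epoch, m in enumerate(metrics[1:], start=2):
        improved = m > best if mode == "max" else m < best
        if improved:
            best = m
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# val_loss from the question decreases every epoch, so with
# monitor='val_loss' the patience counter never runs out:
print(early_stopping_epoch([0.554, 0.527, 0.507, 0.449], mode="min"))  # None

# val_acc only drops after epoch 3, so even with monitor='val_acc'
# and patience=2, nothing can trigger within 4 epochs:
print(early_stopping_epoch([0.7826, 0.8157, 0.8244, 0.8110], mode="max"))  # None
```

Either way, the stopping condition needs room to be violated for `patience` consecutive epochs, which four epochs of still-improving metrics never provide.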

ML_Engine
  • Thank you for your answer @ML_Engine. I ran the model for many more epochs; I just showed a few of them to ask the question. I think what I needed was changing `monitor='val_loss'` to `monitor='val_acc'`. I will try that. – Kyv Apr 14 '21 at 13:14
  • Glad it helped! Would appreciate an 'accept' if I was able to help :) – ML_Engine Apr 14 '21 at 13:15
  • I was testing it. `early_stopping = EarlyStopping(monitor='val_acc', mode='max', patience=4)/// history = model.fit(trainX, trainY, validation_data=(testX, testY), epochs=25, verbose=1, batch_size=32, callbacks=[early_stopping])`. Unfortunately `accuracy_score(testY, yhat)` still returns the last `val_acc`. Is there anything I am missing? – Kyv Apr 14 '21 at 21:36
  • Can you post your training output again please? For each epoch – ML_Engine Apr 15 '21 at 08:46
  • Sorry, I have already reset it. But it looks like the one I have provided in the question. – Kyv Apr 15 '21 at 09:45
  • Is there any point at which the accuracy does not improve over 4 epochs? If not, then it won't trigger early stopping. You could also try setting `min_delta` to something like 0.01, which means the metric must improve by at least 0.01 to reset the patience counter. – ML_Engine Apr 15 '21 at 10:26