
I've noticed that when validation_split is not specified, the parameter is automatically set to 0. I've been using early stopping all this time without a validation_split, and what I find odd is that even without specifying a validation split, training still stops early. This surprises me, since I thought training would only stop when the validation score stops improving. Any idea why this happens? I would love to know.
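For reference, a minimal sketch of the kind of setup in question, assuming Keras/TensorFlow (the framework isn't named above); the model, data, and patience value are just placeholders:

```python
import numpy as np
import tensorflow as tf

# Toy data standing in for the real training set.
x = np.random.rand(1000, 10)
y = np.random.rand(1000, 1)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# EarlyStopping at its defaults monitors "val_loss".
early_stop = tf.keras.callbacks.EarlyStopping(patience=5)

# No validation_split / validation_data is passed here, so validation_split
# keeps its default of 0 and no val_loss is ever computed during training.
model.fit(x, y, epochs=100, callbacks=[early_stop])
```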

1 Answer


Not knowing your framework, I can only guess that training stops early when the training loss slightly increases, rather than the validation loss.
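If the framework is Keras (an assumption here, given the validation_split / val_loss terminology), the behaviour guessed at above would correspond to the callback monitoring the training loss rather than the validation loss, roughly:

```python
import tensorflow as tf

# Stop once the *training* loss has not improved for `patience` epochs;
# no validation data is needed for this. Values are placeholders.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="loss",            # training loss instead of the default "val_loss"
    patience=5,
    min_delta=0.0,
    restore_best_weights=True,
)
```

With the default monitor="val_loss" and no validation data at all, the Keras versions I have used log a warning that the monitored metric is unavailable rather than stopping, so the training log is worth checking.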

Daniel Exxxe
  • Yeah, I thought that as well at first, but that's not the case. There is an actual val_loss value reported at the end of each epoch, and training stops when there is no improvement in this val_loss for x number of epochs. – Brockenspook May 23 '22 at 22:52
  • Never mind, then. Maybe you are right: if val_loss stays constant (not improving), then training also stops (see the sketch below). – Daniel Exxxe May 23 '22 at 23:12
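A rough sketch of the patience behaviour the comments describe, in plain Python; this only illustrates the "stop after x epochs without improvement" idea and is not the library's actual implementation:

```python
# Stop once the monitored value has failed to improve for `patience`
# consecutive epochs. A constant value counts as "not improving".
def epochs_run(losses, patience=3, min_delta=0.0):
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses, start=1):
        if loss < best - min_delta:   # improvement: reset the counter
            best = loss
            wait = 0
        else:                         # no improvement (including a plateau)
            wait += 1
            if wait >= patience:
                return epoch          # training would stop here
    return len(losses)

# A plateauing loss curve: training stops after `patience` flat epochs.
print(epochs_run([0.9, 0.7, 0.5, 0.5, 0.5, 0.5, 0.5]))  # -> 6
```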