I am doing an experiment where I need to keep track of when the model would eventually early stop, but without actually early stopping training. Why? Because I need to analyze how the model actually behaves after it has apparently started to overfit.
So, is there a way of keeping track of when the model would early stop without actually early stopping?
Of course, one could manually track the validation loss (or another performance metric) and decide when the model would be overfitting (e.g., after 5 epochs with no improvement on the validation data, record a possible "overfitting" point), but I was wondering if there's a callback or something that would allow me to do this automatically.
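For concreteness, here is a minimal sketch of what I have in mind, assuming a Keras/TensorFlow setup (`EarlyStoppingTracker` is a name I made up, not a built-in callback; it mimics `EarlyStopping`'s patience logic but only records the would-be stopping epochs instead of halting training):

```python
import numpy as np
import tensorflow as tf

class EarlyStoppingTracker(tf.keras.callbacks.Callback):
    """Hypothetical callback: records the epochs at which EarlyStopping
    *would* have fired, without actually interrupting training."""

    def __init__(self, monitor="val_loss", patience=5, min_delta=0.0):
        super().__init__()
        self.monitor = monitor
        self.patience = patience
        self.min_delta = min_delta
        self.best = np.inf          # best monitored value seen so far
        self.wait = 0               # epochs since last improvement
        self.stop_epochs = []       # epochs where early stopping would trigger

    def on_epoch_end(self, epoch, logs=None):
        current = (logs or {}).get(self.monitor)
        if current is None:
            return  # monitored metric not available this epoch
        if current < self.best - self.min_delta:
            self.best = current
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                # Record the would-be stop, then reset the counter so any
                # later would-be stops are also captured.
                self.stop_epochs.append(epoch)
                self.wait = 0
```

Then something like `tracker = EarlyStoppingTracker(patience=5)`, passing `callbacks=[tracker]` to `model.fit(...)`, and inspecting `tracker.stop_epochs` afterwards. Is there an existing callback that does this, or is a custom one like the above the way to go?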