
I want to use the early stopping method to avoid overfitting in a neural network. I have divided my dataset 60-20-20:

60% - training set, 20% - validation set, 20% - test set

I have a question about implementing early stopping:

  1. We update the weights for one epoch using the training set, which gives us the training error.
  2. We then need to compute the error on the validation set. Should we average the errors over all validation instances? E.g., say I have 200 validation instances. Since I am not updating the weights, I compute the error for each instance separately. Should I then average over all 200 validation instances and report that as the validation error?

Thanks, Atish

alex

2 Answers


Yes. The most commonly used error measure is the mean squared error (MSE), which is the average of the squared errors over all training/validation samples.
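To make the averaging concrete, here is a minimal sketch with NumPy. The `targets` and `predictions` arrays are synthetic stand-ins for your 200 validation instances and your network's outputs on them:

```python
import numpy as np

# Synthetic stand-ins for 200 validation targets and model predictions.
rng = np.random.default_rng(0)
targets = rng.normal(size=200)
predictions = targets + rng.normal(scale=0.1, size=200)

# Per-instance squared error, then the average over the whole validation set.
squared_errors = (predictions - targets) ** 2
validation_mse = squared_errors.mean()

print(f"validation MSE: {validation_mse:.4f}")
```

You report the single number `validation_mse` as the validation error for that epoch.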

BartoszKP

Yes, that's correct. You have to find the point where the error on the validation set starts increasing instead of decreasing, and stop training there.

Thomas Haller