
I am training a CNN and I am getting 85% accuracy on the training set, but only 65% accuracy on the test set.

Is it okay to assume that, with a proper setting of the network's regularization (dropout and L2 in my case), my test accuracy should get very close to my training accuracy (which will at the same time decrease as the regularization increases)?

So, for instance, something like 75% training vs. 74% test accuracy?
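For reference, the two regularizers I mention can be sketched in plain NumPy (a toy illustration of the mechanics, not my actual model; all names here are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_penalty(weights, lam):
    # L2 regularization: lam times the sum of squared weights,
    # added to the data loss during training.
    return lam * sum(np.sum(w ** 2) for w in weights)

def dropout(activations, rate, training=True):
    # Inverted dropout: zero a random fraction of units at train time
    # and rescale the rest so the expected activation is unchanged.
    # At test time the activations pass through untouched.
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

# Toy example: one weight matrix and one batch of activations.
w = rng.standard_normal((4, 3))
acts = rng.standard_normal((2, 4))

penalty = l2_penalty([w], lam=1e-3)   # goes into the training loss
dropped = dropout(acts, rate=0.5)     # applied during training only
```

Increasing `lam` or `rate` constrains the model more, which is why training accuracy drops as the train/test gap closes.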

sdiabr
  • This question might belong better in the Cross Validated stack exchange. – enumaris Mar 29 '18 at 22:21
  • Okay, thanks, I am still getting to know this forum. Should any question related to neural networks then go to Cross Validated? – sdiabr Mar 30 '18 at 07:53
  • StackOverflow is usually for helping to debug code (hence the name derived from the stack trace of errors that might pop up). Cross Validated is for more general Machine Learning/Data Science kind of questions. – enumaris Mar 30 '18 at 18:39
  • All right all right, thanks for the info. – sdiabr Mar 30 '18 at 20:07

1 Answer


With proper regularization of all of the network's parameters, and a training set that is representative of the data, the gap between your training and test accuracy should be small. But of course you still need to tune your model through hyperparameter optimization and feature selection to find the best trade-off.

Maybe you can check this link to find some more information.

Hope it helps !

AdriBento
  • Okay, understood. Now that you sent me that link, I would like to ask you about the meaning of this sentence (found in the link): "...performed an extensive 10-fold cross validation exercise to tune the parameters and fit the final model." How is this 10-fold CV used to tune the parameters? Do you look for the hyperparameters that best fit each iteration of the 10-fold CV, and then average them or something? – sdiabr Mar 30 '18 at 06:27
  • Sorry, I'm like you on this point; I don't really understand that sentence either. – AdriBento Mar 30 '18 at 08:57
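To clarify the point raised in the comments: in the usual setup, 10-fold CV for tuning means scoring each candidate hyperparameter setting on all 10 folds, averaging the scores (not the fitted parameters), and keeping the setting with the best average; the final model is then refit on all the training data with that setting. A minimal sketch of the mechanics (the model and its scoring function here are hypothetical stand-ins, not from the linked article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples with 5 features.
X = rng.standard_normal((100, 5))
y = rng.integers(0, 2, size=100)

def fit_and_score(X_tr, y_tr, X_val, y_val, lam):
    # Placeholder for training a model with regularization strength lam
    # and returning its validation accuracy. We fake a score so the CV
    # mechanics are runnable; a real model would go in its place.
    return 1.0 / (1.0 + abs(lam - 0.1))

def cv_score(X, y, lam, k=10):
    # Split the indices into k folds; each fold serves as the
    # validation set once while the rest is used for training.
    folds = np.array_split(rng.permutation(len(X)), k)
    scores = []
    for i in range(k):
        val_idx = folds[i]
        tr_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(fit_and_score(X[tr_idx], y[tr_idx],
                                    X[val_idx], y[val_idx], lam))
    # Average the k validation scores for this hyperparameter setting.
    return float(np.mean(scores))

candidates = [0.001, 0.01, 0.1, 1.0]
best_lam = max(candidates, key=lambda lam: cv_score(X, y, lam))
# best_lam is then used to refit the model on ALL the training data;
# the 10 per-fold models themselves are discarded.
```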