I've read that once I've tuned my hyperparameters using k-fold cross-validation (on the training set), I should train my model on the entire training set and then evaluate my model on the test set.

However, doesn't this reintroduce the problem that cross-validation is meant to solve, namely that a single held-out split may not be representative of the underlying data distribution?
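For concreteness, here is a minimal sketch of the workflow being asked about, using scikit-learn (assumed available; the dataset and hyperparameter grid are arbitrary illustrations). `GridSearchCV` runs k-fold cross-validation on the training split only, and with `refit=True` (its default) it retrains the best model on the entire training set, so the test set is touched exactly once at the end:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression

# Illustrative synthetic data; any dataset would do.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10]},  # hypothetical grid
    cv=5,  # 5-fold CV, run on the training set only
)
search.fit(X_train, y_train)   # tunes C, then refits best model on all of X_train
test_score = search.score(X_test, y_test)  # single final evaluation on the test set
```

The point of the question is the last line: everything before it uses only the training data, and the test set is consulted once.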
