
Questions about model selection using cross-validation

Let's say a dataset was split into training and test sets, and multiple models were compared using cross-validation on the training set.
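For concreteness, here is a minimal sketch of the setup I have in mind, using scikit-learn. The dataset, the two candidate models, and the 5-fold choice are just illustrative assumptions, not my actual experiment:

```python
# Illustrative sketch only: dataset, candidate models, and cv=5 are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set; it is not touched during the model comparison.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

candidates = {
    "logreg": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Compare candidates by cross-validated accuracy on the training set only.
for name, model in candidates.items():
    scores = cross_val_score(model, X_train, y_train, cv=5, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {scores.mean():.3f} (+/- {scores.std():.3f})")
```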

In one scenario, some of the models yielded exactly the same validation errors.

In another scenario, the models' accuracies might rank as 99%, 98%, 97%, 95%, 90%, and so on.

For these two scenarios, could you please advise how to choose a model, and why?

I understand that the test set is designed only to estimate generalization error. But in the scenarios above, is it now time to use the test set to evaluate those models?
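To make clear what I mean by "using the test set", here is a hedged sketch of that final step (again with an illustrative dataset and a stand-in for whichever model the cross-validation comparison favoured):

```python
# Sketch of the final step I am asking about: after the CV comparison picks a
# winner, refit it on the whole training set and score it once on the test set.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

chosen = RandomForestClassifier(random_state=0)   # stand-in for the CV winner
chosen.fit(X_train, y_train)                      # refit on all training data
print(f"test accuracy of the chosen model: {chosen.score(X_test, y_test):.3f}")
```

Is this single test-set score the right place to break ties (or confirm the ranking) from the scenarios above, or should the choice be made from the cross-validation results alone?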

