The assignment is to write a simple ML program that trains and predicts on a dataset of our choice, and I want to determine the best model for my data. The response is a binary class (0/1). I wrote code that tries several cross-validation methods (validation set, leave-one-out, and k-fold) on multiple models (linear regression, logistic regression, k-nearest neighbors, and linear discriminant analysis). For each model, I report the MSE from each cross-validation method and track the lowest one; I then pick the model with the lowest tracked MSE. This is where I think I went wrong: if I am cross-validating multiple models, should I use the same cross-validation method for all of them?
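For reference, here is a minimal sketch of the procedure I described, using scikit-learn. The synthetic dataset, the 30% validation split, the 5 folds, and the model hyperparameters are stand-ins for my actual setup, not the real code:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import (
    cross_val_score, KFold, LeaveOneOut, train_test_split,
)
from sklearn.metrics import mean_squared_error

# Placeholder data: binary response (0/1), as in my assignment.
X, y = make_classification(n_samples=100, n_features=5, random_state=0)

models = {
    "linear regression": LinearRegression(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-nearest neighbors": KNeighborsClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
}

results = {}
for name, model in models.items():
    mses = {}

    # Validation-set approach: one fixed train/validation split.
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.3, random_state=0
    )
    model.fit(X_tr, y_tr)
    mses["validation set"] = mean_squared_error(y_val, model.predict(X_val))

    # LOOCV and k-fold via cross_val_score with negative-MSE scoring.
    for label, cv in [
        ("LOOCV", LeaveOneOut()),
        ("5-fold", KFold(n_splits=5, shuffle=True, random_state=0)),
    ]:
        scores = cross_val_score(
            model, X, y, cv=cv, scoring="neg_mean_squared_error"
        )
        mses[label] = -scores.mean()

    # Track the lowest MSE across the three CV methods for this model.
    results[name] = min(mses.values())

# Pick the model whose tracked (lowest) MSE is smallest overall.
best = min(results, key=results.get)
print(best, results[best])
```

This is the step I'm unsure about: each model's reported score may come from a different cross-validation method, so the final comparison mixes estimates produced under different procedures.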