I'm confused about how to properly do k-fold cross-validation because I've seen it done two ways:
The first way is to split the data set into k partitions: one for testing, one for validation, and the rest for training. Each partition ends up being used for validation exactly once and for testing exactly once.
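For concreteness, here is a rough sketch of how I understand the first way, assuming scikit-learn-style estimators, NumPy arrays, and a hypothetical `make_model()` factory (all purely illustrative): each fold serves as the test set once and as the validation set once (here, simply the next fold in the rotation), with the remaining folds used for training.

```python
import numpy as np
from sklearn.model_selection import KFold

def rotating_kfold(X, y, make_model, k=5):
    # Partition the data into k folds.
    kf = KFold(n_splits=k, shuffle=True, random_state=0)
    folds = [test_idx for _, test_idx in kf.split(X)]
    test_scores = []
    for i in range(k):
        test_idx = folds[i]                # this fold is the test set
        val_idx = folds[(i + 1) % k]       # the next fold is the validation set
        train_idx = np.hstack([folds[j] for j in range(k)
                               if j not in (i, (i + 1) % k)])
        model = make_model()
        model.fit(X[train_idx], y[train_idx])
        # The validation score would drive tuning/model selection here.
        _ = model.score(X[val_idx], y[val_idx])
        test_scores.append(model.score(X[test_idx], y[test_idx]))
    return np.mean(test_scores)
```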
The second way is to split the data set into two partitions: one for testing and one for training/validation. You then split the training/validation set into k partitions, using one for validation and the rest for training on each iteration. Each partition of the training/validation set ends up being used for validation exactly once, while the testing set remains the same for every cross-validation iteration.
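And a corresponding sketch of the second way, under the same assumptions (scikit-learn-style estimators, NumPy arrays, and a hypothetical `make_model()` factory): the test set is split off once and never touched during the k validation rounds.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def holdout_then_kfold(X, y, make_model, k=5, test_size=0.2):
    # One fixed held-out test set; cross-validate only on the rest.
    X_trval, X_test, y_trval, y_test = train_test_split(
        X, y, test_size=test_size, random_state=0)
    kf = KFold(n_splits=k, shuffle=True, random_state=0)
    val_scores = []
    for train_idx, val_idx in kf.split(X_trval):
        model = make_model()
        model.fit(X_trval[train_idx], y_trval[train_idx])
        val_scores.append(model.score(X_trval[val_idx], y_trval[val_idx]))
    # After model selection, refit on all training/validation data and
    # evaluate once on the untouched test set.
    final_model = make_model().fit(X_trval, y_trval)
    return np.mean(val_scores), final_model.score(X_test, y_test)
```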
Which method here is correct and why? Or are they both valid?
Edit: The question linked as a duplicate does not answer this one. I'm asking about the validity of two potential cross-validation methods.
The linked question asks about the order in which the training, validation, and testing sets are used in various validation methods (holdout, another method, and the second cross-validation approach I described above).
I now see that the second approach is valid, since it was addressed and answered there. But what about the first method I described?