
I have a dataset of 900 images distributed across 6 classes, with 150 images per class. To develop a classifier and assess its performance, I will use k-fold cross-validation; in this case, 3-fold cross-validation.

For each fold, I will allocate 70% of the data for training and the remaining 30% for testing. Consequently, the training partition will contain 105 images per class. However, during the training phase I will select only 20 images per class to train the model. When evaluating the model, I will assess its performance on the entire test partition.

To report the overall performance of the model, I will calculate the average test accuracy across the 3 folds and present this averaged accuracy as the final performance metric.
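To make the protocol above concrete, here is a minimal sketch of the described splitting and subsampling scheme. It assumes scikit-learn and NumPy, and uses random placeholder features in place of the real images; note that a 70/30 split repeated 3 times corresponds to `StratifiedShuffleSplit` rather than a classic `KFold` partition, which is part of what the question is asking about:

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Hypothetical stand-in for the 900-image dataset: 6 classes, 150 each.
rng = np.random.default_rng(0)
X = rng.normal(size=(900, 32))     # placeholder feature vectors, not real images
y = np.repeat(np.arange(6), 150)   # class labels 0..5

# Three stratified 70/30 splits (one per "fold").
splitter = StratifiedShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
fold_sizes = []
for train_idx, test_idx in splitter.split(X, y):
    # Training partition holds 105 images per class; subsample 20 per class.
    sub_idx = np.concatenate([
        rng.choice(train_idx[y[train_idx] == c], size=20, replace=False)
        for c in range(6)
    ])
    # A real pipeline would fit on X[sub_idx], y[sub_idx] and score on the
    # full test partition X[test_idx], y[test_idx], then average the 3 scores.
    fold_sizes.append((len(sub_idx), len(test_idx)))

print(fold_sizes)  # each fold: 120 training images used, 270 test images
```

Per fold this yields 6 × 20 = 120 training images actually used and 270 (30% of 900) test images, matching the counts described above.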

Given that I use only a subset of the training partition, can this approach still be referred to as "3-fold cross-validation"?

noone

0 Answers