
I want to evaluate my model using K-Fold Cross Validation (k=5). This means the dataset must be split into 5 parts: p1, p2, p3, p4, p5, and then:

(run1) Train: p1,p2,p3,p4 Eval: p5

(run2) Train: p1,p2,p3,p5 Eval: p4

(run3) Train: p1,p2,p4,p5 Eval: p3

(run4) Train: p1,p3,p4,p5 Eval: p2

(run5) Train: p2,p3,p4,p5 Eval: p1

At the end, I calculate the average of the evaluation metrics across all runs.

This is essentially K-Fold Cross Validation. Right now, what I am doing is regenerating the .tfrecord files each time and then running the evaluation phase for each one of these combinations. Is there a way to automate this whole procedure? Please note that there is no specific code involved: each time I regenerate the tfrecords by hand, and that regeneration step is what should be automated as well.
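One way to automate the procedure above is to generate all k train/eval splits in a loop and regenerate the records for each run inside that loop. The following is a minimal sketch in plain Python; `generate_tfrecord` is a hypothetical placeholder standing in for whatever converter you already use to build your .tfrecord files from annotation files.

```python
import random

def k_fold_splits(examples, k=5, seed=0):
    """Shuffle the examples once, split them into k roughly equal folds,
    and yield (train, eval) lists for each of the k runs."""
    items = list(examples)
    random.Random(seed).shuffle(items)  # fixed seed so splits are reproducible
    folds = [items[i::k] for i in range(k)]
    for i in range(k):
        eval_fold = folds[i]
        train_folds = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train_folds, eval_fold

if __name__ == "__main__":
    # Hypothetical example file names; replace with your own annotation files.
    examples = [f"img_{n:03d}.xml" for n in range(25)]
    for run, (train, evl) in enumerate(k_fold_splits(examples), start=1):
        # generate_tfrecord(train, f"train_run{run}.record")  # your converter here
        # generate_tfrecord(evl, f"eval_run{run}.record")
        print(f"run{run}: {len(train)} train / {len(evl)} eval examples")
```

Each example lands in the eval fold exactly once across the k runs, so after training and evaluating inside the loop you can collect the per-run metrics and average them at the end.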

Giacomo Bartoli
  • A TensorFlow object detection trained model is already giving a confidence score; you don't have to do k-fold cross validation on top of it. I don't see this serving any purpose in the case of object detection. Can you share more details on what/why you are trying to do this? – Srinivas Bringu Aug 25 '18 at 21:44
  • Starting from a network trained on VOC, I'm retraining the last layers using a small dataset made by me. As far as I know, when you have a small dataset, k-fold cross validation is the right technique for evaluation. – Giacomo Bartoli Aug 26 '18 at 09:06
  • That is for general machine learning models; you don't do that with object detection (deep learning models for image processing). The way to evaluate is manual testing with your test data, looking at the detections, bounding boxes, and confidence scores. – Srinivas Bringu Aug 26 '18 at 14:29
  • OK, I get it. Thanks a lot @SrinivasBringu – Giacomo Bartoli Aug 26 '18 at 15:45

0 Answers