
I am using the TensorFlow Object Detection API to fine-tune a pretrained model from the model zoo for custom object detection. Once my model has converged, I use eval_util.py with EvalConfig.metrics_set='open_images_V2_detection_metrics' to obtain the mAP (and class-specific APs), which lets me measure the quality of my model.

But mAP alone is not enough for my purposes. For a better analysis, I want to know the exact breakdown of my model's results into false positives, false negatives and true positives. I want to see this breakdown in terms of the actual test images: that is, I want my test images to be sorted into those three groups, automatically.

How can I do that?

I tried searching through TensorFlow's official documentation and, to some extent, through the relevant Python files on GitHub, but I haven't found a way yet.
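To make concrete what I mean by the breakdown, here is a rough per-image sketch I have in mind. This is written from scratch (it is not part of the Object Detection API), uses a simple greedy IoU matcher, and all names are illustrative:

```python
# Sketch: count TP / FP / FN for one image by greedy IoU matching.
# Boxes are (xmin, ymin, xmax, ymax) tuples; classes are ignored
# here for simplicity.

def iou(a, b):
    """Intersection-over-union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def classify_image(gt_boxes, det_boxes, iou_thresh=0.5):
    """Return (num_tp, num_fp, num_fn) for one image.

    Each detection (ideally iterated in descending-score order) is
    matched to the best still-unmatched ground-truth box; a match at
    or above iou_thresh is a TP, otherwise the detection is an FP.
    Ground-truth boxes left unmatched at the end are FNs.
    """
    matched_gt = set()
    tp = fp = 0
    for det in det_boxes:
        best_iou, best_gt = 0.0, None
        for i, gt in enumerate(gt_boxes):
            if i in matched_gt:
                continue
            overlap = iou(det, gt)
            if overlap > best_iou:
                best_iou, best_gt = overlap, i
        if best_iou >= iou_thresh:
            tp += 1
            matched_gt.add(best_gt)
        else:
            fp += 1
    fn = len(gt_boxes) - len(matched_gt)
    return tp, fp, fn
```

With per-image counts like these, the images themselves could then be copied into tp/fp/fn folders depending on which counts are non-zero.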

bappak
  • Did you find any solution to this? I ran into this same problem. – nirvair Nov 15 '18 at 13:37
  • Does this answer your question? [What's the correct way to compute a confusion matrix for object detection?](https://stackoverflow.com/questions/46110545/whats-the-correct-way-to-compute-a-confusion-matrix-for-object-detection) – Emanuel Huber Sep 14 '20 at 18:53

2 Answers


I think what you are looking for is a confusion matrix. Take a look at this link: TensorFlow Confusion Matrix

You can basically evaluate your predictions with this function.
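Note that `tf.math.confusion_matrix` compares flat vectors of class labels, so for object detection you first have to match detections to ground-truth boxes (by IoU) and then tabulate the matched class pairs, with an extra "background" row/column for unmatched boxes. A minimal sketch of that tabulation step, assuming the matching is already done (this helper is illustrative, not a library function):

```python
# Sketch: build a detection-style confusion matrix from matched
# (gt_class, det_class) pairs. None on either side means that box
# had no match; such boxes go into the extra "background" slot.

def detection_confusion_matrix(pairs, num_classes):
    """Return an (num_classes+1) x (num_classes+1) nested list.

    Index num_classes is 'background': row num_classes holds spurious
    detections (false positives), column num_classes holds missed
    ground-truth boxes (false negatives).
    """
    bg = num_classes
    m = [[0] * (num_classes + 1) for _ in range(num_classes + 1)]
    for gt, det in pairs:
        row = bg if gt is None else gt
        col = bg if det is None else det
        m[row][col] += 1
    return m
```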

Smokrow

We ran into this problem too. We found some clues in object_detection/utils/metrics.py; maybe you can give it a try. Please share your solution if you find one!
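The idea behind the precision/recall computation in that file can be re-implemented in a few lines; the sketch below is our own simplified version (the actual function names and signatures in metrics.py may differ), where detections are sorted by confidence and TP/FP counts are accumulated:

```python
# Sketch: cumulative precision/recall from per-detection TP flags,
# the kind of computation object_detection/utils/metrics.py performs.

def precision_recall(scores, tp_flags, num_gt):
    """scores: detection confidences; tp_flags: 1 for TP, 0 for FP;
    num_gt: total number of ground-truth boxes.
    Returns cumulative precision and recall lists, computed over
    detections sorted by descending score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp_cum = fp_cum = 0
    precision, recall = [], []
    for i in order:
        if tp_flags[i]:
            tp_cum += 1
        else:
            fp_cum += 1
        precision.append(tp_cum / (tp_cum + fp_cum))
        recall.append(tp_cum / num_gt)
    return precision, recall
```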