I am using the Tensorflow Object Detection API to fine-tune a pretrained model from the model zoo for custom object detection. Once my model has converged, I use eval_util.py with EvalConfig.metrics_set='open_images_V2_detection_metrics' to obtain the mAP (and class-specific APs), which lets me measure the quality of my model.
But mAP alone is not enough for my purposes. For better analysis, I want an exact breakdown of my model's results into true positives, false positives and false negatives. I want to see this breakdown in terms of actual test images - that is, I want my test images to be automatically sorted into those three groups.
How can I do that?
I tried searching through Tensorflow's official documentation and, to some extent, the relevant Python files on GitHub, but I haven't found a way to do this yet.
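For reference, here is the kind of per-image logic I imagine would be needed - a minimal sketch of greedy IoU matching, written by hand rather than taken from the API (the box format, function names, and the 0.5 IoU threshold are my own assumptions, not something eval_util.py exposes):

```python
# Sketch: count TP/FP/FN for one image by greedily matching predicted
# boxes to ground-truth boxes via IoU. Boxes are assumed to be
# (ymin, xmin, ymax, xmax) tuples; the 0.5 threshold is an assumption.

def iou(a, b):
    """Intersection-over-union of two boxes."""
    ymin = max(a[0], b[0]); xmin = max(a[1], b[1])
    ymax = min(a[2], b[2]); xmax = min(a[3], b[3])
    inter = max(0.0, ymax - ymin) * max(0.0, xmax - xmin)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_image(pred_boxes, gt_boxes, iou_thresh=0.5):
    """Return (tp, fp, fn) counts for a single image.

    Each prediction is matched to the still-unmatched ground-truth box
    with the highest IoU >= iou_thresh; unmatched predictions are FPs,
    unmatched ground-truth boxes are FNs.
    """
    matched_gt = set()
    tp = 0
    for p in pred_boxes:
        best, best_iou = None, iou_thresh
        for i, g in enumerate(gt_boxes):
            if i in matched_gt:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched_gt.add(best)
            tp += 1
    fp = len(pred_boxes) - tp
    fn = len(gt_boxes) - len(matched_gt)
    return tp, fp, fn
```

With per-image counts like these I could then copy each test image into a TP, FP or FN folder (an image with both FPs and FNs would need a tie-breaking rule), but I would much rather use whatever the API itself provides than maintain this by hand.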