I am trying to get evaluation metrics on detectron2 using COCOEvaluator. However, there are 40k+ images in the evaluation set, which causes each evaluation to take around 45 minutes. The dataset was downloaded from the COCO website itself.
[09/07 23:58:44 d2.data.datasets.coco]: Loaded 40504 images in COCO format from annotations/instances_val2014.json
[09/07 23:58:51 d2.evaluation.evaluator]: Start inference on 40504 batches
[09/07 23:58:56 d2.evaluation.evaluator]: Inference done 11/40504. Dataloading: 0.0003 s/iter. Inference: 0.0667 s/iter. Eval: 0.0002 s/iter. Total: 0.0673 s/iter. ETA=0:45:24
...
I used register_coco_instances to register my train and test datasets.
from detectron2.data.datasets import register_coco_instances

register_coco_instances(name=train_dataset_name, metadata={}, json_file=train_json_annotation_path, image_root=train_images_path)
register_coco_instances(name=test_dataset_name, metadata={}, json_file=test_json_annotation_path, image_root=test_images_path)
Is there any way to evaluate only a subset of the data (e.g. 5k images) instead of the whole 40k+ dataset?
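One approach I have considered is sampling the annotations JSON down to 5k images and registering that smaller file as its own dataset, so COCOEvaluator only runs inference on the sample. A minimal sketch of what I mean (the output filename, sample size, and seed here are my own illustrative choices, not anything detectron2 provides):

import json
import random

# Load the full COCO val annotations
with open("annotations/instances_val2014.json") as f:
    coco = json.load(f)

# Reproducibly sample 5k images
random.seed(42)
subset_images = random.sample(coco["images"], 5000)
subset_ids = {img["id"] for img in subset_images}

# Keep only the annotations that belong to the sampled images
subset = {
    "info": coco.get("info", {}),
    "licenses": coco.get("licenses", []),
    "categories": coco["categories"],
    "images": subset_images,
    "annotations": [a for a in coco["annotations"] if a["image_id"] in subset_ids],
}

with open("annotations/instances_val2014_5k.json", "w") as f:
    json.dump(subset, f)

The resulting JSON could then be registered with register_coco_instances exactly like the full validation set. Is that a reasonable way to do it, or does detectron2 have a built-in option for evaluating on a subset?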