
I'm new to deep learning and am trying cell segmentation with Detectron2's Mask R-CNN. I use the images and mask images from http://celltrackingchallenge.net/2d-datasets/ — the "Simulated nuclei of HL60 cells" training dataset. The folder I am using is here

I tried to create and register a new dataset following the balloon dataset format from the Detectron2 Colab tutorial. I have one class, "cell".

My problem is that after I train the model, no masks are visible when visualizing predictions, and there are no bounding boxes or prediction scores either. A visualized annotated image looks like this, but the predicted mask image is just a black background, like this.

What could I be doing wrong? The Colab notebook I made is here
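For context, the dataset function follows the balloon-tutorial shape: each image gets a record dict with per-instance annotations. A simplified sketch of such a converter, assuming the Cell Tracking Challenge masks are already loaded as 2D integer-label numpy arrays (0 = background, k = cell k; the actual TIFF loading and polygon extraction are omitted):

```python
import numpy as np

def mask_to_record(mask, image_path, image_id):
    """Convert a 2D instance-label mask into a Detectron2-style
    record dict with one annotation per labeled cell."""
    height, width = mask.shape
    annotations = []
    for label in np.unique(mask):
        if label == 0:  # skip background
            continue
        ys, xs = np.nonzero(mask == label)
        annotations.append({
            "bbox": [int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())],
            "bbox_mode": 0,   # BoxMode.XYXY_ABS
            "category_id": 0, # single class: "cell"
            # real code must also fill "segmentation" with polygons
            # (e.g. from cv2.findContours), or Mask R-CNN has no mask targets
        })
    return {
        "file_name": image_path,
        "image_id": image_id,
        "height": height,
        "width": width,
        "annotations": annotations,
    }
```

One common cause of all-black predictions is exactly the missing or malformed `"segmentation"` field, which is worth double-checking in the registered dicts.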

2 Answers


I had a problem similar to yours: the network predicted the box and the class, but not the mask. The first thing to note is that the `DefaultTrainer` automatically resizes your images, so you need to create a custom mapper to avoid this. Second, add data augmentation, which significantly improves convergence and generalization.

First, avoid the resize:

cfg.INPUT.MIN_SIZE_TRAIN = (608,)
cfg.INPUT.MAX_SIZE_TRAIN = 608
cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING = "choice"
cfg.INPUT.MIN_SIZE_TEST = 608
cfg.INPUT.MAX_SIZE_TEST = 608
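To see why these values stop the resize, here is a small pure-Python re-implementation of the shortest-edge resize rule that Detectron2's `ResizeShortestEdge` augmentation applies (my own sketch for illustration, not the library code):

```python
def shortest_edge_resize(h, w, min_size, max_size):
    """Scale so the shorter side becomes min_size, then shrink
    further if the longer side would exceed max_size."""
    scale = min_size / min(h, w)
    if max(h, w) * scale > max_size:
        scale = max_size / max(h, w)
    return round(h * scale), round(w * scale)
```

With Detectron2's defaults (shorter side 800, longer capped at 1333) a 608×608 image would be upscaled to 800×800; with `MIN_SIZE` and `MAX_SIZE` both set to 608, the same image passes through unchanged.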

See also: https://gilberttanner.com/blog/detectron-2-object-detection-with-pytorch/

How to use detectron2's augmentation with datasets loaded using register_coco_instances

https://eidos-ai.medium.com/training-on-detectron2-with-a-validation-set-and-plot-loss-on-it-to-avoid-overfitting-6449418fbf4e
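Independent of Detectron2's own augmentation API (covered in the links above), the essential property for segmentation is that the image and its mask receive the identical transform, so the annotations stay aligned with the pixels. A minimal numpy illustration with a hypothetical helper, not Detectron2 code:

```python
import numpy as np

def random_hflip(image, mask, rng):
    """Apply the same random horizontal flip to image and mask,
    keeping the instance labels aligned with the pixels."""
    if rng.random() < 0.5:
        image = image[:, ::-1].copy()
        mask = mask[:, ::-1].copy()
    return image, mask
```

Detectron2's custom-mapper mechanism exists precisely so transforms like this are applied to image and annotations together.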

Christoph Rackwitz

As a workaround for now, I switched to Matterport Mask R-CNN with its sample nuclei dataset instead: https://github.com/matterport/Mask_RCNN/tree/master/samples/nucleus