I have a pre-trained model weight (as .pth) and its configuration (as .yaml), and I want to fine-tune this model on my downstream task. The only problem is that I have 1 class while the pre-trained model has 5 classes, and when I fine-tune the model with Detectron2, it gives me results for all 5 classes instead of my 1 class. How can I deal with that scenario?
This is the exact tutorial I am following, but instead of training on all 5 classes as thing_classes = ['None', 'text', 'title', 'list', 'table', 'figure'], I want to train on just one class, ['text']. The author has answered, but it did not help me: when I got the results during testing, I got results for all 5 classes.
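To make the goal concrete, this is roughly the single-class registration I have in mind (a minimal sketch; the dataset name and loader here are placeholders, not my actual registration code):

from detectron2.data import DatasetCatalog, MetadataCatalog

def load_text_dicts():
    # Placeholder loader: returns my annotations in Detectron2's standard
    # dataset-dict format, with every category_id set to 0.
    return []

DatasetCatalog.register("my_text_train", load_text_dicts)
# One thing class instead of the tutorial's five.
MetadataCatalog.get("my_text_train").thing_classes = ["text"]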
Pre-trained Model Weight | Pre-trained Model Config
I have set the 'category_id' of every instance to 0 (because I have just 1 class).
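For reference, each record in my registered dataset looks roughly like this (the file name and box values are illustrative; the point is that category_id is always 0):

from detectron2.structures import BoxMode

record = {
    "file_name": "images/page_001.png",  # illustrative path
    "image_id": 0,
    "height": 1000,
    "width": 800,
    "annotations": [
        {
            "bbox": [100.0, 150.0, 400.0, 300.0],  # illustrative box
            "bbox_mode": BoxMode.XYXY_ABS,
            "category_id": 0,  # always 0, since I only have the 'text' class
        }
    ],
}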
Below is the code where I have registered the data and set everything up; there is no problem with training, and the model trains well.
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor, DefaultTrainer
!wget -O ./faster_rcnn_R_50_FPN_3x.pth 'https://www.dropbox.com/s/dgy9c10wykk4lq4/model_final.pth?dl=1'
!wget -O ./faster_rcnn_R_50_FPN_3x.yaml 'https://www.dropbox.com/s/f3b12qc4hc0yh4m/config.yml?dl=1'
cfg = get_cfg()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # predict just one class
cfg.merge_from_file("./faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.WEIGHTS = './faster_rcnn_R_50_FPN_3x.pth'  # LayoutParser pre-trained weights
cfg.SOLVER.IMS_PER_BATCH = 4
cfg.SOLVER.BASE_LR = 0.0025
cfg.SOLVER.MAX_ITER = 50  # adjust up if val mAP is still rising, adjust down if overfitting
cfg.SOLVER.GAMMA = 0.05
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 4
cfg.DATASETS.TRAIN = (Data_Resister_training,)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
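Continuing from the snippet above, this is roughly how I check the results after training (the test image path and score threshold are illustrative); in my case the predicted class IDs still cover all 5 pre-trained classes rather than only 0:

import cv2

cfg.MODEL.WEIGHTS = "./output/model_final.pth"  # weights written by DefaultTrainer (default OUTPUT_DIR)
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5     # illustrative threshold
predictor = DefaultPredictor(cfg)

im = cv2.imread("./test_image.png")             # illustrative test image
outputs = predictor(im)
print(outputs["instances"].pred_classes)        # still shows IDs 0-4, not just 0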