I have a COCO-style annotation JSON and a folder with the relevant images. My annotations contain not only the bounding boxes (as expected for YOLO) but also masks (which, as far as I understand, should provide better segmentation because they are more precise than bboxes).
I understand that I need to convert the COCO-style JSON to one txt file per image and separate the files into train/val folders, as described for example by @Mike B here, where each row represents one object and has five values: class, x_center, y_center, width, height.
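For context, this is roughly the conversion I have in mind (a minimal sketch; the file names `annotations.json` and `labels/` are placeholders for my own paths, and I am assuming the standard COCO fields `images`, `annotations`, and `categories`):

```python
import json
from pathlib import Path


def coco_bbox_to_yolo(bbox, img_w, img_h):
    """Convert a COCO bbox [x_min, y_min, width, height] in pixels
    to YOLO [x_center, y_center, width, height] normalized to 0-1."""
    x, y, w, h = bbox
    return [(x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h]


def convert(annotations_path, out_dir):
    coco = json.loads(Path(annotations_path).read_text())
    images = {img["id"]: img for img in coco["images"]}

    # COCO category ids can be sparse (e.g. 1, 3, 7);
    # YOLO expects contiguous 0-based class indices.
    cat_ids = sorted(c["id"] for c in coco["categories"])
    cat_map = {cid: i for i, cid in enumerate(cat_ids)}

    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for ann in coco["annotations"]:
        img = images[ann["image_id"]]
        cls = cat_map[ann["category_id"]]
        xc, yc, w, h = coco_bbox_to_yolo(ann["bbox"], img["width"], img["height"])
        # One txt file per image; one row per object.
        label_file = out_dir / (Path(img["file_name"]).stem + ".txt")
        with label_file.open("a") as f:
            f.write(f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}\n")


# Example placeholder paths:
# convert("annotations.json", "labels")
```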
But I am not sure whether there is a YOLO version where I can do segmentation using the masks that I have annotated (again, with custom classes). I am also not sure whether the masks are even necessary, given the good results YOLO achieves with boxes alone.
Any advice in this direction would be appreciated.