
I want to train an object detection model to detect chickens that are only on the water reservoir (like the one in the picture), so I'm currently annotating chickens in images. Since I don't want to detect any chickens in the field, I'm not annotating them. But I don't want to confuse my model by feeding it non-annotated chickens during training. Is there any problem with just adding a cover (which will be black), like the one in the second image, to all training and testing images?

Edit: I don't want to annotate chickens on the ground because each annotation costs me money. This is why I'm thinking of adding this cover.

Not covered

Covered
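
For illustration, a minimal sketch of how such a cover could be applied consistently to every training and test image with OpenCV, by keeping only the reservoir region and blacking out everything else. The polygon coordinates and file names are placeholders, not values from the question:

```python
# Minimal sketch: black out everything outside the reservoir region.
# The polygon coordinates and file names below are placeholders.
import cv2
import numpy as np

image = cv2.imread("frame.jpg")

# Hypothetical polygon outlining the reservoir, in pixel coordinates.
reservoir_polygon = np.array(
    [[120, 80], [620, 80], [620, 360], [120, 360]], dtype=np.int32
)

# Build a mask that is white inside the reservoir and black elsewhere.
mask = np.zeros(image.shape[:2], dtype=np.uint8)
cv2.fillPoly(mask, [reservoir_polygon], 255)

# Apply the same mask to every training and test image, then save the result.
covered = cv2.bitwise_and(image, image, mask=mask)
cv2.imwrite("frame_covered.jpg", covered)
```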

João Gondim
  • I don't know how the blue box will impact the training, or if some other masking approach would work, and I look forward to an answer to this question. However, one alternative approach I would consider is to have 2 classes and annotate both: chickens_on_reservoir and chickens_in_field. Then the model would be forced to differentiate. – j2abro Jan 19 '21 at 19:16
  • The point here is the cost (money) of making annotations. Since I only want chickens at one specific position, I'd be wasting time and money annotating objects I won't use. – João Gondim Jan 19 '21 at 19:48

1 Answer


Train on one class and label as many pictures as you can, including the chickens on the ground. At deployment, filter the detections down to the chickens of interest based on the size of the bounding box; apparently the ones you care about appear larger. Don't use covers.
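
A minimal sketch of that post-detection filter, assuming detections come back as (x1, y1, x2, y2, score) tuples and using a made-up area threshold:

```python
# Minimal sketch of the size-based filter suggested above. The detection
# format and the minimum area threshold are assumptions for illustration.

MIN_AREA = 40 * 40  # hypothetical minimum bounding-box area in pixels


def filter_by_size(detections, min_area=MIN_AREA):
    """Keep only detections whose box area is at least min_area.

    detections: list of (x1, y1, x2, y2, score) tuples.
    """
    kept = []
    for x1, y1, x2, y2, score in detections:
        area = (x2 - x1) * (y2 - y1)
        if area >= min_area:
            kept.append((x1, y1, x2, y2, score))
    return kept
```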

  • The problem here is that I don't want to spend money annotating chickens I won't use. Each bounding box for a chicken is a cost for me. Hence the cover idea. Why shouldn't I use them? – João Gondim Jan 21 '21 at 12:28
  • If your dataset is standard and if there is a part of the images that you are not interested in, then crop out all those parts and solve the problem upstream. Therefore, only label the chickens above the reservoir you are interested in. IMHO: maybe you can also avoid the use of deep learning and use classic and simpler image processing techniques – Francesco Rossi Jan 21 '21 at 13:55
  • The dataset is yet to be fully annotated; I'm paying for each bounding box annotated. I have the option of cropping the image and using only the part with the reservoir, but I would lose the absolute distances that are measured in the original image (see the sketch after these comments for one way to map crop coordinates back). These chicken images are not the actual problem; I'm not allowed to show the real images, but the idea is the same: the objects of interest are in one part of the images, while others appear but are not what I want. – João Gondim Jan 21 '21 at 15:03
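
Following up on the cropping idea in these comments, a minimal sketch (with a placeholder crop window) of how absolute coordinates in the original image could be recovered by adding the crop offset back to any box detected in the crop:

```python
# Minimal sketch: train/annotate on a fixed crop of the reservoir, then map
# detections back to original-image coordinates. The crop window is a placeholder.

CROP_X, CROP_Y, CROP_W, CROP_H = 100, 50, 600, 400  # hypothetical reservoir window


def crop_for_training(image):
    """Return the reservoir crop used for annotation and training (NumPy image)."""
    return image[CROP_Y:CROP_Y + CROP_H, CROP_X:CROP_X + CROP_W]


def to_original_coords(box):
    """Map a box (x1, y1, x2, y2) detected in the crop back to the full image."""
    x1, y1, x2, y2 = box
    return (x1 + CROP_X, y1 + CROP_Y, x2 + CROP_X, y2 + CROP_Y)
```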