
Suppose I have an array of sensors that lets me estimate my pose relative to a fixed rectangular marker. From that pose I can predict what the marker's contour should look like in the camera image. How might I use this prediction to detect the contour more reliably?

The problem I'm trying to overcome is that the marker is sometimes occluded, for example by a line cutting across it. That leaves me with two contour fragments which, if merged, would yield the marker. I've tried morphological opening and closing to bridge the gap, but neither is robust across different lighting conditions.
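For concreteness, here is the kind of merge step I have in mind; a minimal Python/OpenCV sketch that assumes the marker is convex, with placeholder names throughout:

```python
import numpy as np
import cv2

def merge_fragments(contours):
    """Merge contour fragments into one candidate marker outline.

    Since a rectangular marker is convex, the convex hull over all
    fragment points is a reasonable proxy for the outline of a marker
    that an occluding line has split in two.
    """
    points = np.vstack([c.reshape(-1, 2) for c in contours])
    return cv2.convexHull(points)

# Hypothetical usage on fragments found inside the predicted region:
# contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
#                                cv2.CHAIN_APPROX_SIMPLE)
# hull = merge_fragments(contours)
# quad = cv2.approxPolyDP(hull, 0.02 * cv2.arcLength(hull, True), True)
```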

One approach I'm considering is to take the predicted contour and convolve it locally with the image gradient to recover my true pose.
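Roughly, I mean something like the following sketch (Python/OpenCV, placeholder names): sample points along each candidate outline near the predicted pose, score each candidate by the total gradient magnitude underneath it, and keep the best-scoring one.

```python
import numpy as np
import cv2

def gradient_magnitude(gray):
    """Precompute the edge-strength map once per frame."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)

def contour_score(mag, contour_pts):
    """Sum the gradient magnitude along an (N, 2) array of (x, y) points."""
    xs = np.clip(contour_pts[:, 0].astype(int), 0, mag.shape[1] - 1)
    ys = np.clip(contour_pts[:, 1].astype(int), 0, mag.shape[0] - 1)
    return float(mag[ys, xs].sum())

# mag = gradient_magnitude(gray)
# best = max(candidate_outlines, key=lambda pts: contour_score(mag, pts))
```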

Any thoughts or advice?

Nezo

1 Answer


The obvious advantage of having a pose estimate is that it restricts the image region in which you need to search for your target.
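For example, you could project the marker's 3D corners through the predicted pose and pad their bounding box. A minimal Python/OpenCV sketch, where the camera matrix K, distortion coefficients dist, marker size, and margin are whatever your calibration and setup provide:

```python
import numpy as np
import cv2

def predicted_roi(rvec, tvec, K, dist, marker_size, margin=20):
    """Project the marker's 3D corners and return a padded bounding box."""
    s = marker_size / 2.0
    corners3d = np.float32([[-s, -s, 0], [ s, -s, 0],
                            [ s,  s, 0], [-s,  s, 0]])
    pts, _ = cv2.projectPoints(corners3d, rvec, tvec, K, dist)
    pts = pts.reshape(-1, 2)
    x0, y0 = (pts.min(axis=0) - margin).astype(int)
    x1, y1 = (pts.max(axis=0) + margin).astype(int)
    return x0, y0, x1, y1

# x0, y0, x1, y1 = predicted_roi(rvec, tvec, K, dist, marker_size=0.10)
# Clip to the image bounds, then search only inside image[y0:y1, x0:x1].
```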

Next, if your problem is occlusion, you need to model it explicitly rather than paper it over with image-processing tricks: add to your detector's objective function a term that expresses what your target may look like when partially occluded. This can be either an explicit "occluded appearance" model, or an implicit one, e.g. an algorithm that can recognize visible portions of the target independently of the whole.
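As one minimal sketch of the implicit variant (Python/NumPy; the edge threshold and minimum-visibility fraction are placeholders you would tune): instead of summing edge strength over the whole outline, count the fraction of outline points that have edge support, so a marker half-covered by a line can still match well on its visible half.

```python
import numpy as np

def occlusion_aware_score(mag, contour_pts, edge_thresh=30.0, min_visible=0.5):
    """Score a candidate outline by its fraction of visible edge support.

    mag: precomputed gradient-magnitude image. A point is 'supported'
    if the edge response under it exceeds edge_thresh; the candidate is
    rejected outright if fewer than min_visible of its points are
    supported, which tolerates partial occlusion without rewarding
    outlines that have almost no image evidence at all.
    """
    xs = np.clip(contour_pts[:, 0].astype(int), 0, mag.shape[1] - 1)
    ys = np.clip(contour_pts[:, 1].astype(int), 0, mag.shape[0] - 1)
    supported = mag[ys, xs] > edge_thresh
    frac = float(supported.mean())
    return frac if frac >= min_visible else 0.0
```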

Francesco Callari