
Starting from a ground-truth initial bounding box, I calculate a foreground mask and use cv2.goodFeaturesToTrack to get points that I can follow with cv2.calcOpticalFlowPyrLK. I then derive the bounding box as the axis-aligned rectangle through the outermost points, i.e. the tight bounding box of the point set (roughly as described in "How to efficiently find the bounding box of a collection of points?").
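
A minimal sketch of that pipeline, assuming prev_gray and gray are consecutive grayscale frames, fg_mask is an 8-bit foreground mask, and bbox is the ground-truth (x, y, w, h); the detector parameters are placeholder values:

    import cv2
    import numpy as np

    x, y, w, h = bbox
    roi_mask = np.zeros_like(fg_mask)
    roi_mask[y:y + h, x:x + w] = fg_mask[y:y + h, x:x + w]  # seed only inside the box

    # Corner features on the masked person (None if nothing is found)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=7,
                                  mask=roi_mask)

    # Follow the points into the next frame
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = new_pts[status.flatten() == 1]

    # Tight axis-aligned box through the outermost surviving points
    x, y, w, h = cv2.boundingRect(good.reshape(-1, 2).astype(np.float32))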

However, every now and then I have to recalculate the goodFeaturesToTrack points, otherwise the person "loses" all of their points over time.

When recalculating, points may land on other people if they happen to stand within the bounding box of the person being tracked; those people are then followed too. As a result, my bounding box becomes useless after such a recalculation. What are some methods to avoid this behavior?

I am looking for resources and general explanations and not specific implementations.

Ideas I had

  1. Take the ratio of the previous bounding box size to the current bounding box size into account and ignore the update if the size changes too much (see the sketch after this list).
  2. Take the ratio of the previous fill level (fraction of white pixels) of the foreground mask to the current fill level into account. Do not recalculate the bounding box while the foreground mask is too full; other people are probably crossing the box.
  3. Calculate a general movement vector for the bounding box from the median of all point displacements computed with optical flow, and only allow box changes consistent with that vector, to avoid rapid jumps (also covered in the sketch below).
  4. Filter the newly found goodFeaturesToTrack points using some additional metric.
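
A minimal sketch of how ideas 1 and 3 could gate a box update, assuming old_box and new_box are (x, y, w, h) tuples and pts/new_pts are the point arrays before and after calcOpticalFlowPyrLK; the thresholds are arbitrary placeholders:

    import numpy as np

    def gated_box_update(old_box, new_box, pts, new_pts,
                         max_area_ratio=1.5, max_jump=2.0):
        ox, oy, ow, oh = old_box
        nx, ny, nw, nh = new_box

        # Idea 1: reject updates whose area changes too much in one frame
        ratio = (nw * nh) / float(ow * oh)
        if ratio > max_area_ratio or ratio < 1.0 / max_area_ratio:
            return old_box

        # Idea 3: the median point displacement is a robust motion vector;
        # reject box centers that move inconsistently with the points
        flow = np.median(new_pts.reshape(-1, 2) - pts.reshape(-1, 2), axis=0)
        shift = np.array([(nx + nw / 2.0) - (ox + ow / 2.0),
                          (ny + nh / 2.0) - (oy + oh / 2.0)])
        if np.linalg.norm(shift - flow) > max_jump * (np.linalg.norm(flow) + 1.0):
            return old_box

        return new_box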

In general, I guess I am looking for a method that bases the new goodFeaturesToTrack points more strongly on the previous ones, or on the points derived from them via optical flow.
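
One hedged sketch of that: restrict the re-seed mask to small disks around the points that survived optical flow, so cv2.goodFeaturesToTrack can only return features near the person already being followed (the disk radius here is an arbitrary choice):

    import cv2
    import numpy as np

    def reseed_near_points(gray, tracked_pts, radius=15, max_corners=100):
        # Mask containing only small disks around the surviving points
        mask = np.zeros(gray.shape[:2], dtype=np.uint8)
        for px, py in tracked_pts.reshape(-1, 2):
            cv2.circle(mask, (int(px), int(py)), radius, 255, -1)
        # New features can only appear inside those disks
        return cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                       qualityLevel=0.01, minDistance=7,
                                       mask=mask)

This keeps new features attached to the tracked person even when another person stands inside the bounding box, at the cost of never recovering points on body parts that have lost all of their features.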
