
I am working with a video stream: I take one frame, then use smaller sub-sections of that image as templates to search for the same objects in the next frame.

With synthetic data in a controlled environment it works well.

When I use real pictures, however, the matches are not what I am looking for.

I am using normalized cross-correlation, but I believe changing lighting conditions are responsible for the false matches.

How can I get around this hurdle? Am I using the wrong function?

Any information would be helpful.
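For reference, here is a minimal NumPy sketch of the kind of matching described above, zero-mean normalized cross-correlation (the function name and synthetic data are illustrative, not the actual code). Because each patch and the template are mean-subtracted and scale-normalized, a uniform gain/offset change in brightness leaves the score unchanged:

```python
import numpy as np

def ncc_match(image, template):
    """Zero-mean normalized cross-correlation of a template over
    every valid position in a grayscale image. Returns a response
    map; the peak marks the best match, with scores in [-1, 1]."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            out[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    return out

# A uniformly brightened copy of the scene still scores ~1.0 at the
# true location, because zero-mean normalization cancels additive
# and multiplicative intensity shifts.
rng = np.random.default_rng(0)
img = rng.random((40, 40))
tmpl = img[10:20, 15:25].copy()
resp = ncc_match(img * 1.5 + 0.2, tmpl)  # simulate a lighting change
peak = np.unravel_index(resp.argmax(), resp.shape)
```

Note this only cancels *uniform* lighting changes; spatially varying illumination (shadows, specular highlights) will still break plain template matching, which is what preprocessing like histogram equalization is meant to mitigate. In OpenCV this corresponds to `cv2.matchTemplate` with `cv2.TM_CCOEFF_NORMED`.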

JoeCodeCreations
    To normalize lighting conditions, look at equalizeHist, CLAHE, and bioinspired::Retina. Also, you could try features other than raw pixels, like LBPH. Template matching is just the most primitive tool in the box. – berak Jan 29 '15 at 14:31
    Welcome to the real world, which makes computer vision hard sometimes ;) Normalization is a nice technique for handling lighting conditions, but don't forget to apply the same normalization when you create your template. – Micka Jan 29 '15 at 14:43
    The difference in the object's appearance between two neighbouring frames will be negligible (assuming the sampling frequency is high enough), but may be significant over long periods of time. So updating the template (the smaller region) iteratively is also recommended. – Kornel Jan 29 '15 at 15:01
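Following up on berak's suggestion: a minimal NumPy sketch of *global* histogram equalization, which is what `cv2.equalizeHist` does (CLAHE is the tile-based, contrast-limited variant, `cv2.createCLAHE` in OpenCV). The function name and synthetic data below are illustrative. Per Micka's comment, the same equalization must be applied to both the frame and the template before matching:

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image:
    build the intensity CDF and use it as a lookup table so the
    output spreads over the full 0..255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_m = np.ma.masked_equal(cdf, 0)          # ignore unused bins
    lut = (cdf_m - cdf_m.min()) * 255 / (cdf_m.max() - cdf_m.min())
    lut = np.ma.filled(lut, 0).astype(np.uint8)
    return lut[img]                              # apply the lookup table

# A dark, low-contrast image gets stretched to the full 0..255 range,
# so two frames taken under different lighting look more alike.
rng = np.random.default_rng(1)
dark = rng.integers(40, 90, size=(32, 32), dtype=np.uint8)
eq = equalize_hist(dark)
```

Combined with Kornel's point, a practical loop would be: equalize the new frame, match, then re-crop the template from the matched location so the template tracks gradual appearance changes.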

0 Answers