I am processing a video stream: I take one frame, extract smaller sub-sections of that image as templates, and then search for the same objects in the next frame.
When using synthetic data in a controlled environment it works well.
When I take real pictures, however, the matches are not what I am looking for. I am using cross-correlation with normalization, but I believe my lighting conditions are playing a part in the false matches.
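To illustrate what I mean, here is a minimal pure-NumPy sketch of the matching step (my assumption of how the pipeline looks, not my exact code). Note that this version is *zero-mean* normalized cross-correlation (ZNCC): subtracting each patch's mean before normalizing makes the score invariant to uniform brightness and contrast changes, whereas plain normalized cross-correlation is not.

```python
import numpy as np

def zncc(patch, template):
    """Zero-mean normalized cross-correlation between two equal-size patches.

    Subtracting the means and dividing by the norms makes the score
    invariant to an affine lighting change (gain * pixel + offset),
    which plain cross-correlation is sensitive to.
    """
    p = patch.astype(np.float64) - patch.mean()
    t = template.astype(np.float64) - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    if denom == 0:
        return 0.0
    return float((p * t).sum() / denom)

def match_template(image, template):
    """Slide the template over the image; return ((row, col), score) of the best match."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            s = zncc(image[r:r + h, c:c + w], template)
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos, best
```

With this scoring, a template cut from one frame still matches the same region in a next frame that is uniformly darker or brighter, since the mean subtraction cancels the offset. (In OpenCV terms, this corresponds to `cv2.TM_CCOEFF_NORMED` rather than `cv2.TM_CCORR_NORMED` in `cv2.matchTemplate`.)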
How can I get around this hurdle? Am I using the wrong function?
Any information would be helpful.