
My project is centered on positioning several small objects with a stationary camera. I've drawn some crisp, simple graphical pattern images (like this), printed them out, and am trying to detect them in the camera image. My straightforward method:

  1. Color masking and blob detection for primary segmentation, to get possible positions of the patterns [this works fine, I guess].
  2. Run SIFT/SURF/ORB detection on these small image patches to compare them with a pattern stored in a file.
  3. Collect the coordinates of the keypoints in the found matches, then compute the homography to get the precise position/rotation of the pattern in the image (see the sketch after this list).
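
For step 3 I do roughly the following (a minimal sketch; it assumes the matching code shown further down, and the 5.0 px RANSAC reprojection threshold is just a placeholder, not a tuned value):

import cv2
import numpy as np

# Coordinates of matched keypoints: query = stored pattern, train = image patch
src_pts = np.float32([kp_r[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst_pts = np.float32([kp_o[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier matches while estimating the 3x3 homography H;
# the pattern's position and rotation in the frame can then be read off H
H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)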

I write in Python, with OpenCV. My initialization code for ORB + BFMatcher is:

pt_detector = cv2.ORB()  # OpenCV 2.4.x API (cv2.ORB_create() in 3.x+)
pt_matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # Hamming norm for ORB's binary descriptors

And for SURF + FLANN I write:

pt_detector = cv2.SURF(400)  # Hessian threshold 400; OpenCV 2.4.x API
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)  # kd-tree index suits float descriptors
search_params = dict(checks=50)
pt_matcher = cv2.FlannBasedMatcher(index_params, search_params)
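
(As an aside: the kd-tree index above suits float descriptors like SIFT/SURF; my understanding is that if I fed ORB's binary descriptors to FLANN, the LSH index would be needed instead. A sketch with the parameter values suggested in the OpenCV documentation, untested here:)

FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6,       # number of hash tables
                    key_size=12,          # hash key bits
                    multi_probe_level=1)  # 0 would be classical LSH
pt_matcher = cv2.FlannBasedMatcher(index_params, search_params)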

And then I simply do:

kp_r, des_r = pt_detector.detectAndCompute(pattern, None)  # reference pattern from file
kp_o, des_o = pt_detector.detectAndCompute(obj_res, None)  # segmented patch from the camera frame
matches = pt_matcher.match(des_r, des_o)  # query = pattern, train = patch
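
I then pick the "10 best matches" mentioned below by sorting on descriptor distance, roughly like this:

matches = sorted(matches, key=lambda m: m.distance)  # lower distance = stronger match
best_matches = matches[:10]  # the ten strongest, used for drawing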

The problem: the detectors find matches all over the sample and the template, and though they generally manage to detect the pattern, the orientation comes out wrong.

Here's an example of matching between the template image (left) and the pattern actually found in the camera frame (right). The camera image is masked and thresholded, of course. These 10 best matches from SIFT+FLANN are plainly terrible. I've tried SURF and SIFT with the FLANN matcher, and ORB+BFMatcher, all on default settings, with no success. I suppose the problem lies in the parameters of the descriptor and the matcher.

Can anyone tell me how I should set up the descriptor and matcher for robust matching of simple patterns? Or is there another approach to this task?

Nolemocius
  • the image given on the right is not the **same** as the one on the left. It is chopped off around the edges. Hence, I would say OpenCV did a pretty good job. Try rotating the image on the left by a certain angle and running the same code – Jeru Luke Feb 08 '17 at 15:34
  • SIFT and ORB are supposed to be rotation invariant, aren't they? I can play with bracketing/masking for the right image so that it has more space around it, but the quality won't get much better. This is why I use simple patterns, distinguishable even at low quality. What is the right way to compare the low-quality pattern picture to its high-quality original? – Nolemocius Feb 10 '17 at 15:01
  • They are supposed to be rotation invariant, but the images you have used are not the SAME. The second image is rotated, but it is also blurry and has its edges chopped off. Try performing the same test on an image that is only rotated... – Jeru Luke Feb 10 '17 at 16:14
  • What was your motivation for using 10 matches (rather than any other number)? What happens if you overplot *all* keypoints? – jtlz2 Jul 09 '18 at 09:02

0 Answers