
I have built a simple algorithm for visual mark detection with OpenCV in Python, which uses OpenCV's ORB detector as its second step. I use ORB together with the BFMatcher; the code is borrowed from this project: https://rdmilligan.wordpress.com/2015/03/01/road-sign-detection-using-opencv-orb/ The detection part of the code looks like this:

# find the keypoints and descriptors for the object
kp_o, des_o = orb.detectAndCompute(obj, None)
if des_o is None or len(kp_o) == 0: continue

# match descriptors
matches = bf.match(des_r, des_o)

Then there is a check on the number of feature matches, so the code can tell whether the template image matches the query. The question is: if it does, how do I get the exact position and rotation angle of the found match?

Nolemocius

1 Answer


The position is already known at this step; it is stored in the variables x and y. To find the rotation, blur both the template and the source, then either (a) generate 360 rotated versions of the blurred template and pick the one with the smallest difference from the region of interest, or (b) convert both images to polar coordinates and shift one image against the other until you get the best match (the shift is the angle you want to rotate by).

ivan_a
  • Which `x` and `y` variables do you mean, exactly? Also, doing the rotation comparison is like performing the match again, which seems somewhat redundant. – Nolemocius Jan 31 '17 at 17:27
  • Try running the program you linked to and check the values of the `x` and `y` variables at run time. – ivan_a Feb 01 '17 at 05:58