
I’m currently working on pattern recognition using SURF in OpenCV. What I have so far: I’ve written a C# program where I can select a source image and a template I want to find. I then pass both pictures to a C++ DLL in which I’ve implemented the OpenCV SURF detector; it returns all the keypoints and matches back to my C# program, where I try to draw a rectangle around my matches.

This picture shows a source image and a template with its keypoints and matches. I have also tried to calculate a rectangle around my matches.

Now my question: is there a common measure of accuracy in pattern recognition? For example, the number of matches in proportion to the number of keypoints in the template? Or maybe the size difference between my match rectangle and the original size of the template image? What are common parameters used to decide whether a match is a “real” and “good” match?

Edit: to make my question clearer: I have a bunch of match points that are already thresholded by the minHessian and distance values. After that I draw something like a rectangle around my match points, as you can see in my picture. This is my MATCH. How can I tell now how good this match is? I'm already calculating angle, size and color differences between the found match and my template, but I think that is much too vague.
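(For illustration only, a minimal sketch of the rectangle step described above, assuming the rectangle is simply the axis-aligned bounding box of the matched scene points; matchedScenePts is a hypothetical name, not from the question.)

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Hypothetical helper: axis-aligned bounding box of the matched points
cv::Rect matchRectangle(const std::vector<cv::Point2f>& matchedScenePts)
{
    return cv::boundingRect(matchedScenePts);
}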

Mickey

3 Answers


I am not 100% sure what you are really asking, because what you call a "match" is vague. But since you said you have already matched your SURF points and mentioned pattern recognition and the use of a template, I am assuming that, ultimately, you want to localize the template in your image, and you are asking for a localization score to decide whether or not you found the template in the image.

This is a challenging problem, and I am not aware of a good, always-appropriate solution having been found yet.

However, given your approach, what you could do is analyze the density of matched points in your image: consider local or global maxima as possible locations for your template (global if you know your template appears only once in the image, local if it can appear multiple times) and use a threshold on the density to decide whether or not the template appears. A sketch of the algorithm could be something like this (a C++ version follows the list):

  1. Allocate a floating point density map the size of your image
  2. Compute the density map by increasing it by a fixed amount in the neighborhood of each matched point (for instance, for each matched point, add a fixed value epsilon in the rectangle you are displaying in your question)
  3. Find the global or local maxima of the density map (the global one can be found using the OpenCV function minMaxLoc, and local maxima can be found using mathematical morphology, e.g. How can I find local maxima in an image in MATLAB?)
  4. For each maximum obtained, compare the corresponding density value to a threshold tau to decide whether or not your template is there
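
A minimal C++/OpenCV sketch of these four steps, using the global maximum only; the names matchedPoints, rectSize, epsilon and tau are illustrative assumptions, not part of the original answer:

#include <opencv2/core.hpp>
#include <algorithm>
#include <vector>

// Returns the best template location if the local match density
// exceeds tau, or (-1,-1) if the template is considered absent.
cv::Point locateTemplate(const std::vector<cv::Point2f>& matchedPoints,
                         cv::Size imageSize, cv::Size rectSize,
                         float epsilon, float tau)
{
    // Step 1: allocate a floating point density map of the image size
    cv::Mat density = cv::Mat::zeros(imageSize, CV_32FC1);

    // Step 2: for each matched point, add epsilon in a rectangle
    // centered on it (clipped to the image borders)
    for (const cv::Point2f& p : matchedPoints) {
        int x0 = std::max(0, (int)p.x - rectSize.width / 2);
        int y0 = std::max(0, (int)p.y - rectSize.height / 2);
        int x1 = std::min(imageSize.width,  (int)p.x + rectSize.width / 2);
        int y1 = std::min(imageSize.height, (int)p.y + rectSize.height / 2);
        cv::Mat roi = density(cv::Rect(x0, y0, x1 - x0, y1 - y0));
        roi += epsilon;
    }

    // Step 3: find the global maximum of the density map
    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(density, nullptr, &maxVal, nullptr, &maxLoc);

    // Step 4: compare the density value to the threshold tau
    return (maxVal >= tau) ? maxLoc : cv::Point(-1, -1);
}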

If you are into research articles, you can check the following for improvements on this basic algorithm:

EDIT: another way to address your problem is to try to remove accidentally matched points, keeping only those that truly correspond to your template image. This can be done by enforcing a consistency constraint between nearby matched points. The following research article presents such an approach: "Context-dependent logo matching and retrieval" by H. Sahbi, L. Ballan, G. Serra, and A. Del Bimbo, 2010 (however, this may require some background knowledge...).

Hope this helps.

BConic
  • Yes, that's what I'm asking for. I'll take a look at your algorithm and will read those two PDFs to see if they help me out. Thanks so far :) – Mickey Feb 14 '14 at 09:07
  • Your algorithm is a good idea. I can already calculate scale and angle differences, so I'll let the user set thresholds for those two parameters and also for the density, and if all values are beneath those thresholds I'll return whether I've found a match or not. The bounty is open three more days; I'll try this and if it solves my question I will let you know. – Mickey Feb 14 '14 at 14:29
  • Ok, note that the way you update the density map is an important thing to consider. My answer mentions that the size of the neighborhood where you increase the density map can be defined using the scale/orientation of the current matched point. However, this might generate multiple detections even for a single occurrence of the template. Another legitimate approach may be to use the size of the template centered on the current point. Check what works best for you. – BConic Feb 14 '14 at 14:54

Well, when you compare points you use some metric, so each comparison yields a distance, and the smaller this distance, the better the match.

Example of code:

BFMatcher matcher(NORM_L2, true); // brute-force matcher, L2 norm, cross-check enabled
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
// drop matches whose descriptor distance is too large
matches.erase(std::remove_if(matches.begin(), matches.end(), bad_dist), matches.end());

where bad_dist is defined as

bool bad_dist(const DMatch &m) {
    return m.distance > 150; // threshold on the descriptor distance
}

In this code I get rid of 'bad' matches.

silver_rocket
  • My code has two parameters: minHessian and distance. The second does exactly what you say. This is already one aspect I'm considering. – Mickey Feb 07 '14 at 14:47
  • Yep. The greater the value of `minHessian` you choose, the more robust the points you get, so recognition can be enhanced by this parameter as well. However, using the distance value is a common approach too. – silver_rocket Feb 10 '14 at 08:53
  • I'm already using those two parameters. I guess I have to ask my question slightly differently. Those two parameters say something about how good a MATCHPOINT is. My question was more like this: I have a bunch of match points that are already thresholded by the minHessian and distance values. After that I draw something like a rectangle around my match points, as you can see in my picture above. This is my MATCH. How can I tell now how good this match is? I'm already calculating angle, size and color differences between the found match and my template, but I think that is much too vague. – Mickey Feb 10 '14 at 09:19
  • Well, you see, when you have already found an object in the image (you calculate its contour / bounding rect), this means you already admit it is a good match, because you accepted good points to construct the object. If you want to know the difference between the match and the pattern, you should decide what defines this difference for you :) If it is just a difference in affine properties (angle, scaling factor, etc.), then you have to calculate the differences in these properties, as you have probably already tried. Sorry for the vague answer, but this is it. – silver_rocket Feb 11 '14 at 09:39

There are many ways to match two patterns in the same image; it is actually a very open topic in computer vision, because there is no globally best solution.

For instance, if you know your object can appear rotated (I'm not familiar with SURF, but I guess its descriptors are rotation invariant, like SIFT descriptors), you can estimate the rotation between the pattern in your training set and the pattern you just matched. A match with the minimum error will be a better match.
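To turn that idea into a single score, one option (a sketch under my own assumptions, not something this answer prescribes: it fits a full homography with RANSAC rather than just a rotation) is to use the fraction of matches consistent with one estimated transform:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Fraction of matches consistent with a single RANSAC-fitted homography;
// higher values indicate a more geometrically coherent (better) match.
double matchScore(const std::vector<cv::Point2f>& templatePts,
                  const std::vector<cv::Point2f>& scenePts)
{
    if (templatePts.size() < 4)
        return 0.0; // a homography needs at least 4 point pairs

    std::vector<unsigned char> inlierMask;
    cv::Mat H = cv::findHomography(templatePts, scenePts,
                                   cv::RANSAC, 3.0, inlierMask);
    if (H.empty())
        return 0.0;

    return (double)cv::countNonZero(inlierMask) / templatePts.size();
}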

I recommend you consult Computer Vision: Algorithms and Applications. It contains no code, but lots of useful techniques typically used in computer vision (most of them already implemented in OpenCV).

TiagoOliveira