I’m making an iOS application that will find instances of a smaller (similar) image inside a larger image. For example, something like:
The image we are searching inside
The image we are searching for
The matched image
The main things to consider are: the smallImage size will match the size of the target in the bigImage, but the object may be slightly obscured in the bigImage (so they won’t always be identical). Also, the images I am dealing with are quite a bit smaller than my examples here; the image that I am trying to match (the smallImage) is between 32 x 32 and 80 x 80 pixels, and the big image is around 1000 x 600 pixels. Other than potentially being slightly obscured, the smallImage will match the object in the big image in every way (size, color, rotation etc..)
I have tried out a few methods using OpenCV. Feature matching didn’t seem accurate enough and gave me hundreds of meaningless results, so I am trying template matching. My code looks something like:
cv::Mat ref = [bigImage CVMat];
cv::Mat tpl = [smallImage CVMat];
cv::Mat gref, gtpl;

// Convert both images to grayscale before matching
// (CV_LOAD_IMAGE_COLOR is an imread flag, not a colour-conversion code;
// use CV_RGBA2GRAY instead of CV_BGR2GRAY if the CVMat comes back as RGBA)
cv::cvtColor(ref, gref, CV_BGR2GRAY);
cv::cvtColor(tpl, gtpl, CV_BGR2GRAY);

cv::Mat res(ref.rows - tpl.rows + 1, ref.cols - tpl.cols + 1, CV_32FC1);
cv::matchTemplate(gref, gtpl, res, CV_TM_CCOEFF_NORMED);

// Zero out everything in the result below the tolerance
cv::threshold(res, res, [tolerance doubleValue], 1., CV_THRESH_TOZERO);

double minval, maxval, threshold = [tolerance doubleValue];
cv::Point minloc, maxloc;
cv::minMaxLoc(res, &minval, &maxval, &minloc, &maxloc);

if (maxval >= threshold) {
    // match found at maxloc
}
- bigImage is the large image in which we are trying to find the target
- smallImage is the image we are looking for within the bigImage
- tolerance is the tolerance for matches (between 0 and 1)
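For reference, the maxloc that comes back from minMaxLoc is the top-left corner of the best match in the bigImage, so the matched region is just the template size at that point, along the lines of this sketch (the helper name is made up for illustration):

#include <opencv2/core/core.hpp>

// The match location is the top-left corner; the matched region has the
// template's dimensions.
static cv::Rect matchedRect(const cv::Point &maxloc, const cv::Mat &tpl)
{
    return cv::Rect(maxloc.x, maxloc.y, tpl.cols, tpl.rows);
}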
This does work, but there are a few issues.
I originally tried using a full image of the object that I am trying to match (i.e. an image of the entire fridge), but I found it very inaccurate: when the tolerance was high it found nothing, and when it was low it found many incorrect matches.
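To give an idea of what I mean by "many incorrect matches": I count every peak in the result matrix above the tolerance, roughly like the sketch below (not my exact code; the floodFill call is just one way of knocking out a peak before looking for the next one, and the loDiff/upDiff values are made up):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Collect every location whose CV_TM_CCOEFF_NORMED score is above the tolerance.
static std::vector<cv::Point> collectMatches(const cv::Mat &res, double tolerance)
{
    cv::Mat scores = res.clone(); // floodFill below modifies it, so work on a copy
    std::vector<cv::Point> hits;
    for (;;) {
        double minval, maxval;
        cv::Point minloc, maxloc;
        cv::minMaxLoc(scores, &minval, &maxval, &minloc, &maxloc);
        if (maxval < tolerance)
            break;
        hits.push_back(maxloc);
        // Knock out the neighbourhood of this peak so the next iteration
        // finds the next-best location instead of the same one again.
        cv::floodFill(scores, maxloc, cv::Scalar(0), 0, cv::Scalar(0.1), cv::Scalar(1.0));
    }
    return hits;
}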
Next I tested out using smaller portions of the image, for example:
This increased the accuracy of finding the target in the big image, but it also produced a lot of incorrect matches.
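What I mean by "smaller portions" is cutting a patch (or a few patches) out of the smallImage and matching those instead of the whole thing. A rough sketch of the multi-patch version is below (not my actual code; the patches vector and the 20-pixel agreement radius are just for illustration):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Match several small patches cut from the template and only accept the result
// if each patch's best match lands close to the others.
static bool patchesAgree(const cv::Mat &gref,
                         const std::vector<cv::Mat> &patches,
                         double tolerance,
                         cv::Point &outLoc)
{
    std::vector<cv::Point> best;
    for (size_t i = 0; i < patches.size(); ++i) {
        cv::Mat res;
        cv::matchTemplate(gref, patches[i], res, CV_TM_CCOEFF_NORMED);
        double minval, maxval;
        cv::Point minloc, maxloc;
        cv::minMaxLoc(res, &minval, &maxval, &minloc, &maxloc);
        if (maxval < tolerance)
            return false; // this patch did not match anywhere
        best.push_back(maxloc);
    }
    // Require every patch's best match to fall within ~20 px of the first one,
    // since the patches all come from the same small object.
    for (size_t i = 1; i < best.size(); ++i) {
        cv::Point d = best[i] - best[0];
        if (d.x * d.x + d.y * d.y > 20 * 20)
            return false;
    }
    outLoc = best[0];
    return true;
}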
I have tried all the available matchTemplate methods from here, and they all return a large number of false matches, except CV_TM_CCOEFF_NORMED, which returns fewer matches (but still mostly false ones).
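One thing I should note about comparing the methods: for CV_TM_SQDIFF and CV_TM_SQDIFF_NORMED the best match is the minimum of the result, not the maximum, so the maxval check in my code only makes sense for the CCORR/CCOEFF variants. Roughly how I compared them (a sketch, not my exact code; gref and gtpl are the grayscale Mats from the code above):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cstdio>

// Run every matchTemplate method on the same pair of images and print the
// best score and its location. For the SQDIFF methods the best match is the
// minimum of the result; for the others it is the maximum.
static void compareMethods(const cv::Mat &gref, const cv::Mat &gtpl)
{
    const int methods[] = { CV_TM_SQDIFF, CV_TM_SQDIFF_NORMED, CV_TM_CCORR,
                            CV_TM_CCORR_NORMED, CV_TM_CCOEFF, CV_TM_CCOEFF_NORMED };
    for (int i = 0; i < 6; ++i) {
        cv::Mat res;
        cv::matchTemplate(gref, gtpl, res, methods[i]);
        double minval, maxval;
        cv::Point minloc, maxloc;
        cv::minMaxLoc(res, &minval, &maxval, &minloc, &maxloc);
        bool useMin = (methods[i] == CV_TM_SQDIFF || methods[i] == CV_TM_SQDIFF_NORMED);
        double score = useMin ? minval : maxval;
        cv::Point loc = useMin ? minloc : maxloc;
        printf("method %d: best score %f at (%d, %d)\n", methods[i], score, loc.x, loc.y);
    }
}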
How can I improve the accuracy of image matching using OpenCV in iOS?
Edit: I have googled loads; the most helpful posts are:
- Pattern Matching - Find reference object in second image
- Measure of accuracy in pattern recognition using SURF in OpenCV
- Template Matching
- Algorithm improvement for Coca-Cola can shape recognition
I can't find any suggestions on how to improve the accuracy.