
I want to use template matching in OpenCV to get the similarity of two images. As we all know, template matching is usually used to find a smaller image part inside a bigger one. Here is my question: I find that when the template image and the source image are the same size, the result matrix obtained from the function matchTemplate() is always 0, even if the two images are exactly the same.

Can template matching in OpenCV deal with two same-sized images?

wjz2047
    Could you show your code? I tried the official demo, it worked well when two pictures were the same. – pwwpche Jan 23 '15 at 03:21
  • @pwwpche, my code comes from the OpenCV tutorial: http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html. I don't know why I failed. – wjz2047 Jan 23 '15 at 06:48
  • @pwwpche, I figured it out. When I commented out the following line, `normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );`, it worked well. But could you explain the reason to me? Thanks. – wjz2047 Jan 23 '15 at 09:00
  • Template matching works fair enough for same-sized images, what is the error you're getting? Sample code/images would be helpful. – a-Jays Jan 23 '15 at 09:06
  • `normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() )` means that the `result` matrix will be masked by `Mat()` (so it will be masked by nothing), and scaled so that minimum value in the matrix is `0` and maximum value in the matrix is `1`. Still, this can't explain your problem, so please paste your image here and I will check what is the result matrix you've got. – pwwpche Jan 23 '15 at 09:28
  • @pwwpche, I can't paste my image because of low reputation. In fact, you can try any image; try the famous baboon.jpg, for example. – wjz2047 Jan 24 '15 at 07:23

1 Answer


Perhaps I should apologize first: the value of the matrix is indeed zero after normalization, as long as the two pictures are the same size. I was wrong about that. :)

Check out this page: OpenCV - Normalize

Part of the OpenCV source code:

void cv::normalize( InputArray _src, OutputArray _dst, double a, double b,
                    int norm_type, int rtype, InputArray _mask )
{
    Mat src = _src.getMat(), mask = _mask.getMat();

    double scale = 1, shift = 0;
    if( norm_type == CV_MINMAX )
    {
        double smin = 0, smax = 0;  //Records the maximum and minimum value in the _src matrix
        double dmin = MIN( a, b ), dmax = MAX( a, b );
        minMaxLoc( _src, &smin, &smax, 0, 0, mask );  //Find the minimum and maximum value
        scale = (dmax - dmin)*(smax - smin > DBL_EPSILON ? 1./(smax - smin) : 0);
        shift = dmin - smin*scale;
    }

    //...

    if( !mask.data )
        src.convertTo( dst, rtype, scale, shift );
    else
    {
        //...
    }
}

Since there is only one element in the `result` array, `smin = smax = result[0][0]`:

scale = (dmax - dmin)*(smax - smin > DBL_EPSILON ? 1./(smax - smin) : 0);
      = (1 - 0) * 0 = 0
shift = dmin - smin*scale
      = 0 - result[0][0] * 0
      = 0

After that, void Mat::convertTo(OutputArray m, int rtype, double alpha, double beta) uses the following formula (saturate_cast has nothing to do with your problem, so we can ignore it for now):

m(x,y) = saturate_cast<rType>( alpha * (*this)(x,y) + beta )

When you call normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() ), whatever the element in the matrix is, it will execute src.convertTo( dst, rtype, scale, shift ); with scale = 0, shift = 0. In this convertTo function,

alpha = 0, beta = 0
result[0][0] = result[0][0] * alpha + beta
             = result[0][0] * 0 + 0
             = 0

So, whatever the value in the result matrix is: as long as the image and the template are the same size, the result matrix will be 1x1, and after normalization it will always become [0].

pwwpche