7

I'm trying to find a small picture inside a big picture, and I used matchTemplate()

import cv2
import numpy as np

# raw strings: otherwise "\t" in "c:\template.jpg" is parsed as a tab
img = cv2.imread(r"c:\picture.jpg")
template = cv2.imread(r"c:\template.jpg")

result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
y, x = np.unravel_index(result.argmax(), result.shape)

Works fine: I always get the coords of the top-left corner, but it's only one point. If I have multiple matches in the big picture, how can I get all of them?

user2046488
  • See https://stackoverflow.com/questions/61779288/how-to-template-match-a-simple-2d-shape-in-opencv/61780200#61780200 and https://stackoverflow.com/questions/67368951/opencv-matchtemplate-and-np-where-keep-only-unique-values/67374288#67374288 – fmw42 Jul 05 '22 at 14:44

3 Answers

8

Here's how:

result = cv2.matchTemplate(img, template, cv2.TM_SQDIFF)

# to get the best match fast, use this:
(min_val, max_val, min_loc, max_loc) = cv2.minMaxLoc(result)
(x, y) = min_loc  # with TM_SQDIFF the best match is the minimum

# get all the matches:
result2 = np.reshape(result, result.shape[0] * result.shape[1])  # flatten to 1-D
sort = np.argsort(result2)
(y1, x1) = np.unravel_index(sort[0], result.shape)  # best match
(y2, x2) = np.unravel_index(sort[1], result.shape)  # second best match

This is not the fastest way, as the above sorts all the matches, even the totally wrong ones. If performance matters to you, you can use bottleneck's partsort function instead.
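NumPy's `argpartition` gives the same partial-sort effect without the extra dependency. A minimal sketch with a made-up score map (with TM_SQDIFF, lower scores are better matches):

```python
import numpy as np

# Toy "result" map as cv2.matchTemplate(..., cv2.TM_SQDIFF) might
# return it; the values here are invented for illustration.
result = np.array([[9.0, 1.0, 8.0],
                   [7.0, 6.0, 0.5],
                   [2.0, 5.0, 4.0]])

k = 3  # number of best matches to keep
flat = result.ravel()

# argpartition moves the k smallest scores to the front without a full sort
idx = np.argpartition(flat, k)[:k]
# sort only those k indices so the matches come out best-first
idx = idx[np.argsort(flat[idx])]

coords = [np.unravel_index(i, result.shape) for i in idx]  # (y, x) pairs
print(coords)
```

This only sorts `k` entries instead of the whole map, which is the same idea the bottleneck suggestion is getting at.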

shantanoo
b_m
  • Thanks! I have two questions 1. Why did you use CV_TM_SQDIFF method? 2. Please explain this string (min_x,max_y,minloc,maxloc) = cv2.minMaxLoc(result) – user2046488 Feb 06 '13 at 13:16
  • I used CV_TM_SQDIFF because I've modified my own code; the minMax line is left there by mistake; but that is the fast procedure for getting the best match, I'll edit the answer – b_m Feb 06 '13 at 13:36
  • could you please explain the code more? why `result.shape[0]*result.shape[1]` ? – Flash Thunder Mar 13 '19 at 10:36
  • This answer has problem as stated in the other answer, it returns too many unnecessary matches – gameon67 Aug 07 '20 at 05:16
6

@b_m's answer will work, but it will find way too many matches. The matching process slides the template across the image, comparing at EVERY PIXEL (or almost every pixel; the scan area is reduced by the size of the template). This means that in the vicinity of a good match, you get lots of other matches that are one pixel off. If you make an image of the matching results, you can see that you get lots of matches.

import cv2
import numpy as np

image = cv2.imread('smiley.png', cv2.IMREAD_COLOR )
template = cv2.imread('template.png', cv2.IMREAD_COLOR)

h, w = template.shape[:2]

method = cv2.TM_CCOEFF_NORMED

threshold = 0.95

res = cv2.matchTemplate(image, template, method)
# min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)

# res is a float map in [-1, 1]; scale and clip it into 0-255 for saving
cv2.imwrite('output.png', np.clip(255 * res, 0, 255).astype(np.uint8))

input image:

[image: smiley-face test image]

use the eyes as templates:

[image: eye template]

And look at the output. There are lots of white pixels near both eyes. You'll get quite a few high-scoring answers:

[image: matching-result map]

An alternative way to find multiple copies of a template in the same image is to modify the image by writing over the found areas and then matching again. But even better than that is to modify the results and re-run minMaxLoc. Both techniques are demonstrated in this answer.

bfris
1

You'll want to avoid using cv2.minMaxLoc(result) as this finds the single best result. What we want are multiple good results that are above a threshold.

Using non-maximum suppression is one way to find multiple matches within an image.

  import cv2
  import numpy as np
  from imutils.object_detection import non_max_suppression # pip install imutils

  # Load the image and template (img_path and template_path are your file paths)
  image = cv2.imread(img_path, cv2.IMREAD_COLOR)
  template = cv2.imread(template_path, cv2.IMREAD_COLOR)

  # Perform template matching
  result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)

  # Filter results to only good ones
  threshold = 0.90 # Larger values give fewer, but better matches.
  (yCoords, xCoords) = np.where(result >= threshold)

  # Perform non-maximum suppression.
  template_h, template_w = template.shape[:2]
  rects = []
  for (x, y) in zip(xCoords, yCoords):
    rects.append((x, y, x + template_w, y + template_h))
  pick = non_max_suppression(np.array(rects))

  # Optional: Visualize the results
  for (startX, startY, endX, endY) in pick:
    cv2.rectangle(image, (startX, startY), (endX, endY), (0, 255, 0), 2)
  cv2.imshow('Results', image)
  cv2.waitKey(0)

Explanation:

We do non-max suppression because there will be a lot of 'good' results around each match in the original image (e.g. shifting the template by 1 pixel often gives a good result, so you'd get a whole bunch of overlapping bounding boxes around each instance of the object in the original image). Non-max suppression will filter to one good match per image region.

Note that this non-max suppression doesn't use the scores directly, so the approach in the answer mentioned above may be better suited when using matchTemplate.
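If you'd rather not depend on imutils, the idea behind it can be sketched in plain NumPy. This is a minimal greedy NMS over scored boxes (not the imutils implementation; boxes and scores below are invented for illustration):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy non-maximum suppression. boxes: (N, 4) as [x1, y1, x2, y2]."""
    order = np.argsort(scores)[::-1]  # best score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection-over-union of the best box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep

# Two near-duplicate boxes plus one separate box
boxes = np.array([[10, 10, 30, 30],
                  [11, 10, 31, 30],
                  [50, 50, 70, 70]], dtype=float)
scores = np.array([0.95, 0.93, 0.91])
print(nms(boxes, scores))  # keeps box 0 (suppressing box 1) and box 2
```

Unlike the imutils call above, this version ranks boxes by their match scores, which addresses the caveat in the previous paragraph.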

Marc Stogaitis