
The output that I get is just the reference image, and no bounding boxes are drawn. I have tried the code from this website: https://www.sicara.fr/blog-technique/object-detection-template-matching

Here's the reference image: [Reference Image]

Here are the templates:

Template 1: [Template 1]

Template 2: [Template 2]

Template 3: [Template 3]

According to the website, the output of the code should look like this:

Expected output: [Expected Output]

I am expecting to get this output as shown on the website; however, when I run the code, nothing is detected. Here is the code that I copied:

import cv2
import numpy as np

DEFAULT_TEMPLATE_MATCHING_THRESHOLD = 0.9

class Template:
    """
    A class defining a template
    """

    def __init__(self, image_path, label, color, matching_threshold=DEFAULT_TEMPLATE_MATCHING_THRESHOLD):
        """
        Args:
            image_path (str): path of the template image
            label (str): the label corresponding to the template
            color (List[int]): the color associated with the label (to plot detections)
            matching_threshold (float): the minimum similarity score to consider an object is detected by template
                matching
        """
        self.image_path = image_path
        self.label = label
        self.color = color
        self.template = cv2.imread(image_path)
        self.template_height, self.template_width = self.template.shape[:2]
        self.matching_threshold = matching_threshold

image = cv2.imread("reference.jpg")

templates = [
    Template(image_path="Component1.jpg", label="1", color=(0, 0, 255), matching_threshold=0.99),
    Template(image_path="Component2.jpg", label="2", color=(0, 255, 0), matching_threshold=0.91),
    Template(image_path="Component3.jpg", label="3", color=(0, 191, 255), matching_threshold=0.99),
]

detections = []
for template in templates:
    # note: the search image comes first, the template second
    template_matching = cv2.matchTemplate(image, template.template, cv2.TM_CCORR_NORMED)
    match_locations = np.where(template_matching >= template.matching_threshold)

    for (x, y) in zip(match_locations[1], match_locations[0]):
        match = {
            "TOP_LEFT_X": x,
            "TOP_LEFT_Y": y,
            "BOTTOM_RIGHT_X": x + template.template_width,
            "BOTTOM_RIGHT_Y": y + template.template_height,
            "MATCH_VALUE": template_matching[y, x],
            "LABEL": template.label,
            "COLOR": template.color
        }
        detections.append(match)

def compute_iou(boxA, boxB):
    xA = max(boxA["TOP_LEFT_X"], boxB["TOP_LEFT_X"])
    yA = max(boxA["TOP_LEFT_Y"], boxB["TOP_LEFT_Y"])
    xB = min(boxA["BOTTOM_RIGHT_X"], boxB["BOTTOM_RIGHT_X"])
    yB = min(boxA["BOTTOM_RIGHT_Y"], boxB["BOTTOM_RIGHT_Y"])
    interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1)
    boxAArea = (boxA["BOTTOM_RIGHT_X"] - boxA["TOP_LEFT_X"] + 1) * (boxA["BOTTOM_RIGHT_Y"] - boxA["TOP_LEFT_Y"] + 1)
    boxBArea = (boxB["BOTTOM_RIGHT_X"] - boxB["TOP_LEFT_X"] + 1) * (boxB["BOTTOM_RIGHT_Y"] - boxB["TOP_LEFT_Y"] + 1)
    iou = interArea / float(boxAArea + boxBArea - interArea)
    return iou

def non_max_suppression(objects, non_max_suppression_threshold=0.5, score_key="MATCH_VALUE"):
    """
    Filter objects overlapping with IoU over threshold by keeping only the one with maximum score.
    Args:
        objects (List[dict]): a list of objects dictionaries, with:
            {score_key} (float): the object score
            {top_left_x} (float): the top-left x-axis coordinate of the object bounding box
            {top_left_y} (float): the top-left y-axis coordinate of the object bounding box
            {bottom_right_x} (float): the bottom-right x-axis coordinate of the object bounding box
            {bottom_right_y} (float): the bottom-right y-axis coordinate of the object bounding box
        non_max_suppression_threshold (float): the minimum IoU value used to filter overlapping boxes when
            conducting non-max suppression.
        score_key (str): score key in objects dicts
    Returns:
        List[dict]: the filtered list of dictionaries.
    """
    sorted_objects = sorted(objects, key=lambda obj: obj[score_key], reverse=True)
    filtered_objects = []
    for object_ in sorted_objects:
        overlap_found = False
        for filtered_object in filtered_objects:
            iou = compute_iou(object_, filtered_object)
            if iou > non_max_suppression_threshold:
                overlap_found = True
                break
        if not overlap_found:
            filtered_objects.append(object_)
    return filtered_objects
NMS_THRESHOLD = 0.2
detections = non_max_suppression(detections, non_max_suppression_threshold=NMS_THRESHOLD)
image_with_detections = image.copy()

for detection in detections:
    cv2.rectangle(
        image_with_detections,
        (detection["TOP_LEFT_X"], detection["TOP_LEFT_Y"]),
        (detection["BOTTOM_RIGHT_X"], detection["BOTTOM_RIGHT_Y"]),
        detection["COLOR"],
        2,
    )
    cv2.putText(
        image_with_detections,
        f"{detection['LABEL']} - {detection['MATCH_VALUE']}",
        (detection["TOP_LEFT_X"] + 2, detection["TOP_LEFT_Y"] + 20),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5,
        detection["COLOR"], 1,
        cv2.LINE_AA,
    )

# NMS_THRESHOLD = 0.2
# detection = non_max_suppression(detections, non_max_suppression_threshold=NMS_THRESHOLD)

print("Image written to file-system: ", status)
cv2.imshow("res", image_with_detections)
cv2.waitKey(0)
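
A quick way to see why nothing clears the thresholds is to print the best score each template achieves with cv2.minMaxLoc() and compare it against the matching_threshold values. This is only a diagnostic sketch reusing the variables from the script above:

# Diagnostic: print the best correlation score per template, to check
# whether the matching_threshold values are simply higher than any score reached.
for template in templates:
    result = cv2.matchTemplate(image, template.template, cv2.TM_CCORR_NORMED)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
    print("template", template.label, "best score:", max_val, "at", max_loc)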

This is how his final output looks: [Final Output]

Here's my attempt at detecting the larger components; the code was able to detect them, and here is the result: [Result]

Here are the resized templates and the original components that I wanted to detect but unfortunately can't:

[1st] [2nd] [3rd]

yessi
  • You have not shown your template! How can we comment? Perhaps you need a mask for the template. cv2.matchTemplate() permits the use of a mask. I do not recommend using TM_COEFF_NORMED. Use TM_SQDIFF, TM_SQDIFF_NORMED or TM_CORR_NORMED. – fmw42 Feb 08 '23 at 06:11
  • Already edited and attached the templates. I will try to search about the mask that you suggested, and will also try to change the method and use others instead. – yessi Feb 08 '23 at 06:56
  • Those type templates probably do not need a mask. – fmw42 Feb 08 '23 at 16:15
  • Post your original input without the green boxes so we can test your template against your image. – fmw42 Feb 08 '23 at 16:29
  • @fmw42 already updated the post with the reference image. – yessi Feb 08 '23 at 23:09
  • Your templates likely do not match because their scale (dimensions) do not match those in the image. One-shot template matching is not scale or rotation invariant and so will not match if the template and the image region where it is supposed to match are not the same size. Also, you are searching for matches above a threshold. What happens when your threshold is too high -- no matches. You should search for the max template match score and where it is located. See cv2.minMaxLoc() – fmw42 Feb 08 '23 at 23:29
  • https://www.sicara.fr/blog-technique/object-detection-template-matching - I have tried on running this code that the OP has provided. And here he has defined a function for non max suppression (which I added also in the post). Btw, thanks for pointing out about the scale and rotation invariant for one-shot template. – yessi Feb 09 '23 at 00:03
  • Yes, you would need non-max suppression. But if your threshold is too large, then you still get no matches. If you only need to match one region (not your case), then searching for the max score and its location avoids the threshold. When you need to match multiple regions (as in your image), then you would likely want to use a threshold. But then you have to tune the threshold for the image and how good your template will match. – fmw42 Feb 09 '23 at 00:32
  • I tried to scale the templates based on the image and tried to change the threshold; however, I am still not able to detect the small components, though I am able to detect the 3 components that are big in size (see attached picture at the end of the post). – yessi Feb 09 '23 at 04:15
  • The small ones are more blurry in your image and so may not match your scaled template. Have you tried lowering the threshold? Also, have you changed your template method from cv2.TM_CCOEFF_NORMED to one of the other methods that I suggested? – fmw42 Feb 09 '23 at 05:42
  • Yeah, I did try using different methods such as TM_SQDIFF, TM_SQDIFF_NORMED, etc., and also tried tweaking the value of the threshold for each method, but the output is bad. So far, the closest to my previous result was TM_CORR_NORMED. But the small components are still not detected, only the larger ones. – yessi Feb 09 '23 at 07:11
  • You do know that TM_SQDIFF, TM_SQDIFF_NORMED are best at 0, not 1? So you need to threshold at a low value. Post your resized template that does not work. – fmw42 Feb 09 '23 at 17:33
  • Yes I did set it nearing value of 0, and I've attached the resized template. – yessi Feb 10 '23 at 05:45
  • Sorry, it looks like you found all 3 templates. Which one did you not find? Where have you posted its template? – fmw42 Feb 10 '23 at 16:48
  • @fmw42 sorry, I got an error last time I tried to edit it, but the resized templates that I am referring to are now attached at the end of the post. – yessi Feb 13 '23 at 02:33
  • Are you still using TM_COEFF_NORMED? I suggested others that were better. If not using that, then please update your code to show your current code. Perhaps your threshold is still not low enough. – fmw42 Feb 13 '23 at 03:01
  • I downloaded your resized templates and they are large (800x500) with lots of white padding around the actual image. That may be why they do not work. Get rid of the white padding. – fmw42 Feb 13 '23 at 03:07
  • May I ask if the white padding that you mean is the white background itself? I identified the pixel size of the smaller components from the reference, and I made the size of the image the same as the reference image size, which is around 800x500. I updated the code and tried using CORR_NORMED as the template matching method, but it still only detects the larger components and not the smaller ones. Also, when I tried TM_SQDIFF and TM_SQDIFF_NORMED, they couldn't detect the larger components accurately; a lot of bounding boxes appear in my output, and they can't detect the smaller components either. – yessi Feb 13 '23 at 04:53
  • When you resize, you do not want to pad with white to the size you had before or the size of the reference image. You want it to be only the image that you had before for the template, but at smaller dimensions with no padding. When you do that, it should match with one of the methods that I suggested. Be sure that your non-max suppression is not eliminating your close-together small items; perhaps you are suppressing items that are too close together. A good test would be to turn off non-max suppression and search just for the best match. Then tune your threshold. (See the sketch after these comments.) – fmw42 Feb 13 '23 at 16:57
  • I have posted my approach that works to find your smallest template. See my answer below. – fmw42 Feb 14 '23 at 00:19
  • I have tried to redo the sizing of the template and it finally worked. Thank you for providing and explaining the code it was really useful and comprehensive. – yessi Feb 15 '23 at 02:34
  • Do you happen to have any idea of what approach I should use if I wanted to detect a missing component? For example, with the A3 resistors that you have detected, what if there are only 3 resistors instead of 4 in the input compared to the reference PCB? – yessi Feb 15 '23 at 04:42
  • You do not have to use my method. Perhaps yours knows how to stop. Otherwise, with my method, you would set the stopping threshold so that it gets only the good ones. – fmw42 Feb 15 '23 at 04:49
  • Got it, will now try to work on other features on this. Grateful for your help and time. – yessi Feb 16 '23 at 00:12
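
Based on the suggestions in the comments above (remove the white padding around the template, scale it down to the size at which the component appears in the reference, and optionally pass a mask to cv2.matchTemplate()), here is a minimal sketch of that preprocessing. The file names, the white-background threshold of 240, and the 0.25 scale factor are placeholders for illustration, not values taken from the actual images:

import cv2

# Placeholder file names; substitute your own reference and template images.
image = cv2.imread("reference.jpg")
template = cv2.imread("Component2.jpg")

# 1. Crop away the white padding so the template contains only the component.
gray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
_, background = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)   # white pixels -> 255
coords = cv2.findNonZero(255 - background)                         # non-white (component) pixels
x, y, w, h = cv2.boundingRect(coords)
template = template[y:y + h, x:x + w]

# 2. Scale the cropped template to the size the component appears at in the reference.
scale = 0.25   # placeholder; measure the component in the reference image
template = cv2.resize(template, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

# 3. Match. A mask (same size as the template) can optionally be passed as well;
#    masks are supported only for some methods, e.g. TM_SQDIFF and TM_CCORR_NORMED.
result = cv2.matchTemplate(image, template, cv2.TM_CCORR_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
print("best match value:", max_val, "at", max_loc)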

1 Answer


Here is a method for finding multiple matches in template matching in Python/OpenCV using your reference and smallest template. I have removed all the white padding you had around your template. My method simply draws a black rectangle over the correlation image where it matches and then repeats, looking for the next best match in the modified correlation image.

I have used cv2.TM_CCORR_NORMED and a match threshold of 0.90. You have 4 of these templates showing in your reference image, so I set my search number to 4 and a spacing of 10 for the non-maximum suppression by masking. You have other small items of the same shape and size, but the text on them is different, so you will need different templates for each.

Reference:

[Reference image]

Template:

[Template image]

import cv2
import numpy as np

# read image
img = cv2.imread('circuit_board.jpg')

# read template
tmplt = cv2.imread('circuit_item.png')
hh, ww, cc = tmplt.shape

# set arguments
match_thresh = 0.90               # stopping threshold for match value
num_matches = 4                   # stopping threshold for number of matches
match_radius = 10                 # approx radius of match peaks
match_radius2 = match_radius//2

# get correlation surface from template matching
corrimg = cv2.matchTemplate(img,tmplt,cv2.TM_CCORR_NORMED)
hc, wc = corrimg.shape

# get locations of all peaks higher than match_thresh for up to num_matches
imgcopy = img.copy()
corrcopy = corrimg.copy()
for i in range(0, num_matches):
    # get max value and location of max
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(corrcopy)
    x1 = max_loc[0]
    y1 = max_loc[1]
    x2 = x1 + ww
    y2 = y1 + hh
    loc = str(x1) + "," + str(y1)
    if max_val > match_thresh:
        print("match number:", i+1, "match value:", max_val, "match x,y:", loc)
        # draw white bounding box to define match location
        cv2.rectangle(imgcopy, (x1,y1), (x2,y2), (255,255,255), 1)
        # insert black rectangle over copy of corr image so we do not find that match again
        corrcopy[y1-match_radius2:y1+match_radius2, x1-match_radius2:x1+match_radius2] = 0
    else:
        break
    
# save results
# power of 4 exaggeration of correlation image to emphasize peaks
cv2.imwrite('circuit_board_multi_template_corr.png', (255*cv2.pow(corrimg,4)).clip(0,255).astype(np.uint8))
cv2.imwrite('circuit_board_multi_template_corr_masked.png', (255*cv2.pow(corrcopy,4)).clip(0,255).astype(np.uint8))
cv2.imwrite('circuit_board_multi_template_match.png', imgcopy)


# show results
# power of 4 exaggeration of correlation image to emphasize peaks
cv2.imshow('image', img)
cv2.imshow('template', tmplt)
cv2.imshow('corr', cv2.pow(corrimg,4))
cv2.imshow('corr masked', cv2.pow(corrcopy,4))
cv2.imshow('result', imgcopy)
cv2.waitKey(0)
cv2.destroyAllWindows()

Original Correlation Image:

[Original correlation image]

Modified Correlation Image after 4 matches:

[Modified correlation image after 4 matches]

Matches Marked on Input as White Rectangles:

[Matches marked on input]

Match Locations:

match number: 1 match value: 0.9982172250747681 match x,y: 128,68
match number: 2 match value: 0.9762057065963745 match x,y: 128,90
match number: 3 match value: 0.9755787253379822 match x,y: 128,48
match number: 4 match value: 0.963689923286438 match x,y: 127,107
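
For comparison, if TM_SQDIFF_NORMED is used instead (as discussed in the comments, its best match is near 0 rather than 1), the same masking idea roughly inverts: take the minimum of the difference surface, accept values below a low stopping threshold, and mask each used peak with a high value. This is only a sketch; the 0.10 stopping threshold is a guess that would need tuning, and the file names are the ones from the script above:

import cv2

img = cv2.imread('circuit_board.jpg')
tmplt = cv2.imread('circuit_item.png')
hh, ww = tmplt.shape[:2]

num_matches = 4
match_radius2 = 10 // 2
match_thresh = 0.10   # guessed stopping threshold; SQDIFF matches are accepted BELOW this value

corrcopy = cv2.matchTemplate(img, tmplt, cv2.TM_SQDIFF_NORMED).copy()
imgcopy = img.copy()

for i in range(num_matches):
    # for SQDIFF the best match is the MINIMUM of the difference surface
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(corrcopy)
    x1, y1 = min_loc
    if min_val < match_thresh:
        print("match:", i + 1, "value:", min_val, "x,y:", x1, y1)
        cv2.rectangle(imgcopy, (x1, y1), (x1 + ww, y1 + hh), (255, 255, 255), 1)
        # mask the used peak with a HIGH value so it is not found again
        corrcopy[y1 - match_radius2:y1 + match_radius2,
                 x1 - match_radius2:x1 + match_radius2] = 1.0
    else:
        break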
fmw42
  • With your code here, it was able to detect the x and y coordinates of each component via template matching. What if the input image has a missing component, let's say the bottom resistor is missing; is it possible to tell that the bottom resistor is missing because nothing is detected at x=128 and y=68? – yessi Feb 20 '23 at 01:55
  • Not unless you store the expected locations and check against them (with some tolerance allowed). A sketch of this idea follows after these comments. – fmw42 Feb 20 '23 at 02:32
  • This would take me some time to code this algorithm, so I'll try it first and come back here to update if I am able to do it. Always grateful for your input. – yessi Feb 20 '23 at 04:29
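
Following the suggestion above, one possible way to flag a missing component is to store the expected positions once (taken from the reference board) and check each expected position against the detections within a pixel tolerance. The component names and the tolerance below are purely illustrative; the coordinates are the match locations printed by the answer's script:

import math

# Hypothetical expected positions (recorded once from the reference board), in pixels.
expected_positions = {
    "A3_resistor_1": (128, 48),
    "A3_resistor_2": (128, 68),
    "A3_resistor_3": (128, 90),
    "A3_resistor_4": (127, 107),
}
tolerance = 10  # max allowed distance in pixels between expected and detected positions

def find_missing(detected, expected=expected_positions, tol=tolerance):
    """Return the names of expected components with no detection within tol pixels."""
    missing = []
    for name, (ex, ey) in expected.items():
        hit = any(math.hypot(dx - ex, dy - ey) <= tol for (dx, dy) in detected)
        if not hit:
            missing.append(name)
    return missing

# Example: matches were found at only three of the four expected spots.
print(find_missing([(128, 68), (128, 90), (128, 48)]))
# -> ['A3_resistor_4']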