
I'm a newbie to computer vision, and I'm trying to detect all the test strips in this image:

source

The result I'm trying to get:

result

I assume it should be fairly easy, because all the target objects are rectangular and have a fixed aspect ratio. But I have no idea which algorithm or function I should use.

I've tried edge detection and the 2D feature detection example in OpenCV, but the results are not ideal. How should I detect these similar objects, which differ only in small details?
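
For reference, the edge-detection attempt I mentioned was roughly the following (a minimal sketch; the filename and the Canny thresholds are just placeholders):

import cv2

# placeholder filename for the source photo
img = cv2.imread('strips.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
# plain Canny edges: the strip outlines show up, but so does everything else
edges = cv2.Canny(blur, 100, 200)
cv2.imwrite('edges.jpg', edges)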

Update:

The test strips can vary in color, and of course in the shade of the result lines. But they all have the same reference lines, as shown in the picture:

variations of test strips

I don't know how I should describe these simple features for object detection, as most examples I found online are for complex objects like a building or a face.

zhengyue
  • Just some suggestions: 1) try to eliminate the background (threshold), 2) find lines, maybe the [hough line transform](http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/hough_lines/hough_lines.html) can help, 3) find the intersections of those lines, 4) create rectangles from those intersections. Bonus) You may try to isolate each object (contour) from the rest (the rest = black) and analyse them one by one without any other data that could upset your results (a rough sketch of steps 1 and 2 follows these comments). – api55 Jul 08 '17 at 19:07
  • Do you have any more images to see the variations in them? – Mark Setchell Jul 08 '17 at 20:51
  • @MarkSetchell Please see my updates – zhengyue Jul 09 '17 at 06:45
  • Can the background be a different colour, e.g. blue? Is the background always plain, or could it be a pattern sometimes? Is the number of strips fixed - or how many could there be - max? min? Are the strips always horizontal-ish? – Mark Setchell Jul 09 '17 at 07:57
  • @MarkSetchell The background can be a different color. Actually, I can choose the color / style of background if that could make detection easier. I hope the number of strips doesn't matter, but again, I can decide how many if that could make detection easier. As the direction of the strips, I hope it doesn't have to be lined up perfectly, but overall horizontal-ish is acceptable. – zhengyue Jul 09 '17 at 16:17
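
A minimal, hypothetical sketch of steps 1) and 2) from @api55's comment (threshold away the background, then run a Hough line transform); the filename and all parameter values are assumptions to be tuned:

import cv2
import numpy as np

img = cv2.imread('strips.jpg')                     # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1) separate the strips from the background with Otsu's threshold
#    (use THRESH_BINARY_INV instead if the background is lighter than the strips)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 2) probabilistic Hough transform on the edges of the mask
edges = cv2.Canny(mask, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite('hough_sketch.jpg', img)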

2 Answers


The solution is not exact, but it provides a good starting point. You will have to play with the parameters, though. It would help a lot if you partitioned the strips using some threshold method and then applied Hough lines to each strip individually, as @api55 mentioned (a rough sketch of that idea follows the code below).

Here are the results I got.

after Laplacian, after thresholding and median filtering, final image

Code.

import cv2
import numpy as np

# read image
img = cv2.imread('KbxN6.jpg')
# filter it
img = cv2.GaussianBlur(img, (11, 11), 0)
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# get edges using laplacian
laplacian_val = cv2.Laplacian(gray_img, cv2.CV_32F)

# lap_img = np.zeros_like(laplacian_val, dtype=np.float32)
# cv2.normalize(laplacian_val, lap_img, 1, 255, cv2.NORM_MINMAX)
# cv2.imwrite('laplacian_val.jpg', lap_img)

# apply threshold to edges
ret, laplacian_th = cv2.threshold(laplacian_val, thresh=2, maxval=255, type=cv2.THRESH_BINARY)
# filter out salt and pepper noise
laplacian_med = cv2.medianBlur(laplacian_th, 5)
# cv2.imwrite('laplacian_blur.jpg', laplacian_med)
laplacian_fin = np.array(laplacian_med, dtype=np.uint8)

# get lines in the filtered laplacian using Hough lines
lines = cv2.HoughLines(laplacian_fin, 1, np.pi/180, 480)
# in OpenCV 3+/4, HoughLines returns an array of shape (N, 1, 2)
for rho, theta in lines[:, 0]:
    a = np.cos(theta)
    b = np.sin(theta)
    x0 = a*rho
    y0 = b*rho
    x1 = int(x0 + 1000*(-b))
    y1 = int(y0 + 1000*(a))
    x2 = int(x0 - 1000*(-b))
    y2 = int(y0 - 1000*(a))
    # overlay line on original image
    cv2.line(img,(x1,y1),(x2,y2),(0,255,0),2)

# cv2.imwrite('processed.jpg', img)
# cv2.imshow('Window', img)
# cv2.waitKey(0)
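
As mentioned above, a cleaner variant is to partition the strips first and run Hough lines inside each strip separately. A rough, untested sketch of that idea (the threshold choice, Canny values, and minimum blob area are assumptions):

import cv2
import numpy as np

img = cv2.imread('KbxN6.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# separate the strips from the background (invert if the background is lighter)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# each connected component of the mask is (ideally) one strip
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):                        # label 0 is the background
    x, y, w, h, area = stats[i].tolist()
    if area < 1000:                          # ignore small blobs (tune this)
        continue
    roi = gray[y:y + h, x:x + w]
    edges = cv2.Canny(roi, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)
    if lines is None:
        continue
    for rho, theta in lines[:, 0]:
        a, b = np.cos(theta), np.sin(theta)
        x0, y0 = a * rho, b * rho
        # shift the line endpoints back into full-image coordinates
        p1 = (int(x0 + 1000 * (-b)) + x, int(y0 + 1000 * a) + y)
        p2 = (int(x0 - 1000 * (-b)) + x, int(y0 - 1000 * a) + y)
        cv2.line(img, p1, p2, (0, 255, 0), 2)

cv2.imwrite('per_strip_hough.jpg', img)
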
harshkn

This is an alternative solution, using the function findContours in combination with Canny edge detection. The code is loosely based on this tutorial.

import cv2
import numpy as np
import imutils

image = cv2.imread('test.jpg')
resized = imutils.resize(image, width=300)
ratio = image.shape[0] / float(resized.shape[0])

# convert the resized image to grayscale and detect edges with Canny
gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
cv2.imshow('edges', edges)
cv2.waitKey(0)
cnts = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_NONE)
# findContours returns different tuples in OpenCV 2/3/4;
# imutils.grab_contours extracts the contour list in all versions
cnts = imutils.grab_contours(cnts)

# loop over the contours
for c in cnts:
    # compute the center of the contour; skip degenerate contours
    # (zero area) to avoid a division by zero
    M = cv2.moments(c)
    if M["m00"] == 0:
        continue
    cX = int((M["m10"] / M["m00"]) * ratio)
    cY = int((M["m01"] / M["m00"]) * ratio)


    # multiply the contour (x, y)-coordinates by the resize ratio,
    # then draw the contours and the name of the shape on the image
    c = c.astype("float")
    c *= ratio
    c = c.astype("int")
    cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
    #show the output image
    #cv2.imshow("Image", image)
    #cv2.waitKey(0)
cv2.imwrite("erg.jpg",image)

Result:


I guess it can be improved by tuning the parameters, e.g. the Canny thresholds.

It is maybe also useful to filter out small contours, or to merge contours that are close to each other; a rough sketch of such a filter follows.
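
A possible filter along those lines, written as a small helper (the area and aspect-ratio limits are assumed values to tune for the strips):

import cv2

def filter_strip_contours(cnts, min_area=500, min_aspect=3.0, max_aspect=10.0):
    # keep only contours that are big enough and roughly strip-shaped,
    # judged by the rotated bounding box from cv2.minAreaRect
    kept = []
    for c in cnts:
        if cv2.contourArea(c) < min_area:
            continue
        (_, _), (w, h), _ = cv2.minAreaRect(c)
        if w == 0 or h == 0:
            continue
        aspect = max(w, h) / min(w, h)
        if min_aspect <= aspect <= max_aspect:
            kept.append(c)
    return kept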

TruckerCat