
For my internship, I am trying to extract this type of aluminum wire from the acquired vision camera footage. The purpose is to extract those connections and classify them with machine learning. My idea is to extract the connections in order to remove all of the noise (background) and then analyze the gray-value density along the bond; a dip in the gray-value plot would indicate a broken wire.
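
To make the intended gray-value analysis concrete, here is a rough sketch of what it could look like once a clean wire mask exists; the file names, the assumption that the wire runs left to right, and the 0.6 dip threshold are placeholders for illustration, not part of my current code:

import cv2
import numpy as np

# hypothetical inputs: a grayscale crop of the bond and a binary mask of the wire
gray = cv2.imread("wire_crop.png", cv2.IMREAD_GRAYSCALE)
mask = cv2.imread("wire_mask.png", cv2.IMREAD_GRAYSCALE)
wire = mask > 0

# mean gray value of the wire pixels per image column (assumes the wire runs left to right)
column_sums = (gray * wire).sum(axis=0).astype(np.float64)
column_counts = wire.sum(axis=0)
profile = np.divide(column_sums, column_counts,
                    out=np.zeros_like(column_sums), where=column_counts > 0)

# flag columns whose mean gray value falls well below the median of the profile
baseline = np.median(profile[column_counts > 0])
dip_columns = np.where((column_counts > 0) & (profile < 0.6 * baseline))[0]
print("possible break near columns:", dip_columns)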

I am very new to vision, and I have tried to really dive into edge detection and segmentation. My problem is that I cannot fully remove the noise, which results in the following edge detection with the Canny operator. The Sobel operator results in too much noise.

This is the best I have achieved in the last few days. Hopefully you can help me with preprocessing this image before the Canny operator, but tips on capturing the object are also welcome. Because of limitations in the space and process where this footage is taken, physical additions to the vision setup are difficult, but suggestions are still appreciated, so please also comment on what I can do to improve the acquisition.

My code:

import numpy as np
from PIL import Image
import cv2

blur = 3
canny_low = 15
canny_high = 230
min_area = 0.005
max_area = 0.025
dilate_iter = 10
erode_iter = 10
mask_color = (0.0,0.0,0.0)

image1 = cv2.imread(r"C:/Users/User/Pictures/vlcsnap-2022-11-21-15h59m05s146.png")
image2 = cv2.imread(r"C:/Users/User/Pictures/template2.png")

# function for object extraction from  background
def bgRemoval_seg(source, template):
    global blur, canny_low, canny_high, min_area, max_area, dilate_iter, erode_iter, mask_color
    # convert the source image and template to grayscale
    source = cv2.cvtColor(source, cv2.COLOR_BGR2GRAY)
    template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
    # add gaussian filter for smooth blur
    source = cv2.GaussianBlur(source, (blur,blur), 0)
    template = cv2.GaussianBlur(template, (blur,blur), 0)
    # add a bilateral filter to remove A LOT of noise (I tried various values with this filter)
    source = cv2.bilateralFilter(source,7,100,100)
    template = cv2.bilateralFilter(template,7,100,100)
    

    # apply adaptive contrast (CLAHE), which increases the number of contours around the object (but also increases noise)
    clahe = cv2.createCLAHE(clipLimit=3.7, tileGridSize=(4,4))
    source = clahe.apply(source)
    template = clahe.apply(template)
    
    cv2.imshow("contrast", source)
    cv2.imshow("contrast2", template)
    
    # apply Canny Operator for edge detection
    edges1 = cv2.Canny(source, canny_low, canny_high)
    edges2 = cv2.Canny(template, canny_low, canny_high)
    
    # dilate and erode the image to remove more noise

    edges1 = cv2.dilate(edges1, None)
    edges2 = cv2.dilate(edges2, None)
    
    edges1 = cv2.erode(edges1, None)
    edges2 = cv2.erode(edges2, None)

    
    edges1  = np.array(edges1)
    edges2  = np.array(edges2)
    
    cv2.imshow("edges1", edges1)
    cv2.imshow("edges2", edges2)

    # get the contours and their areas
    contour_info_1 = [(c, cv2.contourArea(c),) for c in cv2.findContours(edges1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]]
    contour_info_2 = [(c, cv2.contourArea(c),) for c in cv2.findContours(edges2, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]]
   

    # Get the area of the image as a comparison
    image_area = source.shape[0] * source.shape[1]
        
    # calculate max and min areas in pixels (local names, so the global
    # fractions are not re-multiplied on every call)
    max_area_px = max_area * image_area
    min_area_px = min_area * image_area

    
    # Set up mask with a matrix of 0's
    mask1 = np.zeros(edges1.shape, dtype = np.uint8)
    # Go through and find relevant contours and apply to mask
    for i in range(0, len(contour_info_1)):
        # keep only contours whose area lies between the min and max thresholds
        contour1 = contour_info_1[i]
        if min_area_px < contour1[1] < max_area_px:
            # add the contour to the mask (note: fillConvexPoly assumes a convex contour)
            mask1 = cv2.fillConvexPoly(mask1, contour1[0], (255))
        
    
    # use dilate, erode, and blur to smooth out the mask
    mask = mask1
    mask = cv2.dilate(mask, None, iterations=dilate_iter)
    mask = cv2.erode(mask, None, iterations=erode_iter)
    mask = cv2.GaussianBlur(mask, (blur,blur), 0)
    mask = np.array(mask)
    # Ensures data types match up
    mask_color = np.array(mask_color)
    mask_color = np.reshape(mask_color,[1,3])
    mask = mask.astype('float32') / 255.0           
    source= source.astype('float32') / 255.0
    # Blend the image and the mask
    masked = (mask * source)
    masked = (masked * 255).astype('uint8')

    return masked




while(True):
    
    # Get Region of interest
    x,y,w,h = cv2.selectROI(image1)
    # Recommended values for the crop
    # X: 145 , Y: 292 , W: 1035 , H: 445 
   
    # Crop image and use same crop for template
    imageCrop1 = image1[int(y):int(y+h), int(x):int(x+w)]
    
    print(x,y,w,h)
    
    imageCrop2 = image2[int(y):int(y+h), int(x):int(x+w)]
    
    # Display the resulting frame
    
    cv2.imshow("Foreground Canny ",bgRemoval_seg(imageCrop1, imageCrop2))

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, close all windows

cv2.destroyAllWindows()

Original image: [original image]

After filters (first the original, second the template, i.e. how it needs to look). Original: [after bilateral filters and adaptive contrast filter]

Template: [template after bilateral filters and adaptive contrast filter]

Canny operation. Original: [a lot of noise caused by shadows and background noise] Template (the ultimate goal): [template after the Canny operation]

Object extraction from the background: [end result]

I hope you can help me!

I have already tried different filters. Furthermore, I tried the GrabCut algorithm. I also did some thresholding, but stopped early and did not dive into that. I also tried division with a Gaussian filter, but the result remained the same.
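
For clarity, the division with a Gaussian that I tried looks roughly like the sketch below; the kernel size of 51 is only an example value, not something I have tuned:

import cv2

# sketch of the "division by a Gaussian blur" idea (flat-field style shading correction);
# the kernel size of 51 is only an example value
image1 = cv2.imread(r"C:/Users/User/Pictures/vlcsnap-2022-11-21-15h59m05s146.png")
gray = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
background = cv2.GaussianBlur(gray, (51, 51), 0)
normalized = cv2.divide(gray, background, scale=255)  # evens out uneven illumination
cv2.imshow("division by Gaussian", normalized)
cv2.waitKey(0)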

  • I don't see a chance to get this running reliably, not with that busy of a background. Some of it you could get rid of by filtering on color/saturation, but there's grey metallic there too, exactly the same as the wire. And the task is to find damaged wires, supposedly? Because those will not be guaranteed to have the same kind of edges as your training image, invalidating the whole edge search/filter routine. Not doable with classic image processing would be my vote. Throw it into the neural network and hope for the best, sorry. – nick Nov 22 '22 at 11:36
  • You likely need AI/Deep Learning with training. I get a good-looking result from using http://remove.bg on your image – fmw42 Nov 22 '22 at 17:48

1 Answer


As noted in the comments, this is a tricky problem, because the image has lots of edges that you don't care about, and it's hard to filter by color either. However, there is one feature which I think could be helpful: the blur. Specifically, the wire is in focus, and the rest of the shot is not.

You could exploit this fact using a Laplacian filter. A Laplacian filter is usually used to detect edges by looking at where its output crosses zero. However, it can also be used to detect blur, by finding regions where the filter values are small across a wide area. To get the entire wire, I apply a Gaussian smoothing filter after the Laplacian filter, which smears the high values across the width of the wire. Then, the result is thresholded.

import cv2
import matplotlib.pyplot as plt
import numpy as np
import scipy.ndimage

image = cv2.imread('test192_img.png')

laplacian_spread_distance = 15  # distance to spread laplacian in pixels
wire_threshold = 110  # Out of 255. Higher values mean less of the image is kept.

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
fm = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
fm = scipy.ndimage.gaussian_filter(fm, sigma=laplacian_spread_distance)
fm /= fm.max() / 255
fm = fm.astype('uint8')
ret2, thresholded = cv2.threshold(fm, wire_threshold, 1, cv2.THRESH_BINARY)
extracted = thresholded.reshape(image.shape[:2] + (1,)) * image

Output from this filter: [wire extract]

This method assumes that the wire is in focus. That assumption might not be justified if you have an auto-focus camera, or if the distance between the camera and the wire is changing.
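
If you want to verify that assumption on your own footage, the variance of the Laplacian is a commonly used sharpness measure. A minimal sketch (the threshold of 100 is only an illustrative guess and would need tuning for your setup):

import cv2

# quick focus check: the variance of the Laplacian as a sharpness measure
image = cv2.imread('test192_img.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
if sharpness < 100.0:  # illustrative threshold, tune per setup
    print(f"Frame may be out of focus (variance of Laplacian = {sharpness:.1f})")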

Nick ODell