
I am using a Raspberry Pi and its camera to run an image-processing algorithm. I perform background subtraction on successive frames of the captured stream, check whether an object is present in the image and, if so, print its area. The algorithm works as expected, but there is a problem.

The thresholding function, which uses cv2.THRESH_OTSU, produces a grainy image whenever no object is present, i.e. when the background and foreground images are the same. The noise/grain disappears as soon as an object appears in the foreground image. These are as follows:

  1. Same Background Image and Foreground Image with noise
  2. Different Background and Foreground Image without any noise

As you can see, when the images are almost the same the noise is present, and once an object is introduced into the frame the noise vanishes.

I have tried the following to remove the noise, but none of it worked:

  1. Used only cv2.THRESH_BINARY / cv2.THRESH_BINARY_INV, without Otsu binarization.

  2. Increased the brightness/contrast/saturation of the captured image to see if the behaviour changes, but there was no difference.

  3. Increased/decreased the amount of erosion/dilation preceding the thresholding step, but this did not make any difference either.

This is my code:

from time import sleep
from picamera import PiCamera
from picamera.array import PiRGBArray
import cv2, os
import numpy as np


def imageSubtract(img):
    # Despite the name, this only smooths and grayscales a frame;
    # the actual subtraction happens later with cv2.absdiff().
    bilateral_filtered_image = cv2.bilateralFilter(img, 9, 170, 170)
    bilateral_filtered_image = cv2.cvtColor(bilateral_filtered_image, cv2.COLOR_BGR2GRAY)
    return bilateral_filtered_image

def imageProcessing():
    camera = PiCamera()
    camera.resolution = (512,512)
    camera.awb_mode="fluorescent"
    camera.iso = 800
    camera.contrast=33
    camera.brightness=75
    camera.sharpness=100
    rawCapture = PiRGBArray(camera, size=(512, 512))

    first_time = 0
    frame_buffer = 0
    camera.start_preview()
    sleep(2)


    for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):

        if first_time == 0:
            rawCapture.truncate(0)
            # Skip the first few frames while the sensor settles,
            # then keep one frame as the background reference.
            if frame_buffer < 10:
                print("Frame rejected -", str(frame_buffer))
                frame_buffer += 1
                continue
            os.system("clear")
            refImg = frame.array
            refThresh = imageSubtract(refImg)
            first_time = 1



        image = frame.array
        cv2.imshow("Foreground", image)
        key = cv2.waitKey(1)
        rawCapture.truncate(0)
        newThresh=imageSubtract(image)

        diff=cv2.absdiff(refThresh,newThresh)
        kernel = np.ones((5,5),np.uint8)

        diff=cv2.dilate(diff,kernel,iterations = 3)
        cv2.imshow("Background",refImg)
        _, thresholded = cv2.threshold(diff, 0 , 255, cv2.THRESH_BINARY +cv2.THRESH_OTSU)


        _, contours, _ = cv2.findContours(thresholded, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        if contours:  # guard against an empty list instead of swallowing all errors
            c = max(contours, key=cv2.contourArea)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(thresholded, (x, y), (x + w, y + h), (125, 125, 125), 2)
            if cv2.contourArea(c) > 500:
                print("Object detected with area = ", cv2.contourArea(c))
        cv2.imshow("Threshold", thresholded)
        if key == ord('q'):  # moved out of the contour branch so quitting always works
            camera.close()
            cv2.destroyAllWindows()
            break

if __name__ == "__main__":
    imageProcessing()

Please help me remove the noise when the background and foreground images are the same.

Thank You !

Boudhayan Dev
  • A relatively easy, but perhaps not robust way would be to compute the relative sum of the `cv2.absdiff()` result, if this is below a certain threshold then you could consider the image 'clean', e.g. no object in it – Jyr Feb 13 '18 at 22:18
  • Otsu is a global segmentation method and calculates the threshold based on the image histogram, so it performs badly for local changes. Maybe try a dynamic threshold. But it seems strange that your result image from the same background and foreground image is not null. Is it really the same image, or just the same scene with different illumination etc.? – PSchn Feb 13 '18 at 22:21
  • @PSchn Maybe the illumination has changed a bit, as the camera is 5MP and the successive frame arrays have slight differences in intensity. But even then, when the object is introduced, the Otsu binarization gives a clear image with just the object's contour. How? – Boudhayan Dev Feb 14 '18 at 04:11
  • @Jyr I'll try that out. Meanwhile, I got a temporary fix by counting the contours found in the image. Say it is more than 100 (when both images are the same), then the frame is clean, and if there are fewer than say 10 contours (object present), then the frame most probably has an object. – Boudhayan Dev Feb 14 '18 at 04:13
  • Look at the image histogram. The Otsu algorithm calculates the best threshold to separate fore- and background. When your subtraction image contains an object, the algorithm calculates the threshold just fine. With no object you cannot separate fore- and background, so you get a bad threshold, I guess. – PSchn Feb 14 '18 at 09:18
  • P.S.: if you use a subtraction image you should update your reference image every time your camera settings, surrounding light etc. have changed. – PSchn Feb 14 '18 at 09:37
  • @PSchn Thank you. You are right that the algorithm cannot distinguish the two images when they are almost equal. I would like to know: is there any way to trigger Otsu binarization only when needed, e.g. when an object is present, and turn it off when both images are more or less the same? I have tried calculating the number of contours in the subtracted image and deciding from that number whether to trigger Otsu, but it is not a very efficient solution, and moreover it does not work well under differing lighting conditions. – Boudhayan Dev Feb 14 '18 at 15:26
  • There are a huge number of methods for illumination invariance that you can play around with, e.g. try histogram equalization. Personally I'd just check whether the pixel change is within an 'allowed' range. It would be best to convert the image to HSV space to do that though, as the B, G and R channels are all correlated with lighting. – Jyr Feb 15 '18 at 01:48

0 Answers