
I'm trying to detect differences between two frames (1 min apart) from a CCTV camera to determine whether a human entered the frame. Due to some technical limitations, I can't use YOLO or other human-detection models, and the default OpenCV HOG detector just isn't good enough.

So I came up with the idea of taking two frames a minute apart and checking the changes between them. If there is too much change, some activity is happening, most likely caused by a human.

I used code I found here on Stack Overflow:

from skimage.metrics import structural_similarity
import cv2
import numpy as np

before = cv2.imread('1.png')
after = cv2.imread('2.png')
before_gray = cv2.cvtColor(before, cv2.COLOR_BGR2GRAY)
after_gray = cv2.cvtColor(after, cv2.COLOR_BGR2GRAY)

# SSIM between the two grayscale frames; `diff` is the per-pixel similarity map
(score, diff) = structural_similarity(before_gray, after_gray, full=True)

diff = (diff * 255).astype("uint8")
diff_box = cv2.merge([diff, diff, diff])

# Otsu threshold on the inverted similarity map, then find the changed regions
thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]

mask = np.zeros(before.shape, dtype='uint8')
filled_after = after.copy()

for c in contours:
    area = cv2.contourArea(c)
    x, y, w, h = cv2.boundingRect(c)
    if area > 5000 and h > w:  # assuming humans are a big target and taller than they are wide
        cv2.rectangle(before, (x, y), (x + w, y + h), (36, 255, 12), 2)
        cv2.rectangle(after, (x, y), (x + w, y + h), (36, 255, 12), 2)
        cv2.rectangle(diff_box, (x, y), (x + w, y + h), (36, 255, 12), 2)
        cv2.drawContours(mask, [c], 0, (255, 255, 255), -1)
        cv2.drawContours(filled_after, [c], 0, (0, 255, 0), -1)

It works OK, but it seems to be too sensitive. For example, these two pictures do show human activity, and it works perfectly:

[Image: two frames, human detected successfully]

But between these two images, where there is no movement and no human, it detects many changes, probably due to lighting (they are only a minute apart):

[Image: two frames, no real change but many differences detected]

Is there a way around this? I think I need to use cv2.adaptiveThreshold because the lighting conditions differ, but I'm not sure how. I changed the threshold line to:

thresh = cv2.adaptiveThreshold(diff, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 21, 4)

It works slightly better, but it still produces a lot of noisy detections.
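For context, this is roughly how that line slots into the pipeline above. Nothing else is changed, and the block size (21) and constant (4) are just the values I tried, not carefully tuned:

from skimage.metrics import structural_similarity
import cv2

before_gray = cv2.cvtColor(cv2.imread('1.png'), cv2.COLOR_BGR2GRAY)
after_gray = cv2.cvtColor(cv2.imread('2.png'), cv2.COLOR_BGR2GRAY)

# Same SSIM diff map as above, scaled to 8-bit
(score, diff) = structural_similarity(before_gray, after_gray, full=True)
diff = (diff * 255).astype("uint8")

# Adaptive threshold instead of Otsu: each pixel of the SSIM map is compared
# against its local 21x21 Gaussian-weighted mean minus 4, so a broad, uniform
# drop in similarity (e.g. from a global lighting change) is less likely to be
# flagged than a localized one.
thresh = cv2.adaptiveThreshold(diff, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 21, 4)

# The rest of the pipeline (findContours, area/aspect-ratio filtering) is unchanged
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]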

