
I am trying to plan an approach for counting vehicles and pedestrians in a video. Here are the basic steps I want to take:

  1. Use background subtraction to separate moving objects from the background.
  2. Use cv2.SimpleBlobDetector to detect blobs in the mask generated by the background-subtraction step and return their keypoints.
  3. Track all blobs using the returned keypoints (not yet implemented in the example).

The question: Can this approach be applied to both pedestrians and vehicles, and if so, how can one distinguish the different blobs?

I am wondering whether the size of the blob could be used to distinguish between pedestrians (small blobs) and vehicles (larger blobs). However, I am not sure how to handle the case of a vehicle that is far from the camera and therefore appears small.
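For illustration, a size-based split over the detector's keypoints might look like the fragment below; `keypoints` comes from the script that follows, and the 50-pixel threshold is an arbitrary assumption that would fail in exactly the distant-vehicle case described above.

    # Naive size-based split: treat small blobs as pedestrians, large ones as vehicles.
    # SIZE_THRESHOLD is an arbitrary assumption and ignores perspective,
    # so a distant vehicle can easily fall below it.
    SIZE_THRESHOLD = 50.0

    pedestrians = [kp for kp in keypoints if kp.size < SIZE_THRESHOLD]
    vehicles = [kp for kp in keypoints if kp.size >= SIZE_THRESHOLD]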

import numpy as np
import cv2

cap = cv2.VideoCapture('video.avi')

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
# MOG background subtractor (constructor lives in cv2 in OpenCV 2.x,
# and in the opencv-contrib bgsegm module from OpenCV 3 onwards)
if int(cv2.__version__.split('.')[0]) < 3:
    fgbg = cv2.BackgroundSubtractorMOG(500, 6, 0.9, 1)
else:
    fgbg = cv2.bgsegm.createBackgroundSubtractorMOG(500, 6, 0.9, 1)

# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()

# Change thresholds
params.minThreshold = 10
params.maxThreshold = 200

# Filter by Area.
params.filterByArea = True
params.minArea = 400

# Filter by Circularity
params.filterByCircularity = True
params.minCircularity = 0.1

# Filter by Convexity
params.filterByConvexity = True
params.minConvexity = 0.87

# Filter by Inertia
params.filterByInertia = True
params.minInertiaRatio = 0.01

# Create a detector with the parameters (the constructor name changed in OpenCV 3)
ver = cv2.__version__.split('.')
if int(ver[0]) < 3:
    detector = cv2.SimpleBlobDetector(params)
else:
    detector = cv2.SimpleBlobDetector_create(params)

while True:
    ret, frame = cap.read()
    if not ret:  # end of video or failed read
        break

    fgmask = fgbg.apply(frame)
    fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)

    # Detect blobs.
    keypoints = detector.detect(fgmask)
    # Draw detected blobs as red circles.
    # cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
    im_with_keypoints = cv2.drawKeypoints(frame, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

    print(keypoints)

    cv2.imshow('frame',im_with_keypoints)

    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
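Step 3 (tracking) is not implemented above. A minimal sketch of nearest-centroid tracking over the keypoint positions could look like the following; the 50-pixel matching radius is an arbitrary assumption that would need tuning for the footage.

    import math

    # Minimal nearest-centroid tracker: match each new keypoint to the closest
    # existing track within MAX_DIST pixels, otherwise start a new track.
    MAX_DIST = 50.0          # arbitrary matching radius, tune for your footage
    tracks = {}              # track_id -> last known (x, y)
    next_id = 0

    def update_tracks(keypoints):
        global next_id
        for kp in keypoints:
            x, y = kp.pt
            # Find the closest existing track within MAX_DIST.
            best_id, best_dist = None, MAX_DIST
            for tid, (tx, ty) in tracks.items():
                d = math.hypot(x - tx, y - ty)
                if d < best_dist:
                    best_id, best_dist = tid, d
            if best_id is None:
                best_id = next_id
                next_id += 1
            tracks[best_id] = (x, y)

    # Call update_tracks(keypoints) once per frame inside the main loop.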
Georgi Angelov

1 Answer


I would suggest not using blob area as the way to distinguish pedestrians from vehicles. You have already pointed out the obvious drawback: more distant cars would certainly be mistaken for pedestrians.

You need to add more sophisticated logic between steps 2 and 3, e.g.:

  • A person detector based on HOG - see http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf. It is already implemented in OpenCV and is empirically known to have good accuracy for person detection.
  • A car detector / car-feature detector (wheels, number plate, etc.) based on a Haar classifier - you need to train your own with the tools OpenCV provides. With a good classifier for some car feature, you increase the accuracy of the overall detection by increasing the number of correct car detections. A rough sketch of how both detectors could be applied to blob regions follows this list.
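As a rough sketch of how such classifiers could be run on the regions around detected blobs: OpenCV ships a default HOG people detector, while the car cascade file (`cars.xml` below) is a placeholder for a classifier you would have to train yourself. The `margin` value is an arbitrary assumption.

    import cv2

    # Default HOG-based people detector shipped with OpenCV.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    # Placeholder: a Haar cascade for cars that you would train yourself with
    # OpenCV's cascade-training tools ('cars.xml' is a hypothetical file).
    car_cascade = cv2.CascadeClassifier('cars.xml')

    def classify_blob(frame, kp, margin=1.5):
        """Classify the region around a blob keypoint as 'pedestrian', 'vehicle' or 'unknown'."""
        x, y = int(kp.pt[0]), int(kp.pt[1])
        r = int(kp.size * margin)
        roi = frame[max(0, y - r):y + r, max(0, x - r):x + r]
        if roi.size == 0:
            return 'unknown'

        # HOG people detection on the blob region.
        people, _ = hog.detectMultiScale(roi, winStride=(8, 8), padding=(8, 8), scale=1.05)
        if len(people) > 0:
            return 'pedestrian'

        # Haar cascade car detection on the grayscale blob region.
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        cars = car_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
        if len(cars) > 0:
            return 'vehicle'

        return 'unknown'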

Including at least one of these in your final solution is, in my view, a must-have for good accuracy; including both will improve accuracy even further.

marol
  • Marol, would you advise not using blob detection at all? Could the HOG detector and Haar classifier be used after the blob detection? I would assume there is a benefit to using blob detection plus HOG detection or a Haar classifier. – Georgi Angelov Jul 19 '15 at 02:17
  • What I suggested was a way to determine whether a detected blob is a car or a pedestrian. So I think you can combine blob detection with the additional classifiers so that cars and pedestrians are not mixed up. – marol Jul 19 '15 at 17:08
  • It makes sense. Thanks! – Georgi Angelov Jul 20 '15 at 04:17