I am trying to build a car detector for UAV images with Python 2.7 and OpenCV 2.4.13. The goal is to detect cars seen from above, in any orientation, in urban environments. I am facing both execution-time and accuracy problems.

The detector works fine when I use it with some cascades that I obtained from the internet:

  • Banana classifier (obviously it does not detect cars, but it detects the objects it recognises as bananas): coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html
  • Face detection cascades from OpenCV (same behaviour as banana classifier)

For the detection itself, I'm using detectMultiScale() with scaleFactor = 1.1-1.2 and minNeighbors = 3.

The detection runs in a reasonable time (a few seconds) on a 4000x3000 px image.
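
This is roughly how the detection call looks (a minimal sketch; the cascade and image file names are placeholders, not my actual paths):

import cv2

# Load a trained cascade and the UAV image (placeholder file names)
cascade = cv2.CascadeClassifier('cars_cascade.xml')
img = cv2.imread('uav_image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor controls how much the image is shrunk between pyramid levels;
# minNeighbors is how many overlapping hits are needed to keep a detection
objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

# Draw one rectangle per detection and save the result
for (x, y, w, h) in objects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('detections.jpg', img)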

The problems arise when I try to use my own trained classifiers: the results are bad, and the detection takes very long (more than half an hour).

For training, I extracted both positive and negative images from a big orthomosaic (which I downscaled a few times) that contains a parking lot with lots of cars. I extracted a total of 50 cars (25x55 pixels), which I then mirrored horizontally, resulting in 100 positive images, plus 2119 negative images (60x60 pixels) from the same orthomosaic. I call this the "complete set" of images. From that set, I created a subset (4 positives and 35 negatives), which I call the "Dummy set":

Positive image example 1

Negative image example 1
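
The horizontal mirroring was done with a small script along these lines (a sketch; the folder names are placeholders, not my actual paths):

import cv2
import glob
import os

# Mirror every positive crop horizontally to double the positive set
for path in glob.glob('positives/*.png'):
    img = cv2.imread(path)
    flipped = cv2.flip(img, 1)  # 1 = flip around the vertical axis
    name, ext = os.path.splitext(os.path.basename(path))
    cv2.imwrite(os.path.join('positives', name + '_mirror' + ext), flipped)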

For training, I used opencv_createsamples and opencv_traincascade. I created 6000 samples from the 100 positive images, rotating the cars from 0 to 360 degrees:

perl bin/createsamples.pl positives.txt negatives.txt samples 6000 "opencv_createsamples -bgcolor 0 -bgthresh 0 -maxxangle 0.5 -maxyangle 0.5 -maxzangle 6.28 -maxidev 40 -w 60 -h 60"

So now I have 6000 60x60-pixel sample images of cars in any orientation over random backgrounds.

Then I executed mergevec.py to create the samples.vec file and ran the training application opencv_traincascade:

python mergevec.py -v samples/ -o samples.vec
opencv_traincascade -data classifier -vec samples.vec -bg negatives.txt -numStages 20 -minHitRate 0.999 -maxFalseAlarmRate 0.5 -numPos 3700 -numNeg 2119 -w 60 -h 60 -mode ALL -precalcValBufSize 3096 -precalcIdxBufSize 3096
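
To check that the merge produced what I expected, I inspected the .vec header with a small script (a sketch; it assumes the standard createsamples vec layout of an int32 sample count, an int32 per-sample size, and two unused int16 fields):

import struct

# Read the 12-byte header written by opencv_createsamples / mergevec.py
with open('samples.vec', 'rb') as f:
    count, vec_size = struct.unpack('<ii', f.read(8))
    f.read(4)  # skip the two unused int16 fields

print 'samples: %d, values per sample: %d (should be w*h = 3600)' % (count, vec_size)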

With this method, I have trained four classifiers: two using the complete set and two using the Dummy set, one LBP and one HAAR cascade for each set. The results I get are the following:

  1. Dummy set, LBP: Training stopped at stage 1. Fast detection, no objects detected.
  2. Dummy set, HAAR: Training stopped at stage 1. Detection takes forever (or at least more than half an hour); I interrupted the process because it was obviously not working.
  3. Complete set, LBP: Training stopped at stage 6. Very slow detection (1-2 minutes on a 500x400 pixel image, using scaleFactor = 2). It detects very few objects (2), none of them cars, even though there are at least 10 cars in the image, which is also the same image used for training.
  4. Complete set, HAAR: I stopped training at the 4th stage to test it. Same behaviour as with the Dummy set.

What am I doing wrong? Since the banana and face cascades work in a reasonable time and detect objects, the problem is obviously in my cascades, but I cannot figure out why.

I really appreciate your help. Thanks in advance, Federico

  • Update: I have trained an intermediate set using HAAR features. It works very slowly, and it does not detect anything but an object in the center of the image, regardless of the image I use. Apparently the cascade did not learn any features :( – Federico Sep 05 '16 at 15:51

1 Answer

I can't say exactly, but I have an idea why you can't train HAAR (or LBP) cascades to detect arbitrarily oriented cars.

These cascades work well when the detected objects have approximately the same shape and colour (brightness). A frontally oriented face is a good example of such an object. But they work much worse when the face has a different orientation or colour (it's not a joke: the standard Haar cascades from OpenCV have problems detecting people with dark skin). These problems are largely a consequence of the training set, which contains only frontally oriented European faces; but if we try to add faces of every colour and spatial orientation to the training set, we run into the same problems as you.

During the training process, at each stage the training algorithm tries to find a set of features (HAAR or LBP) that separates the negative and positive samples. If the detected object has a complicated and variable shape, the number of required features becomes very large. A large number of required features means the cascade classifier runs very slowly, or cannot be trained at all.
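
As a rough way to see this, you can count the weak classifiers per stage in the trained cascade file (a sketch; it assumes the new-style cascade.xml written by opencv_traincascade, with a <stages> node containing one <maxWeakCount> per stage):

import xml.etree.ElementTree as ET

# Count the boosted weak classifiers in every stage of the trained cascade
tree = ET.parse('classifier/cascade.xml')
stages = tree.getroot().find('cascade/stages')

for i, stage in enumerate(stages):
    weak_count = int(stage.find('maxWeakCount').text)
    print 'stage %d: %d weak classifiers' % (i, weak_count)

Many weak classifiers per stage is a sign that the chosen features struggle to separate your positives from the negatives.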

So HAAR (LBP) cascades can't really be used to detect objects with such variable shape. You could look towards deep convolutional neural networks instead; as far as I know, they can solve these problems.

ErmIg
  • Hi ErmIg. I finally figured out how to train a cascade that works. It appears that the training window size has to be smaller than the size of the negative images. I think this is because of the OpenCV training algorithm, which overlays positive images onto the negatives to increase robustness, but I am not sure. For now, it only works for cars in a vertical position, but I am now trying to train the cascade with rotated versions of the positive samples. I will post updates. Thank you very much. – Federico Sep 08 '16 at 17:58