I'm using OpenCV's Cascade Classifier to detect faces. I followed the webcam tutorial and was able to use detectMultiScale to find and track my face while streaming video from my laptop's webcam.
But when I take a photo of myself with that same webcam, load the image into OpenCV, and apply detectMultiScale to it, the Cascade Classifier can't detect any faces in the static image!
The face would certainly have been detected if that image had been a single frame in my webcam stream, but when I process that one image on its own, nothing is detected.
Here's the code I'm using (I've picked out just the relevant lines):
Code in Common:
String face_cascade_name = "/path/to/data/haarcascades/haarcascade_frontalface_alt.xml";
CascadeClassifier face_cascade;
Mat imagePreprocessing(Mat frame) {
    Mat processed_frame;
    cvtColor( frame, processed_frame, COLOR_BGR2GRAY );
    equalizeHist( processed_frame, processed_frame );
    return processed_frame;
}
For webcam-stream face detection:
int detectThroughWebCam() {
    VideoCapture capture;
    Mat frame;
    if( !face_cascade.load( face_cascade_name ) ){ printf("--(!)Error loading face cascade\n"); return -1; }
    //-- 2. Read the video stream
    capture.open( -1 );
    if( !capture.isOpened() ) { printf("--(!)Error opening video capture\n"); return -1; }
    while( capture.read(frame) )
    {
        if( frame.empty() ) {
            printf(" --(!) No captured frame -- Break!");
            break;
        }
        //-- 3. Apply the classifier to the frame
        Mat processed_image = imagePreprocessing( frame );
        vector<Rect> faces;
        face_cascade.detectMultiScale( processed_image, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE|CV_HAAR_FIND_BIGGEST_OBJECT, Size(30, 30) );
        if (faces.size() > 0) cout << "SUCCESS" << endl;
        int c = waitKey(10);
        if( (char)c == 27 ) { break; } // escape
    }
    return 0;
}
For my static image face detection:
void staticFaceDetection() {
    Mat image = imread("path/to/jpg/image");
    Mat processed_frame = imagePreprocessing(image);
    std::vector<Rect> faces;
    //-- Detect faces
    face_cascade.detectMultiScale( processed_frame, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE|CV_HAAR_FIND_BIGGEST_OBJECT, Size(30, 30) );
    if (faces.size() > 0) cout << "SUCCESS" << endl;
}
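One thing I could add to rule out a failed image load is a guard after imread, since imread silently returns an empty Mat when the file can't be read (just a sketch, reusing my placeholder path):

void staticFaceDetection() {
    Mat image = imread("path/to/jpg/image");
    // imread returns an empty Mat on failure rather than throwing
    if( image.empty() ) {
        printf("--(!)Error reading image\n");
        return;
    }
    // ... rest as above
}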
In my eyes, these two processes are identical (the only difference being where I acquire the original image), yet the video-stream version regularly detects faces, while the static version never finds one.
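To rule out any subtle difference between the two paths, I'm also considering factoring the detection into a single helper that both versions call (a sketch; detectFaces is just a name I made up):

// Hypothetical helper so the webcam and static paths share exactly one detection code path
vector<Rect> detectFaces(const Mat& frame) {
    Mat processed = imagePreprocessing(frame);
    vector<Rect> faces;
    face_cascade.detectMultiScale( processed, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE|CV_HAAR_FIND_BIGGEST_OBJECT, Size(30, 30) );
    return faces;
}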
Am I missing something here?