
This is my first time over here, but I really searched for this and sadly I haven't found help.

I have a facial recognition algorithm, and I am currently trying to improve its efficiency.

After some investigation, I concluded that dlib.get_frontal_face_detector() was the function slowing down my code.

My approach was then to remove the background of my frames and extract only the differences between two images. After that, I have pieces of the image, cropped from the full original one, and these pieces containing only the differences are much smaller (e.g. full image: 1520 x 2592 pixels; cropped image with a face: 150 x 200 pixels).

Note: the camera is very far from the people being recognized, so the faces are tiny in the images and most of the frame is useless; that is why I decided to remove the useless parts.
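For context, im_bw in the code further below is a binarized difference mask. Here is a minimal sketch of one common way to build such a mask with cv2.absdiff and cv2.threshold (the frame file names are placeholders, and this is not necessarily the exact background-removal step used here):

import cv2

# placeholder frames: a background/reference shot and the current frame
background = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("current_frame.jpg", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(background, frame)                        # pixel-wise difference
_, im_bw = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)   # binarize the changes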

BUT here things get strange: when I pass the tiny cropped face to dlib.get_frontal_face_detector(), it SOMETIMES doesn't detect it! (There are times when it does, which is weird!) It sounds strange because when I pass the full image to the face detector, at the same resolution, just with a larger shape, it detects the same face!

I think I am missing some theoretical information here...

import cv2
import dlib

detector = dlib.get_frontal_face_detector()

# im_bw is the binarized difference mask; img2_n is the original full-resolution
# frame; inv_coef scales the mask coordinates back up to the original resolution.
# (In OpenCV 3.x, findContours returns image, contours, hierarchy.)
new_image = []
im2, contours, hierarchy = cv2.findContours(im_bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    [x, y, w, h] = cv2.boundingRect(contour)
    # discard pieces that are too small to contain a face
    if h < 20 or w < 20:
        continue
    new_image.append(img2_n[y*inv_coef:y*inv_coef + h*inv_coef,
                            x*inv_coef:x*inv_coef + w*inv_coef])

for piece in new_image:
    rgb_img = cv2.cvtColor(piece, cv2.COLOR_BGR2RGB)
    dets = detector(rgb_img, 1)

Above is the extractor of the smaller images and how I use them with the detector. Below is the simple code where I apply the full image to the same detector.

full_img = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)
dets = detector(full_img, 1)

Could anyone who understands a bit more about dlib, OpenCV and Python help me?

Note¹: the images that are not being recognized are larger than the threshold defined in the for loop to throw away pieces of the image that are too small.

Nilson T. P.
  • Apparently, DLib's default HOG-based frontal face detector has a hard time detecting small faces or images containing only the face. Check out [this blog post](https://www.learnopencv.com/face-detection-opencv-dlib-and-deep-learning-c-python/), which compares different face detection models in OpenCV and DLib. – sgarizvi Dec 20 '18 at 09:38
  • Hi Nilson, check this blog post to learn more about the face detectors in Dlib and OpenCV: https://www.pyimagesearch.com/2018/04/02/faster-facial-landmark-detector-with-dlib/ – Peshmerge Dec 20 '18 at 11:58
  • Thank you for the help! I had already read it, but I seriously missed this information, LOL. For testing purposes I increased the upscale argument of detector(img, scale) on the smaller images where it hadn't detected the faces, and then it worked (see the first sketch below)! Sadly, it slowed the code down... Now I have to check whether the efficiency gain still makes it worth applying. – Nilson T. P. Dec 20 '18 at 12:04
  • @NilsonT.P. I would suggest using OpenCV's deep learning based face detector (Caffe-trained SSD + ResNet-10 model). It is quite accurate and real-time even on CPU; the linked blog post also suggests that (sketched below as well). – sgarizvi Dec 20 '18 at 14:05
  • Maybe you should resize the cropped images before using them with the detector; as in https://www.pyimagesearch.com/2018/04/02/faster-facial-landmark-detector-with-dlib/, he used `imutils.resize(frame, width=400)` (see the resizing sketch below). – Ha Bom Dec 29 '18 at 09:38
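A minimal sketch of the upscale change mentioned in the comments above: the second argument to dlib's detector is the number of times the image is upsampled before detection, and the HOG detector's sliding window is roughly 80 x 80 pixels, so very small faces are missed unless the image is upsampled. The file name below is a placeholder for one of the small crops:

import cv2
import dlib

detector = dlib.get_frontal_face_detector()

crop = cv2.imread("face_crop.jpg")               # placeholder: one small cropped piece
rgb_img = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)

# upsample twice instead of once so a tiny face becomes large enough for the
# detector's sliding window; more upsampling also means slower detection
dets = detector(rgb_img, 2)
print(len(dets), "face(s) found")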
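Alternatively, following Ha Bom's suggestion, the crop can be enlarged explicitly before detection instead of letting dlib upsample internally. A minimal sketch, where the target width of 400 is just an example:

import cv2
import dlib
import imutils

detector = dlib.get_frontal_face_detector()

crop = cv2.imread("face_crop.jpg")               # placeholder: one small cropped piece
resized = imutils.resize(crop, width=400)        # enlarge while keeping the aspect ratio
rgb_img = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)

dets = detector(rgb_img, 0)                      # no internal upsampling needed now
# the returned rectangles are in the resized image's coordinates, so scale
# them by crop.shape[1] / 400 if positions in the original crop are needed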
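And a rough sketch of sgarizvi's suggestion to use OpenCV's deep learning face detector (ResNet-10 SSD trained in Caffe). The prototxt and caffemodel file names below are the ones distributed with OpenCV's face detector sample; adjust the paths to wherever the files were downloaded, and tune the confidence threshold as needed:

import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

img = cv2.imread("frame.jpg")                    # placeholder input frame
(h, w) = img.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:                         # keep only confident detections
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (x1, y1, x2, y2) = box.astype(int)
        # (x1, y1, x2, y2) is a detected face rectangle in the original image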

0 Answers