
I'm using Core Image to detect faces in pictures. It works great on the simulator, but on my iPhone 5 it almost never works with pictures taken with the iPhone's camera (it does work with pictures picked from the web).

The following code shows how I detect the faces. For every picture, the application logs

step 1 : image will be processed

But it only logs

step 2 : face detected

for a few of them, whereas almost every face is detected on the simulator or if I use pictures from the web.

var context: CIContext = {
    return CIContext(options: nil)
}()
// High-accuracy face detector backed by the shared CIContext
let detector = CIDetector(ofType: CIDetectorTypeFace,
    context: context,
    options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

let imageView = mainPic

for var index = 0; index < picsArray.count; index++ {

    // Only process entries that have not been handled yet
    if !(picsArray.objectAtIndex(index).objectAtIndex(1) as! Bool) {

        var wholeImageData: AnyObject = picsArray.objectAtIndex(index)[0]

        if wholeImageData.isKindOfClass(NSData) {

            let wholeImage: UIImage = UIImage(data: wholeImageData as! NSData)!
            if wholeImage.isKindOfClass(UIImage) {

                NSLog("step 1 : image will be processed")

                let processedImage = wholeImage
                let inputImage = CIImage(image: processedImage)
                var faceFeatures: [CIFaceFeature]!
                // Pass the EXIF orientation to the detector when the image carries one
                if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
                    faceFeatures = detector.featuresInImage(inputImage, options: [CIDetectorImageOrientation: orientation]) as! [CIFaceFeature]
                } else {
                    faceFeatures = detector.featuresInImage(inputImage) as! [CIFaceFeature]
                }

                // Flip Core Image's coordinate system so the face rectangles map to UIKit coordinates
                let inputImageSize = inputImage.extent().size
                var transform = CGAffineTransformIdentity
                transform = CGAffineTransformScale(transform, 1, -1)
                transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height)

                for faceFeature in faceFeatures {

                    NSLog("step 2 : face detected")
                    // ...

I've been looking for a solution for three hours now, and I'm quite desperate :).

Any suggestion would be really appreciated!

Thanks in advance.

Randy
  • Are the images the same resolution? I'm not sure which algorithm Apple uses, but multi-scale detection can be a problem with some systems. Have you tried pulling the images in which the phone doesn't detect faces, putting them in the simulator and then seeing if it works? – ABC Oct 29 '15 at 15:55
  • Yes, it always works on the simulator, even with pictures taken from the iPhone's camera – Randy Oct 29 '15 at 16:06
  • So the variable faceFeatures is basically returning an empty array when you run on the phone? – ABC Oct 29 '15 at 16:09
  • Yes, exactly, if I use pictures from the iPhone's camera (it works with pictures found on the web) – Randy Oct 29 '15 at 16:15
  • OK, dumb question: are the simulator and your phone running the same target OS? – ABC Oct 29 '15 at 16:19
  • Yes :). I actually had the same problem on iOS 8 and I still have it on iOS 9 – Randy Oct 29 '15 at 16:20
  • It's worth pointing out that it sometimes works (really, really rarely) when the face is well exposed to light and perfectly in front of the camera (no angle) – Randy Oct 29 '15 at 16:23
  • I had a few guesses of where to start. It's possible that the conversion to Core Image is for some reason different on the simulator and your phone, though why that would happen if they're running the exact same OS doesn't make much sense. Another possibility is that the face detection algorithm is somehow different on the two systems. What you're describing with the face frontal and well lit would imply that it's using Haar cascade filters, which can be a bit finicky. But this also seems fishy, which makes me again think the images are somehow different on the two systems. – ABC Oct 29 '15 at 16:36
  • Something I totally forgot to mention, and I'm terribly sorry because it's important: I encountered the exact same issue using the OpenCV framework – Randy Oct 29 '15 at 16:48
  • Hm, that is interesting then, and rules out the UIImage to Core Image conversion. It could still be the UIImage or else just something in the images. Any chance you could show the images that are failing? – ABC Oct 29 '15 at 17:31
  • No sorry, but I just noticed something else: if I put the same pictures in my images.assets (directly into my project), then the faces are detected. So I think there's something happening when the pictures are picked from the photo gallery (using UIImagePickerController) – Randy Oct 30 '15 at 13:52

1 Answer


I found a really weird way to solve my problem.

By setting the allowsEditing property of my UIImagePickerController to true when picking the pictures, everything works fine... I can't understand why, but it works.
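For reference, here is roughly what the picking code looks like with that change. This is only a minimal sketch using the Swift 2-era UIKit API; the PickerViewController class and the way the picked image is handed back to the face detection code are assumptions, not my exact project code:

    import UIKit

    // Sketch: a view controller that presents the picker with allowsEditing
    // enabled and uses the edited image it returns.
    class PickerViewController: UIViewController,
        UIImagePickerControllerDelegate, UINavigationControllerDelegate {

        func pickImage() {
            let picker = UIImagePickerController()
            picker.sourceType = .PhotoLibrary
            picker.allowsEditing = true   // the workaround: let the picker return an edited copy
            picker.delegate = self
            presentViewController(picker, animated: true, completion: nil)
        }

        func imagePickerController(picker: UIImagePickerController,
            didFinishPickingMediaWithInfo info: [String : AnyObject]) {

            // Prefer the edited image; fall back to the original one.
            let picked = (info[UIImagePickerControllerEditedImage]
                ?? info[UIImagePickerControllerOriginalImage]) as? UIImage

            if let image = picked {
                // hand the image to the face detection code shown in the question
            }
            picker.dismissViewControllerAnimated(true, completion: nil)
        }
    }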

Randy