
I am trying to write a routine that takes a UIImage and returns a new UIImage that contains just the face. This seems like it should be very straightforward, but my brain is having trouble getting around the Core Image vs. UIImage coordinate spaces.

Here's the basics:

- (UIImage *)imageFromImage:(UIImage *)image inRect:(CGRect)rect {
    CGImageRef sourceImageRef = [image CGImage];
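    // Note: CGImageCreateWithImageInRect interprets the rect in the bitmap's
    // pixel space, with the origin at the top-left corner of the image.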
    CGImageRef newImageRef = CGImageCreateWithImageInRect(sourceImageRef, rect);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    return newImage;
}


-(UIImage *)getFaceImage:(UIImage *)picture {
  CIDetector  *detector = [CIDetector detectorOfType:CIDetectorTypeFace 
                                             context:nil 
                                             options:[NSDictionary dictionaryWithObject: CIDetectorAccuracyHigh forKey: CIDetectorAccuracy]];

  CIImage *ciImage = [CIImage imageWithCGImage: [picture CGImage]];
  NSArray *features = [detector featuresInImage:ciImage];

  // For simplicity, I'm grabbing the first one in this code sample,
  // and we can all pretend that the photo has one face for sure. :-)
  CIFaceFeature *faceFeature = [features objectAtIndex:0];

  return [self imageFromImage:picture inRect:faceFeature.bounds];
}

The image that is returned is cropped from the wrong part of the picture, as if it came from a vertically flipped version of the original. I've tried adjusting faceFeature.bounds using something like this:

CGAffineTransform t = CGAffineTransformMakeScale(1.0f,-1.0f);
CGRect newRect = CGRectApplyAffineTransform(faceFeature.bounds,t);

... but that gives me results outside the image.

I'm sure there's something simple to fix this, but short of calculating the offset from the bottom of the image myself and creating a new rect using that as the Y origin, is there a "proper" way to do this?

Thanks!

Tim Sullivan

3 Answers


It's much easier and less messy to just use CIContext to crop your face from the image. Something like this:

CGImageRef cgImage = [_ciContext createCGImage:[CIImage imageWithCGImage:inputImage.CGImage] fromRect:faceFeature.bounds];
UIImage *croppedFace = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // createCGImage:fromRect: returns a reference you own

Here inputImage is your UIImage, _ciContext is a CIContext, and faceFeature is the CIFaceFeature you get back from CIDetector's featuresInImage: method.
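For context, here's a rough end-to-end sketch of that approach. The method name, the context creation, and the empty-result check are my own additions rather than part of the original answer:

// Hypothetical helper combining the question's detector code with this answer's crop.
- (UIImage *)faceImageFromImage:(UIImage *)inputImage {
    // Creating a CIContext is expensive; reuse one if you call this often.
    CIContext *ciContext = [CIContext contextWithOptions:nil];

    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    CIImage *ciImage = [CIImage imageWithCGImage:inputImage.CGImage];
    NSArray *features = [detector featuresInImage:ciImage];
    if (features.count == 0) {
        return nil; // no face found
    }
    CIFaceFeature *faceFeature = [features objectAtIndex:0];

    // Both the CIImage and faceFeature.bounds are in Core Image's
    // bottom-left-origin space, so no coordinate flipping is needed here.
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:faceFeature.bounds];
    UIImage *croppedFace = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return croppedFace;
}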

jlajlar

Since there doesn't seem to be a simple way to do this, I just wrote some code to do it:

// Convert the face bounds from Core Image's bottom-left-origin space to
// UIKit's top-left-origin space by flipping Y against the image height.
CGRect newBounds = CGRectMake(faceFeature.bounds.origin.x,
                              picture.size.height - faceFeature.bounds.origin.y - faceFeature.bounds.size.height,
                              faceFeature.bounds.size.width,
                              faceFeature.bounds.size.height);

This worked a charm.
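For what it's worth, the Y value is just the face's distance from the top of the image: Core Image measures from the bottom-left corner and UIKit from the top-left, so the flipped Y is imageHeight - y - faceHeight. The same conversion can also be written as a transform, roughly like this (a sketch assuming an unrotated image at scale 1.0, using the question's variable names):

// Flip Core Image's bottom-left-origin coordinates into UIKit's top-left-origin space.
CGAffineTransform flip = CGAffineTransformMakeScale(1.0f, -1.0f);
flip = CGAffineTransformTranslate(flip, 0.0f, -picture.size.height);
CGRect flippedBounds = CGRectApplyAffineTransform(faceFeature.bounds, flip);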

Tim Sullivan
  • I'm looking into the same thing, could you explain the calculation you performed for the y axis please? And what is 'largestface'? – Rory Lester Aug 20 '12 at 13:52

There is no simple way to achieve this. The problem is that images from the iPhone camera are always in portrait mode, and metadata settings are used to get them to display correctly. You will also get better accuracy in your face detection call if you tell it the rotation of the image beforehand. Just to make things complicated, you have to pass it the image orientation in EXIF format.

Fortunately, there is an Apple sample project called SquareCam that covers all of this; I suggest you check it out for the details.
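In practice that means passing the CIDetectorImageOrientation option to featuresInImage:options:. A rough sketch follows; the UIImageOrientation-to-EXIF mapping below covers only the non-mirrored cases and is my own summary, so verify it against SquareCam:

// EXIF orientation values: 1 = up, 3 = upside down, 6 = rotated 90° CW, 8 = rotated 90° CCW.
int exifOrientation;
switch (picture.imageOrientation) {
    case UIImageOrientationUp:    exifOrientation = 1; break;
    case UIImageOrientationDown:  exifOrientation = 3; break;
    case UIImageOrientationLeft:  exifOrientation = 8; break;
    case UIImageOrientationRight: exifOrientation = 6; break;
    default:                      exifOrientation = 1; break; // mirrored cases omitted
}

NSArray *features = [detector featuresInImage:ciImage
                                      options:@{CIDetectorImageOrientation : @(exifOrientation)}];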

Tark
  • Yeah, I'm already accounting for the rotation of the image. My problem is related to the different origins used by UIImage and the CG routines. – Tim Sullivan Feb 23 '12 at 22:50