
This seems like a simple task, yet it is driving me nuts. Is it possible to convert a UIView containing an AVCaptureVideoPreviewLayer as a sublayer into an image to be saved? I want to create an augmented reality overlay and have a button save the picture to the camera roll. Holding the power button + home key captures a screenshot to the camera roll, meaning that all of my capture logic is working AND the task is possible. But I cannot seem to make it work programmatically.

I'm capturing a live preview of the camera's image using AVCaptureVideoPreviewLayer. All of my attempts to render the image fail:

previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
// start the session, etc...


// this saves a white screen
- (IBAction)saveOverlay:(id)sender {
    NSLog(@"saveOverlay");

    UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
    // also tried: UIGraphicsBeginImageContext(scrollView.frame.size);

    [previewLayer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];

    // [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageWriteToSavedPhotosAlbum(screenshot, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
}

//this renders everything, EXCEPT for the preview layer, which is blank.

[appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];

I've read somewhere that this may be due to security restrictions on the iPhone. Is this true?

Just to be clear: I don't want to save only the image from the camera. I want to save the transparent overlay superimposed over the camera preview, producing a composite image. Yet for some reason I cannot make it work.

Alex Stone

2 Answers


I like @Roma's suggestion of using GPUImage - great idea... However, if you want a pure Cocoa Touch approach, here's what to do:

Implement AVCaptureVideoDataOutputSampleBufferDelegate
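Before this delegate method can fire, the session needs a video data output attached to it. A minimal setup sketch (not part of the original answer; `captureSession` is assumed to already exist, and the 32BGRA pixel format matches both xaphod's comment below and the bitmap-context flags used in imageFromSampleBuffer:):

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// BGRA matches the kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst
// flags used when the sample buffer is turned into a CGImage below
videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey :
                                   @(kCVPixelFormatType_32BGRA) };
dispatch_queue_t sampleQueue = dispatch_queue_create("sampleBufferQueue", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:sampleQueue];
if ([captureSession canAddOutput:videoOutput]) {
    [captureSession addOutput:videoOutput];
}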

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a correctly oriented UIImage from the sample buffer data
    if (_captureFrame)
    {
        [captureSession stopRunning];

        _captureFrame = NO;
        UIImage *image = [ImageTools imageFromSampleBuffer:sampleBuffer];
        // rotate: is a UIImage category method (see the WBImage link in the comments)
        image = [image rotate:UIImageOrientationRight];

        _frameCaptured = YES;

        if (delegate != nil)
        {
            [delegate cameraPictureTaken:image];
        }
    }
}

Convert the sample buffer to a UIImage as follows:

+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer 
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); 

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, 
                                             bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context); 
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Free up the context and color space
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}

Blend the UIImage with the overlay

  • Now that you have the UIImage, add it to a new UIView (a sketch follows this list).
  • Add the overlay on top as a subview.
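A minimal sketch of that composition, assuming `capturedImage` is the UIImage from the delegate callback and `overlayView` is the existing transparent overlay (both names are assumptions):

UIView *compositeView = [[UIView alloc] initWithFrame:overlayView.bounds];
UIImageView *cameraImageView = [[UIImageView alloc] initWithImage:capturedImage];
cameraImageView.frame = compositeView.bounds;
cameraImageView.contentMode = UIViewContentModeScaleAspectFill;

[compositeView addSubview:cameraImageView]; // camera frame at the bottom
[compositeView addSubview:overlayView];     // transparent overlay on top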

Capture the new UIView

+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, [UIScreen mainScreen].scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
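Saving the result to the camera roll then works the same way as in the question (a usage sketch, assuming the helpers above live on the same `ImageTools` class as imageFromSampleBuffer:):

UIImage *composite = [ImageTools imageWithView:compositeView];
UIImageWriteToSavedPhotosAlbum(composite, self,
                               @selector(image:didFinishSavingWithError:contextInfo:), nil);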
Jasper Blues
  • I think this is what I am looking for. Will try to get this running and let you know how it goes. Thanks for the quick answer! – Nitin Alabur Jun 09 '13 at 14:55
  • are you using a UIImage extension for image = [image rotate:UIImageOrientationRight]; ? – Nitin Alabur Jun 12 '13 at 23:57
  • Yes, that step is optional. However, let me give you the code. – Jasper Blues Jun 13 '13 at 00:13
  • Here's the rotate utility: https://github.com/cyclestreets/ios/blob/master/lib/utilities/WBImage.h – Jasper Blues Jun 13 '13 at 00:17
  • Jasper, thank you very much, your answer and comments were very helpful in getting it working! – Nitin Alabur Jun 15 '13 at 14:21
  • If you get errors about the context not being created like I did, make sure you set your `AVCaptureVideoDataOutput.videoSettings`, like this: `videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };` – xaphod Sep 27 '17 at 01:57

I can advise you to try GPUImage.

https://github.com/BradLarson/GPUImage

It uses OpenGL, so it's rather fast. It can process pictures from the camera and add filters to them (there are a lot of them), including edge detection, motion detection, and far more.

It's like OpenCV, but in my experience GPUImage is easier to connect to your project, and the language is Objective-C.
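For example, here is a rough sketch of blending the live camera feed with a transparent overlay image and grabbing the blended frame as a UIImage. These calls follow the current GPUImage API; the sketch (and the `overlayImage` variable) is an illustration, not part of the original answer:

GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

// Blend the camera feed with a still overlay image
GPUImagePicture *overlay = [[GPUImagePicture alloc] initWithImage:overlayImage];
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
[videoCamera addTarget:blendFilter];
[overlay addTarget:blendFilter];
[overlay processImage];
[videoCamera startCameraCapture];

// Later, to grab the blended frame as a UIImage:
[blendFilter useNextFrameForImageCapture];
UIImage *snapshot = [blendFilter imageFromCurrentFramebuffer];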

A problem could appear if you decide to use Box2D for physics: it uses OpenGL too, and you will need to spend some time before these two frameworks stop fighting. :)

Roma
  • I guess what the OP and I are trying to do is get a live video feed using AVCaptureVideoPreviewLayer and add a scaled image on top. I am trying to add a scaled image over the live video feed based on face detection, and the screenshot saved is only that of the overlay and not the AVCaptureVideoPreviewLayer. – Nitin Alabur Jun 09 '13 at 14:02
  • In GPUImage you're adding filters on layers, so I'm pretty sure there is a way to save only the current layer (there is a method in GPUImage to make a UIImage from a frame). If you want to get just what is on the display, there is a way to take a screenshot programmatically. I can't remember it now, but I've seen it on Stack Overflow. – Roma Jun 09 '13 at 14:14
  • Roma, thank you very much for your answer. Although I didn't use GPUImage, I'll certainly be using it soon. Thanks for posting it as an alternative option. – Nitin Alabur Jun 15 '13 at 14:23