
I know this question has been answered quite a bit, but my situation seems different. I'm trying to write a top-level function where I can take a screenshot of my app at any time, whether it's showing OpenGL ES or UIKit content, and I won't have access to the underlying classes to make any changes.

The code I've been trying works for the UIKit parts, but returns a black screen for the OpenGL ES parts:

    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);


            // Render the window's layer first; any OpenGL ES content comes out
            // black here, which is why the GL snapshots are composited on top below
            [[window layer] renderInContext:context];

            for (UIView *subview in window.subviews)
            {
                // Only CAEAGLLayer responds to drawableProperties, so this
                // identifies the OpenGL ES views
                CAEAGLLayer *eaglLayer = (CAEAGLLayer *) subview.layer;
                if([eaglLayer respondsToSelector:@selector(drawableProperties)]) {
                    NSLog(@"responds");
                    /*eaglLayer.drawableProperties = @{
                                                     kEAGLDrawablePropertyRetainedBacking: [NSNumber numberWithBool:YES],
                                                     kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
                                                     };*/
                    UIImageView *glImageView = [[UIImageView alloc] initWithImage:[self snapshotx:subview]];
                    glImageView.transform = CGAffineTransformMakeScale(1, -1);
                    [glImageView.layer renderInContext:context];

                    //CGImageRef iref = [self snapshot:subview withContext:context];
                    //CGContextDrawImage(context, CGRectMake(0.0, 0.0, 640, 960), iref);
                }
            }

            // Restore the context once per window, matching the save above
            CGContextRestoreGState(context);
        }
    }

    // Retrieve the screenshot image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    return image;

and

- (UIImage*)snapshotx:(UIView*)eaglview
{
    GLint backingWidth, backingHeight;

    //glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
    // don't know how to bind the right renderbuffer here, since I can't
    // directly access the view's internals

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate (
                                     width,
                                     height,
                                     8,
                                     32,
                                     width * 4,
                                     colorspace,
                                     // Fix from Apple implementation
                                     // (was: kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast).
                                     kCGBitmapByteOrderDefault,
                                     ref,
                                     NULL,
                                     true,
                                     kCGRenderingIntentDefault
                                     );

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
    {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}

Any advice on how to mix the two without being able to modify the classes in the rest of the application?

Thanks!

Brian L. Clark

2 Answers


I see what you tried to do there, and it is not really a bad concept. There does seem to be one big problem though: you cannot just call glReadPixels at any time you want. First, you should make sure the buffer is actually filled with the pixels you need, and second, it has to be called on the same thread that the GL rendering runs on...

If the GL views are not yours, you might have big trouble calling that screenshot method: you need some way of triggering the view to bind its internal context, and if it is animating, you will have to know when a cycle is done to ensure that the pixels you receive are the same as the ones presented on the view.
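To make that concrete, this is roughly what has to happen before a glReadPixels call can return anything useful. The `context` and `colorRenderbuffer` accessors here are hypothetical; a view you don't own may not expose them at all:

// Must run on the thread the GL view renders on.
// glView.context and glView.colorRenderbuffer are hypothetical accessors.
[EAGLContext setCurrentContext:glView.context];
glBindRenderbufferOES(GL_RENDERBUFFER_OES, glView.colorRenderbuffer);

// Only after the view has finished drawing a frame, and before it calls
// -presentRenderbuffer:, does the renderbuffer hold the pixels you want.
GLint width = 0, height = 0;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &width);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &height);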

Anyway, even if you get past all of that, you will probably still need to "jump" between threads or wait for a cycle to finish. In that case I suggest you use blocks that return the screenshot image, passing the block as a method parameter so you can catch the image whenever it is ready. That being said, it would be best if you could override some methods on the GL views to return the screenshot via such a callback block, and build some recursive system on top of that.
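A minimal sketch of that block-based idea, assuming you are able to subclass the GL view; every name below (pendingSnapshot, drawFrame, imageFromCurrentFramebuffer) is hypothetical:

typedef void (^GLSnapshotBlock)(UIImage *snapshot);

// Store the block and defer the read until the next frame completes.
- (void)snapshotWithCompletion:(GLSnapshotBlock)completion
{
    self.pendingSnapshot = completion; // hypothetical copy property
}

- (void)drawFrame
{
    // ... existing drawing code runs here, on the GL thread ...

    if (self.pendingSnapshot)
    {
        // The context is current and the frame is complete, so it is now
        // safe to read the pixels (e.g. with the snapshotx: code above).
        UIImage *image = [self imageFromCurrentFramebuffer]; // hypothetical
        self.pendingSnapshot(image);
        self.pendingSnapshot = nil;
    }

    // ... presentRenderbuffer: etc. ...
}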

To sum it up, you need to anticipate multithreading, setting the right context, binding the correct framebuffer, and waiting for everything to be rendered. All of that may well make it impossible to create a screenshot method that simply works for any application, view, or system without overriding some internal methods.

Note that you are simply not allowed to take a screenshot of the whole screen (like the one you get by pressing the home and lock buttons at the same time) from inside your application. As for the UIView part being so easy to turn into an image: that is because a UIView is redrawn into a graphics context independently of the screen. It is as if you could take the GL pipeline, bind it to your own buffer and context, and draw into that, which would let you grab its content independently and on any thread.
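For comparison, this is the whole trick for plain UIKit content, because Core Animation can replay an ordinary layer into any context you hand it (someView is any UIView):

UIGraphicsBeginImageContextWithOptions(someView.bounds.size, NO, 0);
[someView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();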

Matic Oblak
  • Okay, that makes sense. If I did have access to the OpenGL ES views, should I put the snapshot call in there, and is there a way to time it so the buffer is full? Maybe glview:screenshot{[mysingleton glscreenshot]}? – Brian L. Clark Jun 16 '13 at 18:46

Actually, I'm trying to do something similar. I'll post in full when I've ironed it out, but in brief:

  • use your superview's layer's renderInContext: method
  • in the subviews that use OpenGL ES, implement the layer delegate's drawLayer:inContext: method
  • to render your view into the context, use a CVOpenGLESTextureCacheRef

Your superview's layer will call renderInContext: on each of its sublayers; by implementing the delegate method, your GLView can respond for its own layer.

Using a texture cache is much, much faster than glReadPixels, which would probably be the bottleneck otherwise. A rough sketch of the idea follows.
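This is only a sketch under several assumptions: the GL view owns its EAGLContext (self.context) and a long-lived cache created once with CVOpenGLESTextureCacheCreate (self.textureCache), [self drawScene] stands in for whatever normally draws the frame, error handling is omitted, and OpenGL ES 2 entry points are used:

#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    [EAGLContext setCurrentContext:self.context]; // hypothetical property

    size_t width  = (size_t)(self.bounds.size.width  * self.contentScaleFactor);
    size_t height = (size_t)(self.bounds.size.height * self.contentScaleFactor);

    // An IOSurface-backed pixel buffer that GL can render straight into
    NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA,
                        (__bridge CFDictionaryRef)attrs, &pixelBuffer);

    // Wrap the pixel buffer in a GL texture via the cache (no copy)
    CVOpenGLESTextureRef texture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
        self.textureCache, pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA,
        (GLsizei)width, (GLsizei)height, GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

    // Point a framebuffer at that texture and redraw the scene into it
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           CVOpenGLESTextureGetTarget(texture),
                           CVOpenGLESTextureGetName(texture), 0);
    [self drawScene]; // hypothetical: whatever normally renders the frame
    glFinish();

    // The rendered pixels are now in the pixel buffer; hand them to Quartz
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmap = CGBitmapContextCreate(
        CVPixelBufferGetBaseAddress(pixelBuffer), width, height, 8,
        CVPixelBufferGetBytesPerRow(pixelBuffer), colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef image = CGBitmapContextCreateImage(bitmap);
    CGContextDrawImage(ctx, self.bounds, image);

    CGImageRelease(image);
    CGContextRelease(bitmap);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    glDeleteFramebuffers(1, &fbo);
    CFRelease(texture);
    CVPixelBufferRelease(pixelBuffer);
}

The image lands in the pixel buffer without ever round-tripping through glReadPixels; note that it arrives in GL's bottom-up orientation, so the flip transform from the question may still be needed.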

Sam

Sam Ballantyne