
I have a large NSView that shows a custom Quartz2D drawing which changes repeatedly at high frame rates. Only some parts of the drawing may change from frame to frame, though. My approach so far is to first draw into an offscreen bitmap context, then create an image from that context, and finally update the contents of the view's CoreAnimation layer with that image.

My first question: does that approach generally make sense, and is it the way to go in terms of performance?

Drawing into the offscreen bitmap context is fast enough and is optimised to redraw dirty areas only. So after that step I have a set of rectangles marking regions in the offscreen buffer that should be displayed on the screen. For now I simply update the contents of the CoreAnimation layer with an image created from the offscreen bitmap context, which basically works as well, but I get flickering: it looks like new frames are briefly displayed on the screen before they are completely drawn (or before they are drawn at all). I have played around with CATransaction lock/unlock/begin/end/flush, NSView lockFocus/unlockFocus, and NSDisableScreenUpdates/NSEnableScreenUpdates, but haven't found a way to get around the flickering yet. So I was wondering: what actually is the correct sequence to get the synchronisation right?

Here is a sketch of the initialisation code:

NSView* theView = ...

CALayer* layer = [[CALayer new] autorelease];

// Disable the implicit animation on the "contents" property.
layer.actions = [NSDictionary dictionaryWithObject:[NSNull null] forKey:@"contents"];

// Set the layer before setting wantsLayer, so the view becomes
// layer-hosting rather than layer-backed.
[theView setLayer: layer];
[theView setWantsLayer: YES];

// bitmapContext gets re-created when the view size increases.
CGContextRef bitmapContext = CGBitmapContextCreate(...);

And here a sketch of the drawing code:

CGRect[] dirtyRegions = ...

NSDisableScreenUpdates();

[CATransaction begin];
[CATransaction setDisableActions: YES];

// draw into dirty regions of bitmapContext 
// ...

// create image from bitmap context
void* buffer = CGBitmapContextGetData(bitmapContext); 
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, ...);
CGImageRef image = CGImageCreate(..., provider, ...);

// update layer contents, dirty regions are ignored
layer.contents = image;

[CATransaction commit];

NSEnableScreenUpdates();

I would also like to take advantage of the knowledge about the dirty regions. Is there a way to update only the dirty regions on the screen using this approach?

Thanks for your help!

UPDATE: I think I found the problem that causes the flickering. I create the image with the pixel buffer from the bitmap context using CGImageCreate(...). If I use CGBitmapContextCreateImage(...) instead, it works. CGBitmapContextCreateImage uses copy-on-write: if I understand correctly, the pixels are only actually copied once the bitmap context is drawn into again, whereas with CGImageCreate the image keeps referencing the live buffer while it is still being modified, which would explain why it didn't work earlier. I've read somewhere that CGBitmapContextCreateImage should be used carefully because it makes kernel calls that might affect performance, so I guess I will simply copy the relevant pixels into a new image buffer, taking the dirty regions into account. Does this make sense?
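For reference, here is the working variant of the update step as a minimal sketch. CGBitmapContextCreateImage snapshots the context at the time of the call, so the layer never references a buffer that is still being drawn into:

// Snapshot the bitmap context. Thanks to the copy-on-write semantics,
// the image is decoupled from subsequent drawing into the context.
CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
layer.contents = (id)image;
CGImageRelease(image);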

vosc
  • You probably don't need to worry about dirty regions, but instead should build on @Tommy's answer... Either call `-setNeedsDisplay` after your contents have changed and let the display system call you for updated content, or synchronize your drawing updates to the screen refresh using `CVDisplayLink` – nielsbot Aug 29 '13 at 00:27
  • (My comment applies except if your drawing is really really large) You could profile your code to find the bottleneck. – nielsbot Aug 29 '13 at 00:28

2 Answers


The "normal" way is to work the other way around — call CALayer -setNeedsDisplay to indicate when a change in contents is available and respond to -drawInContext: to draw on demand. So you allow layer contents to be pulled from you, you don't push them.

I have to admit that I'm very surprised you get tearing while trying to push. But supposing you started from the simplest thing of just layer.contents = image, that all the extra complexity you've added with transactions and the screen-update lock is an attempt to work around the tearing, and that you're absolutely sure you're not creating problems by overcomplicating your code, then what you should probably do is queue your updates, create a CVDisplayLink, and push any pending updates only when the relevant display (or displays) is about to be refreshed. It's basically the same approach as updating only during the vertical retrace on an old CRT-based output.
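Roughly sketched, assuming a hypothetical MyController object with a pushPendingFrame method that applies the queued layer contents on the main thread (controller below is that instance):

#import <CoreVideo/CoreVideo.h>

static CVReturn DisplayLinkCallback(CVDisplayLinkRef displayLink,
                                    const CVTimeStamp* now,
                                    const CVTimeStamp* outputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags* flagsOut,
                                    void* userInfo)
{
    // Called on a background thread just before the display refreshes;
    // hop over to the main thread to touch the layer.
    MyController* controller = (MyController*)userInfo;
    dispatch_async(dispatch_get_main_queue(), ^{
        [controller pushPendingFrame];
    });
    return kCVReturnSuccess;
}

// Setup, e.g. during the controller's initialisation:
CVDisplayLinkRef displayLink;
CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
CVDisplayLinkSetOutputCallback(displayLink, &DisplayLinkCallback, controller);
CVDisplayLinkStart(displayLink);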

Tommy
  • Thanks a lot for your answer! I tried it the other way round as well; it does not make a difference regarding the tearing. The reason why I moved away from the setNeedsDisplay approach is that while making my first steps with CoreAnimation I was not aware that I need to re-create the CGImageRef and update the layer contents each time the source bitmap context changes, since both are created with the same data buffer. Also I was not sure if that takes away control over the frame rate... The CVDisplayLink sounds interesting, I will look into it, thanks! – vosc Aug 29 '13 at 00:48

After trying out a lot of different approaches, I dropped CoreAnimation for uploading the pixel data and decided to go with CoreVideo pixel buffers (CVPixelBufferRef) in combination with OpenGL for moving the pixels onto the screen instead.

CoreVideo provides some convenient functions for creating OpenGL textures from pixel buffers (CVOpenGLTextureCacheCreateTextureFromImage), managing them in a texture cache (CVOpenGLTextureCacheRef), and drawing into the buffer safely (CVPixelBufferLockBaseAddress/CVPixelBufferUnlockBaseAddress). Uploading the dirty rectangles to the window's back buffer can then be done with normal OpenGL texture-mapping commands (glTexCoord2fv).
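A rough sketch of how these pieces fit together, assuming an existing NSOpenGLContext (glContext), its pixel format (pixelFormat), and known buffer dimensions (width, height); error handling is omitted:

// One-time setup: an OpenGL-compatible pixel buffer and a texture cache.
NSDictionary* attrs = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                  forKey:(NSString*)kCVPixelBufferOpenGLCompatibilityKey];
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA, (CFDictionaryRef)attrs, &pixelBuffer);

CVOpenGLTextureCacheRef textureCache;
CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL,
                           [glContext CGLContextObj],
                           [pixelFormat CGLPixelFormatObj],
                           NULL, &textureCache);

// Per frame: draw into the buffer between lock/unlock...
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// ... update the dirty regions via CVPixelBufferGetBaseAddress() ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// ...then turn the buffer into a texture and map the dirty rectangles.
CVOpenGLTextureRef texture;
CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                           pixelBuffer, NULL, &texture);
glEnable(CVOpenGLTextureGetTarget(texture));
glBindTexture(CVOpenGLTextureGetTarget(texture),
              CVOpenGLTextureGetName(texture));
// ... glBegin(GL_QUADS), glTexCoord2fv/glVertex2f per dirty rect, glEnd() ...
CVOpenGLTextureRelease(texture);
CVOpenGLTextureCacheFlush(textureCache, 0);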

Another approach that works equally well and has a similar API is IOSurface; see Apple's IOSurface documentation for more information.

vosc