I have a raw bitmap image of malloc-ed RGBA data; rows are, of course, a multiple of 4 bytes wide. The data originates from an AVI (24-bit BGR format), which I convert to 32-bit ARGB. That's about 8 MB of 32-bit data (1920x1080) per frame.
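For reference, the BGR-to-ARGB step I'm doing looks roughly like the sketch below. Function and variable names are illustrative, not from my actual code; it assumes top-down row order (AVI DIBs are often stored bottom-up, which this ignores) and a source stride padded to 4 bytes:

```c
#include <stdint.h>
#include <stddef.h>

// Convert one frame of 24-bit BGR (rows padded to a multiple of 4 bytes)
// into 32-bit ARGB with full alpha.
void bgr24_to_argb32(const uint8_t *src, uint8_t *dst,
                     size_t width, size_t height)
{
    // AVI/DIB rows are padded so each source row is 4-byte aligned.
    size_t srcStride = (width * 3 + 3) & ~(size_t)3;
    size_t dstStride = width * 4;   // 4 bytes/pixel: inherently 4-aligned

    for (size_t y = 0; y < height; y++) {
        const uint8_t *s = src + y * srcStride;
        uint8_t *d = dst + y * dstStride;
        for (size_t x = 0; x < width; x++) {
            d[0] = 0xFF;   // A (opaque)
            d[1] = s[2];   // R
            d[2] = s[1];   // G
            d[3] = s[0];   // B
            s += 3;
            d += 4;
        }
    }
}
```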
For each frame:

- I convert that frame's data into an `NSData` object via `initWithBytes:length:`.
- I then convert that into a `CIImage` object via `imageWithBitmapData:bytesPerRow:size:format:colorSpace:`.
- From that `CIImage`, I draw into my final `NSOpenGLView` context using `drawImage:inRect:fromRect:`. Due to the "mosaic" nature of the target images, approximately 15-20 such calls are made per frame, with various source/destination rects.
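The steps above, per frame, boil down to something like this sketch. The `ciContext` property, the method name, and the sRGB color-space choice are my assumptions for illustration; the sizes and selectors are from the question:

```objective-c
@import CoreImage;
@import AppKit;

// Assumed: self.ciContext is a CIContext created once against the
// NSOpenGLView's GL context, and destRect/sourceRect come from the
// mosaic layout (~15-20 tiles per frame).
- (void)drawFrameBytes:(const void *)frameBytes
                inRect:(CGRect)destRect
              fromRect:(CGRect)sourceRect
{
    size_t bytesPerRow = 1920 * 4;   // 32-bit ARGB, already 4-byte aligned

    NSData *data = [NSData dataWithBytes:frameBytes
                                  length:bytesPerRow * 1080];

    // Cache the color space; creating one per frame would churn.
    static CGColorSpaceRef colorSpace;
    if (!colorSpace) {
        colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
    }

    CIImage *image =
        [CIImage imageWithBitmapData:data
                         bytesPerRow:bytesPerRow
                                size:CGSizeMake(1920, 1080)
                              format:kCIFormatARGB8
                          colorSpace:colorSpace];

    // One of the ~15-20 mosaic draw calls per frame.
    [self.ciContext drawImage:image
                       inRect:destRect
                     fromRect:sourceRect];
}
```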
Using a 30 Hz `NSTimer` that calls `[self setNeedsDisplay:YES]` on the `NSOpenGLView`, I can attain about 20-25 fps on a 2012 Mac mini (2.6 GHz i7) -- it's not rock solid at 30 fps. That's to be expected with an `NSTimer` instead of a `CVDisplayLink`.
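For completeness, the `CVDisplayLink` alternative I'm alluding to would look roughly like this sketch (the `_displayLink` ivar and class name are assumptions, not my actual code):

```objective-c
@import CoreVideo;
@import AppKit;

// Callback fires on a CVDisplayLink thread, once per display refresh.
static CVReturn DisplayLinkCallback(CVDisplayLinkRef link,
                                    const CVTimeStamp *now,
                                    const CVTimeStamp *outputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *context)
{
    // -setNeedsDisplay: must run on the main thread.
    NSView *view = (__bridge NSView *)context;
    dispatch_async(dispatch_get_main_queue(), ^{
        [view setNeedsDisplay:YES];
    });
    return kCVReturnSuccess;
}

- (void)startDisplayLink
{
    CVDisplayLinkCreateWithActiveCGDisplays(&_displayLink);
    CVDisplayLinkSetOutputCallback(_displayLink, DisplayLinkCallback,
                                   (__bridge void *)self);
    CVDisplayLinkStart(_displayLink);
}
```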
But... ignoring the `NSTimer` issue for now, are there any suggestions/pointers on making this frame-by-frame rendering a little more efficient?
Thanks!
NB: I would like to stick with `CIImage` objects, as I'll want to access transition effects at some point.