
I'm trying to alpha-blend an array of layers (layers: [CGImageRef]) in the drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!) routine of my custom NSView. Until now I used CGContextDrawImage() to draw those layers into the drawLayer context. While profiling I noticed that CGContextDrawImage() consumed 70% of the CPU time, so I decided to try the Accelerate framework. I changed the code, but it just crashes and I have no clue why.

I'm creating those layers like this:

func addLayer() {
    let colorSpace: CGColorSpaceRef = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
    let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue)

    // back each layer with its own bitmap context, then snapshot it as a CGImage
    let layerContext = CGBitmapContextCreate(nil, UInt(canvasSize.width), UInt(canvasSize.height),
        8, UInt(canvasSize.width * 4), colorSpace, bitmapInfo)
    let newLayer = CGBitmapContextCreateImage(layerContext)

    layers.append(newLayer)
}

My drawLayers routine looks like this:

override func drawLayer(thisLayer: CALayer!, inContext ctx: CGContext!)
{
    // wrap the destination context's backing store in a vImage_Buffer
    var ctxImageBuffer = vImage_Buffer(data: CGBitmapContextGetData(ctx),
        height: CGBitmapContextGetHeight(ctx),
        width: CGBitmapContextGetWidth(ctx),
        rowBytes: CGBitmapContextGetBytesPerRow(ctx))
    
    for imageLayer in layers
    {
        //CGContextDrawImage(ctx, CGRect(origin: frameOffset, size: canvasSize), imageLayer)
        
        var inProvider: CGDataProviderRef = CGImageGetDataProvider(imageLayer)
        var inBitmapData: CFDataRef = CGDataProviderCopyData(inProvider)
        // note: &inBitmapData below is the address of the CFDataRef variable
        // itself, not of the pixel bytes it holds (see the answer below)
        var buffer: vImage_Buffer = vImage_Buffer(data: &inBitmapData,
            height: CGImageGetHeight(imageLayer),
            width: CGImageGetWidth(imageLayer),
            rowBytes: CGImageGetBytesPerRow(imageLayer))

        // this is the call that crashes
        vImageAlphaBlend_ARGB8888(&buffer, &ctxImageBuffer, &ctxImageBuffer, 0)
    }
}

The canvasSize is always the same and all the layers have the same size, so I don't understand why the last line crashes.

Also, I don't see how to use the new convenience functions to create vImage_Buffers directly from CGImageRefs; that's why I do it the complicated way.

Any help appreciated.

EDIT

inBitmapData indeed holds pixel data that reflects the background color I set. However, the debugger cannot po &inBitmapData and fails with this message:

error: reference to 'CFData' not used to initialize a inout parameter &inBitmapData

So I looked for a way to get a pointer to inBitmapData. This is what I came up with:

var bitmapPtr: UnsafeMutablePointer<CFDataRef> = UnsafeMutablePointer<CFDataRef>.alloc(1)
bitmapPtr.initialize(inBitmapData)

I also had to change the way I point at my data for both buffers that I need as alpha-blend input. It's not crashing anymore, and the speed boost shows up in the profiler (vImageAlphaBlend takes only about a third of the time CGContextDrawImage did), but unfortunately the result is a transparent image with pixel artifacts instead of the white image background.

I don't get any runtime errors anymore, but since the result is not as expected, I fear that I'm still not using the alpha-blend function correctly.

Enie

1 Answer


vImage_Buffer.data should point to the pixel data inside the CFData, not to the CFDataRef itself.
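
For example, a minimal sketch of that fix, using the question's variable names and assuming Swift 1.x-era pointer-conversion initializers:

let inBitmapData = CGDataProviderCopyData(CGImageGetDataProvider(imageLayer))
// CFDataGetBytePtr returns the address of the pixel bytes inside the CFData.
// The mutable cast is only tolerable because this buffer is used as a
// read-only blend source; vImage will not write through it here.
let pixelBytes = UnsafeMutablePointer<Void>(CFDataGetBytePtr(inBitmapData))
var buffer = vImage_Buffer(data: pixelBytes,
    height: CGImageGetHeight(imageLayer),
    width: CGImageGetWidth(imageLayer),
    rowBytes: CGImageGetBytesPerRow(imageLayer))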

Also, not all images store their data as four-channel, 8-bit-per-channel data. If it turns out to be three-channel, RGBA, or monochrome data, you may get more crashing or funny colors. You have also assumed that the raw image data is not premultiplied, which may not be a safe assumption.

You are better off using vImageBuffer_InitWithCGImage so that you can guarantee the format and colorspace of the raw image data. A more specific question about that function might help us resolve your confusion about it.
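
For instance, a hedged sketch of how that call could look (the exact field spellings of vImage_CGImageFormat vary between Swift versions; this follows the Swift 1.x-era import, and the format mirrors the ARGB8888 layers from the question):

import Accelerate

// describe the pixel format vImage should convert the CGImage into:
// 8 bits per channel, 4 channels, alpha first, premultiplied
var format = vImage_CGImageFormat(
    bitsPerComponent: 8,
    bitsPerPixel: 32,
    colorSpace: nil,    // assumption: nil lets vImage pick a default RGB
                        // space; pass an explicit colorspace to be certain
    bitmapInfo: CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue),
    version: 0,
    decode: nil,
    renderingIntent: kCGRenderingIntentDefault)

var buffer = vImage_Buffer()    // uninitialized; the call below fills it in
let error = vImageBuffer_InitWithCGImage(&buffer, &format, nil, imageLayer,
    vImage_Flags(kvImageNoFlags))
// the function allocates buffer.data itself, so free(buffer.data) when done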

Some CG calls fall back on vImage to do the work. Rewriting your code in this way might be unprofitable in such cases. Usually the right thing to do first is to look carefully at the backtraces in the CG call to try to understand why you are causing so much work for it. Often the answer is colorspace conversion. I would look carefully at the CGBitmapInfo and colorspace of the drawing surface and your images and see if there wasn't something I could do to get those to match up a bit better.
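
One way to act on that advice, reusing the question's own constants (a sketch, not a guaranteed fix; makeLayerContext is a hypothetical helper): create every layer, and ideally the drawing surface too, from one shared colorspace and CGBitmapInfo so CG can blit without converting.

let sharedColorSpace: CGColorSpaceRef = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
let sharedBitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue)

// every layer created through this helper has an identical pixel format,
// so drawing it into a context of the same format needs no conversion
func makeLayerContext(size: CGSize) -> CGContext! {
    return CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height),
        8, UInt(size.width * 4), sharedColorSpace, sharedBitmapInfo)
}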

IIRC, CALayerRefs usually have their data in non-cacheable storage for better GPU access. That could cause problems for the CPU. If the data is in a CALayerRef, I would use CA to do the compositing. Also, I believe CALayers are nearly always BGRA 8-bit premultiplied. If you are not going to use CA to do the compositing, then the right vImage function is probably vImagePremultipliedAlphaBlend_RGBA8888/BGRA8888.
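
Under those assumptions, the call could look something like this sketch (topBuffer is a hypothetical name for the source layer's buffer; in the C headers vImagePremultipliedAlphaBlend_RGBA8888 appears to be a macro alias for the BGRA entry point, which would explain why Swift only sees vImagePremultipliedAlphaBlend_BGRA8888; the in-place destination mirrors the question's code):

// composite a premultiplied top layer over the destination, writing the
// result back into the destination buffer; both buffers must have the same
// dimensions and hold 8-bit premultiplied BGRA (or RGBA) pixels
let err = vImagePremultipliedAlphaBlend_BGRA8888(&topBuffer, &ctxImageBuffer,
    &ctxImageBuffer, vImage_Flags(kvImageNoFlags))
assert(err == kvImageNoError, "vImage blend failed")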

Ian Ollmann
  • First of all, thank you for your help; unfortunately, I couldn't solve my problem with it. CFDataRef is a typealias of CFData, so it shouldn't cause a crash; I changed it to CFData anyway. All my layers are created with the addLayer function and therefore have the same four channels with premultiplied alpha. Before I posted my problem I used RGBA (.PremultipliedLast), but the Swift compiler didn't recognise the vImagePremultipliedAlphaBlend_RGBA8888 macro, so I switched to creating my layers with .PremultipliedFirst and also tried vImagePremultipliedAlphaBlend_ARGB8888 with that, without any luck. – Enie Feb 03 '15 at 00:59
  • Also, regarding the vImageBuffer_InitWithCGImage function, could you point out how to use it? I already found a question about that over here: http://stackoverflow.com/questions/26755978/fatal-error-unexpectedly-found-nil-while-unwrapping-an-optional-value-while-u The last solution advises creating the vImage_Buffer struct with nil as its data. I'm not quite sure that vImageBuffer_InitWithCGImage can write the CGImage data into wherever nil points to. When I try to create a CGImage out of that vImage_Buffer with vImageCreateCGImageFromBuffer, the program crashes with a bad access error. – Enie Feb 03 '15 at 01:14
  • I'm shaky on Swift, but when you do &inBitmapData, does that return a pointer to the CFDataRef, or does it return CFDataGetBytePtr(inBitmapData)? If the former, you may be crashing because vImage is expecting a pointer to pixels, not a pointer to a CFDataRef object. In any case, vImageBuffer_InitWithCGImage and vImageBuffer_Init both take an allocated but uninitialized vImage_Buffer and fill out the contents. They also allocate storage for vImage_Buffer.data. You'll need to free(buffer.data) when you are done with it. (Still wondering why you are compositing CA layers in CG.) – Ian Ollmann Feb 03 '15 at 03:41
  • The contents of my CGImage layers are constantly being altered. As far as I understand, a layer-backed view is not an option for that purpose, so I decided to use a layer-hosting view and draw all my CGImages into one CALayer's context. That is the fastest solution I have come up with so far, but I'd still like to speed everything up. The 'inout' operator on &inBitmapData gives me an UnsafeMutablePointer, which is what the function asks for. CFDataGetBytePtr only returns an UnsafePointer, which, by the way, also gives me a hard time when I try to get at the CFData to manipulate pixels manually. – Enie Feb 03 '15 at 10:27
  • Use the debugger to inspect the memory pointed to by &inBitmapData. Does it look like pixels? Typically, for an 8-bit, four-channel format, every fourth byte would be 0xff for alpha. – Ian Ollmann Feb 05 '15 at 18:57
  • Thanks, that helped find the problem; I'll update the initial question. – Enie Feb 06 '15 at 12:11