
Sorry for this question; I know there is a similar one, but I cannot get that answer to work. Probably some dumb error on my side ;-)

I want to overlay two images with alpha on iOS. The images are taken from two videos, read by an AVAssetReader and stored in two CVPixelBuffers. I know that the alpha channel is not stored in the video, so I get it from a third file. All the data looks fine. The problem is the overlay: if I do it on-screen with [CIContext drawImage:...], everything is fine! But if I do it offscreen (because the format of the video is not identical to the screen format), I cannot get it to work:

1. drawImage: works, but only on-screen.
2. render:toCVPixelBuffer: works, but ignores alpha.
3. CGContextDrawImage seems to do nothing at all (not even an error message).
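To make clear what I am after: conceptually it is a plain source-over composite. A minimal Core Image sketch of the intended result (CISourceOverCompositing is the stock alpha-over filter; treat frontImage/backImage as placeholders for CIImages that already carry the merged alpha; the other names match my code below):

    // Illustrative sketch only: alpha-over composite rendered offscreen.
    CIFilter *over = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [over setValue:frontImage forKey:kCIInputImageKey];
    [over setValue:backImage forKey:kCIInputBackgroundImageKey];
    [coreImageContext render:over.outputImage
             toCVPixelBuffer:nextImageBuffer
                      bounds:toRect
                  colorSpace:rgbSpace];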

So can somebody give me an idea what is wrong:

Init: ... (a lot of code before) Set up the color space and bitmap context:

    if (outputContext)
    {
        CGContextRelease(outputContext);
        CGColorSpaceRelease(outputColorSpace);
    }
    outputColorSpace = CGColorSpaceCreateDeviceRGB();
    outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                          videoFormatSize.width, videoFormatSize.height,
                                          8, CVPixelBufferGetBytesPerRow(pixelBuffer),
                                          outputColorSpace,
                                          (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);

... (a lot of code after)

Drawing:

CIImage *backImageFromSample;
CGImageRef frontImageFromSample;
CVImageBufferRef nextImageBuffer = myPixelBufferArray[0];
CMSampleBufferRef sampleBuffer = NULL;
CMSampleTimingInfo timingInfo;

//draw the frame
CGRect toRect;
toRect.origin.x = 0;
toRect.origin.y = 0;
toRect.size = videoFormatSize;

//Background image is always full size; this part seems to work
if(drawBack)
{
    CVPixelBufferLockBaseAddress( backImageBuffer,  kCVPixelBufferLock_ReadOnly );
    backImageFromSample = [CIImage imageWithCVPixelBuffer:backImageBuffer];
    [coreImageContext render:backImageFromSample toCVPixelBuffer:nextImageBuffer bounds:toRect colorSpace:rgbSpace];
    CVPixelBufferUnlockBaseAddress( backImageBuffer,  kCVPixelBufferLock_ReadOnly );
}
else
    [self clearBuffer:nextImageBuffer];
//The front image doesn't seem to get drawn at all
if(drawFront)
{
    size_t numBytes = CVPixelBufferGetBytesPerRow(frontImageBuffer) * CVPixelBufferGetHeight(frontImageBuffer);
    CVPixelBufferLockBaseAddress( frontImageBuffer,  kCVPixelBufferLock_ReadOnly );

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, CVPixelBufferGetBaseAddress(frontImageBuffer), numBytes, NULL);
    frontImageFromSample = CGImageCreate(CVPixelBufferGetWidth(frontImageBuffer), CVPixelBufferGetHeight(frontImageBuffer),
                                         8, 32, CVPixelBufferGetBytesPerRow(frontImageBuffer),
                                         outputColorSpace,
                                         (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst,
                                         provider, NULL, NO, kCGRenderingIntentDefault);
    CGContextDrawImage(outputContext, toRect, frontImageFromSample);
    CVPixelBufferUnlockBaseAddress( frontImageBuffer, kCVPixelBufferLock_ReadOnly );
    CGImageRelease(frontImageFromSample);
    CGDataProviderRelease(provider); // CGImageCreate retains the provider, so drop our reference
}
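(clearBuffer: above is a helper of mine; a sketch of what such a helper might look like, assuming a single-plane buffer:)

    // Zero every byte so the frame starts fully transparent.
    - (void)clearBuffer:(CVPixelBufferRef)buffer
    {
        CVPixelBufferLockBaseAddress(buffer, 0);
        memset(CVPixelBufferGetBaseAddress(buffer), 0,
               CVPixelBufferGetBytesPerRow(buffer) * CVPixelBufferGetHeight(buffer));
        CVPixelBufferUnlockBaseAddress(buffer, 0);
    }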

Any ideas, anyone?

  • If you are looking for a possibly better way to store alpha videos in your app bundle, then have a peek at: http://stackoverflow.com/a/21079559/763355 – MoDJ Feb 26 '16 at 02:47
  • Yes, I have seen the examples that split the video with alpha into two videos (one color + one greyscale), but I decided against it. The idea behind my implementation was to use the GPU for the encoding of the video and the CPU for the encoding of the mask. This allows me to do this in realtime, even from the camera... – BetaVersion Mar 03 '16 at 20:18

1 Answer


So obviously I should stop asking questions on Stack Overflow. Every time I do, after hours of debugging I find the answer myself shortly afterwards. Sorry for that. The problem is in the initialisation: you can't call CVPixelBufferGetBaseAddress without locking the base address first O_o. The address comes back NULL, and this seems to be allowed, with the effect that all the drawing calls silently do nothing. So the correct code is:

    if (outputContext)
    {
        CGContextRelease(outputContext);
        CGColorSpaceRelease(outputColorSpace);
    }
    // The base address is only available while the buffer is locked;
    // flags = 0 because the bitmap context will write into the buffer.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    outputColorSpace = CGColorSpaceCreateDeviceRGB();
    outputContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                          videoFormatSize.width, videoFormatSize.height,
                                          8, CVPixelBufferGetBytesPerRow(pixelBuffer),
                                          outputColorSpace,
                                          (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
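One caveat on the fix: Apple documents the base address as valid only while the buffer is locked, so a context created this way strictly points at memory that may not stay valid after the unlock. It works for me in practice, but the stricter pattern holds the lock across every draw into that context, roughly (a sketch, same names as above):

    CVPixelBufferLockBaseAddress(pixelBuffer, 0); // 0 = read/write
    CGContextRef ctx = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                             videoFormatSize.width, videoFormatSize.height,
                                             8, CVPixelBufferGetBytesPerRow(pixelBuffer),
                                             outputColorSpace,
                                             (CGBitmapInfo)kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
    // ... all CGContextDrawImage calls against ctx go here ...
    CGContextRelease(ctx);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);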