For my current project I'm reading the main camera output of the iPhone and converting each pixel buffer to a cached OpenGL ES texture via CVOpenGLESTextureCacheCreateTextureFromImage. This works great when processing the camera frames used for previewing. I've tested it on various combinations of the iPhone 3GS, 4, 4S and iPod Touch (4th gen) with iOS 5 and iOS 6.
But for the actual final image, which has a very high resolution, this only works on these combinations:
- iPhone 3GS + iOS 5.1.1
- iPhone 4 + iOS 5.1.1
- iPhone 4S + iOS 6.0
- iPod Touch (4th gen) + iOS 5.0
And it doesn't work for: iPhone 4 + iOS 6.
The exact error messages in the console:
Failed to create IOSurface image (texture)
2012-10-01 16:24:30.663 GLCameraRipple[676:907] Error at CVOpenGLESTextureCacheCreateTextureFromImage -6683
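For reference, that return code seems to correspond to kCVReturnPixelBufferNotOpenGLCompatible in CVReturn.h (if I'm reading the header right):

// From <CoreVideo/CVReturn.h>
// kCVReturnPixelBufferNotOpenGLCompatible = -6683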
I've isolated the problem by modifying Apple's GLCameraRipple sample project. You can check out my version here: http://lab.bitshiftcop.com/iosurface.zip
Here's how I add the still image output to the capture session:
- (void)setupAVCapture
{
    //-- Create CVOpenGLESTextureCacheRef for optimal CVImageBufferRef to GLES texture conversion.
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, [EAGLContext currentContext], NULL, &_videoTextureCache);
    if (err)
    {
        NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
        return;
    }

    //-- Setup capture session.
    _session = [[AVCaptureSession alloc] init];
    [_session beginConfiguration];

    //-- Set preset session size.
    [_session setSessionPreset:_sessionPreset];

    //-- Create a video device and an input from that device. Add the input to the capture session.
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if (videoDevice == nil)
        assert(0);

    //-- Add the device to the session.
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    if (error)
        assert(0);
    [_session addInput:input];

    //-- Create the video data output for the capture session.
    AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
    [dataOutput setAlwaysDiscardsLateVideoFrames:YES]; // Probably want to set this to NO when recording

    //-- Set to 32BGRA (necessary for manual preview).
    [dataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                             forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

    // Set dispatch to be on the main thread so OpenGL can do things with the data
    [dataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    // Add the still image output, also requesting 32BGRA pixel buffers
    stillOutput = [[AVCaptureStillImageOutput alloc] init];
    [stillOutput setOutputSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                               forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
    if ([_session canAddOutput:stillOutput])
        [_session addOutput:stillOutput];

    [_session addOutput:dataOutput];
    [_session commitConfiguration];
    [_session startRunning];
}
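For context, setupAVCapture is only called after the EAGLContext has been made current, since the texture cache is created against [EAGLContext currentContext]. This part is essentially unchanged from the original GLCameraRipple view controller, so take it as a sketch:

- (void)viewDidLoad
{
    [super viewDidLoad];

    _context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!_context)
        NSLog(@"Failed to create ES context");

    GLKView *view = (GLKView *)self.view;
    view.context = _context;

    [EAGLContext setCurrentContext:_context];

    [self setupGL];        // shader / buffer setup, unchanged from the sample
    [self setupAVCapture]; // creates _videoTextureCache against the current context
}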
And here's how I capture the still output and process it:
- (void)capturePhoto
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [stillOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
        ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
            // Process hires image
            [self captureOutput:stillOutput didOutputSampleBuffer:imageSampleBuffer fromConnection:videoConnection];
        }];
}
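capturePhoto can be kicked off from any UI event; for illustration, a hypothetical touch handler:

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Hypothetical trigger -- any UI event that calls capturePhoto behaves the same
    [self capturePhoto];
}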
Here's how the texture is created:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVReturn err;
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    if (!_videoTextureCache)
    {
        NSLog(@"No video texture cache");
        return;
    }

    if (_ripple == nil ||
        width != _textureWidth ||
        height != _textureHeight)
    {
        _textureWidth = width;
        _textureHeight = height;

        _ripple = [[RippleModel alloc] initWithScreenWidth:_screenWidth
                                              screenHeight:_screenHeight
                                                meshFactor:_meshFactor
                                               touchRadius:5
                                              textureWidth:_textureWidth
                                             textureHeight:_textureHeight];
        [self setupBuffers];
    }

    [self cleanUpTextures];

    NSLog(@"%zi x %zi", _textureWidth, _textureHeight);

    // RGBA texture
    glActiveTexture(GL_TEXTURE0);
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                       _videoTextureCache,
                                                       pixelBuffer,
                                                       NULL,
                                                       GL_TEXTURE_2D,
                                                       GL_RGBA,
                                                       _textureWidth,
                                                       _textureHeight,
                                                       GL_BGRA,
                                                       GL_UNSIGNED_BYTE,
                                                       0,
                                                       &_chromaTexture);
    if (err)
    {
        NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }

    glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
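cleanUpTextures isn't shown above; it's essentially the version from the GLCameraRipple sample, which releases the previous texture and flushes the cache before a new texture is created:

- (void)cleanUpTextures
{
    if (_chromaTexture)
    {
        CFRelease(_chromaTexture);
        _chromaTexture = NULL;
    }

    // Periodic texture cache flush every frame
    CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);
}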
Any suggestions for a solution to this problem?