I have a problem displaying the result on the previewLayer using the iOS SDK. I need a little advice in understanding how to "substitute" the captured frame with the processed one.
The setup is a very standard AVCapture-configured app mixed with the OpenCV framework.
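Roughly, the setup is the usual one (a minimal sketch of the kind of configuration I mean; identifiers such as session, videoOutput and the queue name are just placeholders, not my exact code):

#import <AVFoundation/AVFoundation.h>

// Minimal capture setup sketch: the controller is assumed to adopt
// AVCaptureVideoDataOutputSampleBufferDelegate, and previewLayer is an ivar.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
[session addInput:input];

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setSampleBufferDelegate:self queue:dispatch_queue_create("camera_queue", NULL)];
[session addOutput:videoOutput];

previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.view.bounds;
[self.view.layer addSublayer:previewLayer];

[session startRunning];

Then, in the sample buffer delegate callback, I try to process and display the frame: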
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // I think this is where I have to do the processing, e.g. a simple RGB -> GRAY:
    UIImage *img = [[UIImage alloc] initWithCMSampleBuffer:sampleBuffer]; // UIImage+OpenCV category
    cv::Mat m_img = [img CVMat];                                          // UIImage+OpenCV category
    // cvtColor can use m_img both as src and dst (in-place)
    cv::cvtColor(m_img, m_img, CV_BGR2GRAY);
    img = [[UIImage alloc] initWithCvMat:m_img];
    [previewLayer setContents:(__bridge id)[img CGImage]];
}
Obviously this is not quite right. For example, the new content is not resized correctly, while the captured frame is displayed at the right size only because I set
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
But the "biggest" issue is that my processed image is behind the captured frame and the video is quite fluid while my processed video (behind the frame) is not so fluid (even if I simply do nothing and I assign directly the same image).
Could anyone help me understand how to apply the processed image directly onto the previewLayer (using OpenCV, in this case)?
Thank you very much...