
I am using this code to create a cv::Mat from a CMSampleBufferRef. It works fine with AVCaptureSessionPresetHigh, AVCaptureSessionPresetMedium, and AVCaptureSessionPresetLow, but when I use AVCaptureSessionPresetPhoto it outputs a garbled image.

- (cv::Mat) matFromSampleBuffer:(CMSampleBufferRef) sampleBuffer {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );

    int bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
    int bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
    unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

    cv::Mat mat(bufferHeight,bufferWidth,CV_8UC4,pixel);

    CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );

    return mat;
}

When I use AVCaptureSessionPresetPhoto it produces:

[image: garbled output]

The other presets produce a cv::Mat just fine. Can someone help?

Comments:
  • It appears that in this case the interval between rows is 12 pixels more than indicated: 864 rather than 852 (apparently rounded up to the nearest multiple of 16).
  • I'm not at all familiar with iOS, but what does [`CVPixelBufferGetBytesPerRow`](https://developer.apple.com/documentation/corevideo/1456964-cvpixelbuffergetbytesperrow) return in this case? What about `CVPixelBufferGetDataSize`? – Dan Mašek Jul 28 '17 at 16:01
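As the comments suggest, a quick way to confirm row padding is to compare bytesPerRow against width * 4 for a 32-bit BGRA buffer. This is only a diagnostic sketch, not part of the original question:

    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width       = CVPixelBufferGetWidth(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t dataSize    = CVPixelBufferGetDataSize(pixelBuffer);

    // For a tightly packed 32-bit BGRA buffer, bytesPerRow would equal width * 4;
    // any extra bytes are row padding that a plain width*4 assumption ignores.
    NSLog(@"width*4 = %zu, bytesPerRow = %zu, dataSize = %zu",
          width * 4, bytesPerRow, dataSize);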

1 Answer


Solved. I was missing the step size in bytes (the row stride) when constructing the cv::Mat:

- (cv::Mat) matFromSampleBuffer:(CMSampleBufferRef) sampleBuffer {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    int bufferWidth  = (int)CVPixelBufferGetWidth(pixelBuffer);
    int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Pass the row stride as the step argument so the Mat respects any
    // padding Core Video adds at the end of each row (e.g. with the
    // Photo preset, where bytesPerRow > width * 4).
    cv::Mat mat(bufferHeight, bufferWidth, CV_8UC4, pixel, bytesPerRow);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    // Note: the Mat wraps the pixel buffer's memory without copying it.
    return mat;
}
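
For context, here is a rough sketch of how the helper might be called from an AVCaptureVideoDataOutputSampleBufferDelegate callback. This is an illustration under the assumption that the video output is configured for kCVPixelFormatType_32BGRA; it is not part of the original answer:

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Wrap the frame; the Mat points into the pixel buffer's memory.
    cv::Mat frame = [self matFromSampleBuffer:sampleBuffer];

    // Clone if the pixels must outlive this callback (assumption: the
    // sample buffer is released once the delegate method returns).
    cv::Mat persistent = frame.clone();
}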