
I've got an imageBuffer (a CVPixelBufferRef obtained from a CMSampleBufferRef) whose pixel format is NV12, i.e. bi-planar YUV 4:2:0.

Now I run the following code, and the result confuses me.

// The buffer must be locked before its base addresses are read.
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

UInt8 *baseSRC   = (UInt8 *)CVPixelBufferGetBaseAddress(imageBuffer);
UInt8 *yBaseSRC  = (UInt8 *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
UInt8 *uvBaseSRC = (UInt8 *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
int width  = (int)CVPixelBufferGetWidthOfPlane(imageBuffer, 0);   //width = 480;
int height = (int)CVPixelBufferGetHeightOfPlane(imageBuffer, 0);  //height = 360;

int y_base = yBaseSRC - baseSRC;     //y_base = 64;
int uv_y = uvBaseSRC-yBaseSRC;       //uv_y = 176640;
int delta = uv_y - width*height;     //delta = 3840;

I have a few questions about this result.

1: Why isn't baseSRC equal to yBaseSRC?

2: Why isn't yBaseSRC + width*height equal to uvBaseSRC? Theoretically, the Y-plane data should be followed immediately by the UV-plane data, right? Instead the two planes are separated by something 3840 bytes in size, and I don't get where it comes from.
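(For what it's worth, the gap doesn't have to be guessed at: Core Video exposes the per-plane layout through documented accessors. A minimal check, using only public Core Video calls; the printed values are whatever your device reports:)

size_t yStride  = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);   // may exceed width
size_t uvStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
size_t yRows    = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
// If yStride > width, every row carries padding; if yStride*yRows is still
// smaller than uv_y, the remainder is alignment padding between the planes.
NSLog(@"y stride: %zu, uv stride: %zu, y plane bytes: %zu", yStride, uvStride, yStride * yRows);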

3: I try to convert this pixel buffer to a cv::Mat with the following code. On most iOS devices this works properly, but not on the iPhone 4S: there, after conversion, the image gets some green lines along the side.

// Interpret the NV12 buffer as one single-channel Mat that stacks the
// Y plane (height rows) on top of the interleaved UV plane (height/2 rows).
Mat nv12Mat(height*1.5, width, CV_8UC1, (unsigned char *)yBaseSRC);
Mat rgbMat;
cvtColor(nv12Mat, rgbMat, CV_YUV2RGB_NV12);

Now rgbMat looks like this:

(screenshot of rgbMat showing green lines along one side)

Li Fumin
  • CVPixelBuffers are opaque types, and I would deduce that the delta you found is data that is part of the CVPixelBuffer's internal layout. As the layout is not documented and Apple provides the APIs to extract the data you need, I would not make assumptions about how the data is organized. As for the OpenCV Mat, why are you using a height of (height * 1.5)? You should just pass 'height'. – blackirishman Mar 22 '16 at 03:38
  • @blackirishman Because 'height' is the height of the Y plane; the UV plane's height is 'height/2', so the total size of the buffer is 'width*height + width*height/2', which is the same as 'width*height*1.5'. – Li Fumin Mar 22 '16 at 03:45
  • When you first wrote this you were creating a Mat of the Y plane, right? Your calls to CVPixelBufferGetWidthOfPlane and CVPixelBufferGetHeightOfPlane give you the width and height that you should pass to your Mat. – blackirishman Mar 22 '16 at 04:01
  • @blackirishman No, I was creating a Mat of all planes (Y, U and V). – Li Fumin Mar 22 '16 at 04:04
  • The green data has to be whatever is in your CVPixelBuffer. The delta divided by the height is 10.66, and that looks like about 10 pixels in the image you just posted. – blackirishman Mar 22 '16 at 04:04
  • You updated your post, so it wasn't clear. – blackirishman Mar 22 '16 at 04:04
  • This conversion works fine on other iOS devices; it only malfunctions on the iPhone 4S. – Li Fumin Mar 22 '16 at 04:13
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/106965/discussion-between-blackirishman-and-li-fumin). – blackirishman Mar 22 '16 at 04:13

1 Answer


Finally found the solution. Basically, it is to allocate a new piece of contiguous memory and concatenate the Y-plane data and the UV-plane data into it. After that, the conversion to a cv::Mat works fine.

Here is the code snippet:

// Copy the Y plane and the interleaved UV plane into one contiguous block.
size_t ySize = landscapeWidth * landscapeHeight;   // Y plane: 1 byte per pixel
UInt8 *newBase = (UInt8 *)malloc(ySize * 3 / 2);   // Y plane + UV plane (half the size)
memcpy(newBase, yBaseSRC, ySize);
memcpy(newBase + ySize, uvBaseSRC, ySize / 2);
Mat nv12Mat(landscapeHeight * 3 / 2, landscapeWidth, CV_8UC1, (unsigned char *)newBase);
// Note: Mat does not take ownership; free(newBase) only after nv12Mat is no longer used.
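
One caveat (my assumption, not something the snippet above checks): Core Video is free to pad each row, so CVPixelBufferGetBytesPerRowOfPlane can be larger than the plane width. If it is, a flat memcpy of width*height bytes drags that padding along. A row-by-row copy is a safer sketch of the same idea:

size_t yStride  = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
size_t uvStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
UInt8 *dst = newBase;
for (int row = 0; row < landscapeHeight; row++) {       // Y plane: one byte per pixel
    memcpy(dst, yBaseSRC + row * yStride, landscapeWidth);
    dst += landscapeWidth;
}
for (int row = 0; row < landscapeHeight / 2; row++) {   // NV12 UV plane: 'width' bytes per row, half the rows
    memcpy(dst, uvBaseSRC + row * uvStride, landscapeWidth);
    dst += landscapeWidth;
}

When the stride equals the width this reduces to the flat memcpy above, which is presumably why the simple version already works on most devices.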
Li Fumin