I have raw bi-planar video buffer data in YCbCr format, which I'm using as the source to compress a new mp4/mov video in H.264 format on iPhone and iPad. To do that, I create a CVPixelBufferRef for each frame and append it to the video writer through an AVAssetWriterInputPixelBufferAdaptor.
However, when I call appendPixelBuffer with a new pixel buffer containing the YCbCr data, it returns YES only for the first frame; every subsequent frame is rejected by the writer (appendPixelBuffer returns NO). If I use BGRA32 raw video data instead, everything works fine, so I suspect I'm creating the YCbCr pixel buffer incorrectly.
I have tried two ways of creating the pixel buffer:
1) Using CVPixelBufferCreateWithBytes:
cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                     videoWidth_,
                                     videoHeight_,
                                     kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                     [videoFrame getBaseAddressOfPlane:0],
                                     [videoFrame getBytesPerRowOfPlane:0],
                                     NULL,   // releaseCallback
                                     NULL,   // releaseRefCon
                                     NULL,   // pixelBufferAttributes
                                     &pixelBuffer);
2) Using CVPixelBufferCreateWithPlanarBytes:
cvErr = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                                           videoWidth_,
                                           videoHeight_,
                                           kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                           NULL,   // dataPtr
                                           0,      // dataSize
                                           videoFrame.planeCount,
                                           planeBaseAddress,
                                           planeWidth,
                                           planeHeight,
                                           planeStride,
                                           NULL,   // releaseCallback
                                           NULL,   // releaseRefCon
                                           NULL,   // pixelBufferAttributes
                                           &pixelBuffer);
planeBaseAddress, planeWidth, planeHeight, and planeStride are arrays with one entry per plane, holding the base address, width, height, and stride of the Y plane and the CbCr plane respectively.
So, can you show me where I'm going wrong, point me to some sample code I can refer to, or tell me whether this is an iPhone SDK issue?