Apple's documentation for AVAssetReaderTrackOutput says the following about the outputSettings parameter of +[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:outputSettings:]: "A value of nil configures the output to vend samples in their original format as stored by the specified track."
When reading, e.g., an MP4 video asset this way, the output seemingly steps through frames in decode order (i.e. out of order with respect to display order); however, calling CMSampleBufferGetImageBuffer on every delivered CMSampleBufferRef yields a NULL CVImageBufferRef.
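For reference, here is a minimal sketch of the setup in question (videoURL is a placeholder for the asset's URL, and error handling is elided):

```objc
#import <AVFoundation/AVFoundation.h>

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

// outputSettings:nil — samples are vended in their original (stored) format.
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                              outputSettings:nil];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sample = NULL;
while ((sample = [output copyNextSampleBuffer])) {
    // This is NULL for every sample in the scenario described.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sample);

    // The sample's payload is still present as an (encoded) block buffer.
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sample);
    CFRelease(sample);
}
```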
The only way I can ensure delivery of image buffers is to provide a pixel buffer format in outputSettings:, such as kCVPixelFormatType_32ARGB for the kCVPixelBufferPixelFormatTypeKey dictionary entry. Another interesting side effect of doing this is that frames are then delivered in display order, and the underlying decode order is abstracted/hidden away.
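For comparison, the settings dictionary that does produce image buffers looks like this (the 32ARGB pixel format is just one example; track comes from the same setup as above):

```objc
NSDictionary *settings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32ARGB)
};
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                              outputSettings:settings];
// With these settings, copyNextSampleBuffer vends decoded frames in display
// order, and CMSampleBufferGetImageBuffer returns a non-NULL CVImageBufferRef.
```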
Any ideas why this is so?