After creating an AVComposition by inserting several videos in sequence into a single video track, I want to read the result back with an AVAssetReader (in order to use the video content in an OpenGL environment).
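For context, the composition is built roughly like this (a minimal sketch; `assets` is a hypothetical array of the source AVAssets, and error handling is elided):

```objc
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];

// Append each asset's video track end-to-end into the single track.
CMTime cursor = kCMTimeZero;
for (AVAsset *asset in assets) {
    AVAssetTrack *sourceTrack =
        [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    NSError *error = nil;
    [videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                        ofTrack:sourceTrack
                         atTime:cursor
                          error:&error];
    cursor = CMTimeAdd(cursor, asset.duration);
}
```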
First I simply tried to create an AVAssetReader from my AVComposition (which derives from AVAsset) with `assetReaderWithAsset:`, and then added the composition's video track (obtained via `tracksWithMediaType:AVMediaTypeVideo`) to the reader through an AVAssetReaderTrackOutput. But with this setup, the AVAssetReader only returns sample buffers for the first video item and then stops delivering new data, as if it could only see the first item.
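The failing approach can be sketched like this (`composition` and the pixel-format settings are illustrative assumptions):

```objc
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:composition
                                                      error:&error];
AVAssetTrack *videoTrack =
    [[composition tracksWithMediaType:AVMediaTypeVideo] firstObject];

// Decode frames as BGRA so they can be uploaded as OpenGL textures.
NSDictionary *settings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                               outputSettings:settings];
[reader addOutput:output];
[reader startReading];

// Repeatedly calling copyNextSampleBuffer on `output` stops yielding
// buffers after the first item's samples.
```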
The Apple documentation says:
You use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together using an AVAudioMix object or AVVideoComposition object, respectively. Typically, these outputs are used when your asset reader is reading from an AVComposition object.
Note, however, that AVVideoComposition is not a subclass of AVComposition, so it is not obvious to me whether (or how) AVAssetReaderVideoCompositionOutput can be used together with an AVComposition object.
Does this mean that in order to read data from an AVComposition correctly, I always have to construct an AVVideoComposition object first and associate it with the AVAssetReader? Note that AVVideoComposition is meant for effects such as picture-in-picture, fades, and so on, which I don't need; I just want single video items played one after another.
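If an AVVideoComposition really is required, I assume a "pass-through" setup would look something like this (untested sketch; `composition` and `reader` are the objects from the description above, and `videoCompositionWithPropertiesOfAsset:` would generate instructions matching the composition's own timeline):

```objc
// Generate a video composition describing the composition's own segments,
// with no extra effects applied.
AVVideoComposition *videoComposition =
    [AVVideoComposition videoCompositionWithPropertiesOfAsset:composition];

AVAssetReaderVideoCompositionOutput *compositionOutput =
    [AVAssetReaderVideoCompositionOutput
        assetReaderVideoCompositionOutputWithVideoTracks:
            [composition tracksWithMediaType:AVMediaTypeVideo]
                                           videoSettings:nil];
compositionOutput.videoComposition = videoComposition;
[reader addOutput:compositionOutput];
```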