
A while back I posted a Stack Overflow question about inserting images into a video composition, and eventually settled on a solution: converting my images into a separate video and splicing that into the original video.

(Link to the original question for reference/context: Mixing Images and Video using AVFoundation)

This solution seemed to work well. However, today I found that on iOS 8.1 it breaks down: the final exported video consists only of the original video, with gaps where my inserted image video is supposed to go.

I've tried substituting a different video for the image video, but that didn't have any effect either. I've looked over the API diffs Apple has provided, but they haven't proved very helpful. I did notice that they marked things like CMTime and CMTimeAdd as modified in the Core Media module, but they don't really say how.

Does anyone know what might have changed to break this, or have any suggestions?

Tom Haygarth

1 Answer


Solved the issue.

Turns out that on iOS 7.1 this is fine:

AVAsset *frameVideo = [[AVURLAsset alloc] initWithURL:vidURL options:options];
gFramesTrack = [frameVideo tracksWithMediaType:AVMediaTypeVideo][0];
[frameVideo release];

but on iOS 8.1, releasing the frameVideo object also stopped the AVAssetWriter/AVAssetReader from receiving the data the track should have had.

So to fix the issue, I don't release the frame video asset until after I'm done creating my final video.
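A minimal sketch of that fix, under manual reference counting. It reuses the `gFrameVideoAsset` name from the snippet above as a long-lived global/ivar, and assumes a `writer` (AVAssetWriter) already configured elsewhere; the point is simply that the release moves to after the export finishes:

```objc
// Keep the source asset alive for the whole export. On iOS 8.1,
// releasing the asset early appears to leave the track unable to
// supply samples to the AVAssetReader/AVAssetWriter pipeline.
gFrameVideoAsset = [[AVURLAsset alloc] initWithURL:vidURL options:options];
gFramesTrack = [gFrameVideoAsset tracksWithMediaType:AVMediaTypeVideo][0];

// ... set up the AVAssetReader/AVAssetWriter and run the export ...

[writer finishWritingWithCompletionHandler:^{
    // Only now is it safe to let go of the source asset.
    [gFrameVideoAsset release];
    gFrameVideoAsset = nil;
}];
```

Under ARC the equivalent is to hold the asset in a strong property for the duration of the export rather than a local variable that goes out of scope.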

Tom Haygarth