
I'm adding three clips to an AVMutableComposition like this...

    let asset = AVURLAsset(url: url, options: [ AVURLAssetPreferPreciseDurationAndTimingKey : true ])

    let track = composition.addMutableTrack(withMediaType: .video,
                                            preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

    do {

      try track?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: asset.duration),
                                 of: asset.tracks(withMediaType: .video)[0],
                                 at: composition.duration)
    } catch {

      print("Failed to insert time range: \(error)")
    }

So that's the full duration of each asset, added at the current end of the composition. I've tried different clip orders and there's always one flash of the background between two of the clips.

I've also tried setting `at:` to the sum of the previously added clips' durations, but it doesn't change the final result.
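For reference, the full appending loop looks roughly like this (a simplified sketch; `makeComposition` and `urls` are placeholder names, not my real code). It follows Apple's advice of using a single composition track for serial clips, and it inserts each source track's own `timeRange` rather than `asset.duration`, since an asset's duration can run fractionally longer than its video track and leave exactly this kind of gap:

```swift
import AVFoundation

// Sketch: append each clip to ONE composition video track, back to back.
// `urls` is assumed to hold the clip URLs; all names are illustrative.
func makeComposition(from urls: [URL]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    guard let track = composition.addMutableTrack(
        withMediaType: .video,
        preferredTrackID: kCMPersistentTrackID_Invalid) else {
        throw NSError(domain: "Composition", code: -1, userInfo: nil)
    }

    var cursor = CMTime.zero
    for url in urls {
        let asset = AVURLAsset(url: url,
                               options: [AVURLAssetPreferPreciseDurationAndTimingKey: true])
        guard let source = asset.tracks(withMediaType: .video).first else { continue }
        // Insert the source track's own timeRange, not asset.duration:
        // the asset can be slightly longer than its video track, which
        // shows up as a flash of background between clips.
        try track.insertTimeRange(source.timeRange, of: source, at: cursor)
        cursor = CMTimeAdd(cursor, source.timeRange.duration)
    }
    return composition
}
```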

I'm also setting the tracks to zero opacity when they've finished so that they don't cover up subsequent tracks.

    instruction.setOpacity(0.0, at: composition.duration)

Possibly, the zero-opacity instruction is kicking in too early. But this instruction also uses the asset's duration - it's the same data.
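For context, the instruction setup is roughly as follows (a simplified sketch; `ClipInfo` and `makeInstruction` are placeholder names I'm using here, not my actual code). Each clip's track is hidden exactly at the clip's end time, taken from the same time range used for insertion:

```swift
import AVFoundation

// Placeholder pairing of each composition track with the time range
// its clip occupies in the composition.
struct ClipInfo {
    let track: AVCompositionTrack
    let timeRange: CMTimeRange
}

// Sketch: one layer instruction per clip, zeroing each track's opacity
// exactly when its clip ends so it doesn't cover subsequent clips.
func makeInstruction(for clips: [ClipInfo],
                     totalDuration: CMTime) -> AVMutableVideoCompositionInstruction {
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: totalDuration)
    instruction.layerInstructions = clips.map { clip in
        let layer = AVMutableVideoCompositionLayerInstruction(assetTrack: clip.track)
        // Show the track at its start, hide it at its end; both times come
        // from the clip's timeRange, so they match the insertion times.
        layer.setOpacity(1.0, at: clip.timeRange.start)
        layer.setOpacity(0.0, at: clip.timeRange.end)
        return layer
    }
    return instruction
}
```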

I've looked in the debugger and, when there's a gap between two clips, the second clip's start value is exactly equal to the previous clip's duration (the first clip starts at 0), and the zero opacity is applied at that same time value. So it looks like I'm at least feeding the composition the correct data.

How do I remove the gap?

Ian Warburton
  • Could it have to do with the use of multiple video tracks? Also, could it have to do with a lack of precision (see `providesPreciseDurationAndTiming`)? – matt Jun 08 '20 at 17:39
  • I've added `AVURLAssetPreferPreciseDurationAndTimingKey` to the composition and the source assets (the latter is in the code in the question). Multiple composition tracks seems to be correct. See the diagram at the top here... https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_Editing.html#//apple_ref/doc/uid/TP40010188-CH8-SW2 Although, in the section 'Adding Audiovisual Data to a Composition', it uses a single video track. But if you use a single track then how do you target instructions to different clips? – Ian Warburton Jun 08 '20 at 18:00
  • Aha... it says, "Where possible, you should have only one composition track for each media type." and "When presenting media data serially, you should place any media data of the same type on the same composition track." So looks like I was barking up the wrong tree. – Ian Warburton Jun 08 '20 at 20:21
  • I guess what I’m saying is, I’ve composed mutable compositions out of successive video clips and I’ve never had this issue, so try doing it the simple way first just to prove it can work with a few short successive clips, and then start migrating towards your more complex case and your real data and see which feature is breaking things... – matt Jun 08 '20 at 21:27
  • I think two of the clips are corrupt - they're Insta stories I downloaded. Even with all three clips on one track, I get errors when the two clips are next to one another. The first plays ok, then the second is either frozen or glitches, depending on the clip. During the glitching, it repeatedly says... `{OptimizedCabacDecoder::UpdateBitStreamPtr} bitstream parsing error!!!!!!!!!!!!!!` There's no problem with the third clip, taken on my phone. My suspicions were raised when the Insta stories didn't have a rotation despite being in portrait. I will experiment tomorrow to confirm. – Ian Warburton Jun 08 '20 at 23:07
  • Also, the Insta stories weren't even video - they're still images with GIFs pasted on them. lol, perhaps not the best media to start with. – Ian Warburton Jun 08 '20 at 23:15

0 Answers