
I want to read in a video asset on disk and do a bunch of processing on it: things like applying a CICropFilter to each individual frame to cut out a mask, splitting one video up into several smaller videos, and removing frames from the original track to "compress" it down and make it more GIF-like.

I've come up with a few possible avenues:

  1. AVAssetWriter and AVAssetReader

In this scenario, I would read the CMSampleBuffers in from file, perform my desired manipulations, then write them back to a new file using AVAssetWriter (see the first sketch after this list).

  2. AVMutableComposition

Here, given a list of CMTimes, I can easily cut out frames and rewrite the video, or even create a separate composition for each new video I want to produce, then export all of them using AVAssetExportSession (see the second sketch after this list).
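Here's a rough Swift sketch of the first avenue, assuming an H.264 input decoded to BGRA frames; `transcode`, `sourceURL`, and `destinationURL` are just placeholder names of mine:

```swift
import AVFoundation

func transcode(from sourceURL: URL, to destinationURL: URL) throws {
    let asset = AVAsset(url: sourceURL)
    guard let videoTrack = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    // Decode to BGRA pixel buffers so each frame can be wrapped in a CIImage
    // for filtering (e.g. CICrop).
    let readerOutput = AVAssetReaderTrackOutput(
        track: videoTrack,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                         kCVPixelFormatType_32BGRA])
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: destinationURL, fileType: .mp4)
    let writerInput = AVAssetWriterInput(
        mediaType: .video,
        outputSettings: [AVVideoCodecKey: AVVideoCodecType.h264,
                         AVVideoWidthKey: videoTrack.naturalSize.width,
                         AVVideoHeightKey: videoTrack.naturalSize.height])
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    let queue = DispatchQueue(label: "transcode")
    writerInput.requestMediaDataWhenReady(on: queue) {
        while writerInput.isReadyForMoreMediaData {
            guard let sample = readerOutput.copyNextSampleBuffer() else {
                writerInput.markAsFinished()
                writer.finishWriting { /* inspect writer.status here */ }
                return
            }
            // Manipulate `sample` here: filter its pixel buffer, or simply
            // skip the append to drop the frame.
            writerInput.append(sample)
        }
    }
}
```

And a sketch of the second avenue, assuming the CMTimeRanges to cut are already known; `removeRanges` and its parameters are again placeholder names:

```swift
import AVFoundation

func removeRanges(_ ranges: [CMTimeRange], from asset: AVAsset,
                  exportingTo outputURL: URL) throws {
    let composition = AVMutableComposition()
    // Copy the whole source asset into the composition...
    try composition.insertTimeRange(
        CMTimeRange(start: .zero, duration: asset.duration),
        of: asset, at: .zero)
    // ...then cut the unwanted ranges, working back-to-front so earlier
    // removals don't shift the start times of later ones.
    for range in ranges.sorted(by: { $0.start > $1.start }) {
        composition.removeTimeRange(range)
    }

    guard let export = AVAssetExportSession(
        asset: composition,
        presetName: AVAssetExportPresetHighestQuality) else { return }
    export.outputURL = outputURL
    export.outputFileType = .mp4
    export.exportAsynchronously {
        // Check export.status / export.error here.
    }
}
```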

The metrics I'm concerned about are performance and power. That is to say, I'm interested in the method that performs my edits most efficiently while also giving me the flexibility to do what I want. I imagine the kind of video editing I'm describing can be done with both approaches, but I really want the most performant one with the best capabilities.

barndog

1 Answer


In my experience, AVAssetExportSession is slightly more performant than using AVAssetReader and AVAssetWriter for a straightforward format A -> format B conversion. That said, the difference probably isn't large enough to be worth worrying about.

According to [Apple's own documentation](https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/00_Introduction.html#//apple_ref/doc/uid/TP40010188):

> You use an export session to reencode an existing asset into a format defined by one of a small number of commonly-used presets. If you need more control over the transformation, in iOS 4.1 and later you can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects you can, for example, choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process.

Given the nature of your question, it seems like you don't have much experience with the AVFoundation framework yet. My advice is to start with AVAssetExportSession and then, when you hit a roadblock, move deeper down the stack to AVAssetReader and AVAssetWriter.
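For example, here's a quick sketch of that starting point (the file paths are placeholders):

```swift
import AVFoundation

let asset = AVAsset(url: URL(fileURLWithPath: "/path/to/input.mov"))
let outputURL = URL(fileURLWithPath: "/path/to/output.mov")

// See which of the built-in presets this particular asset supports.
let presets = AVAssetExportSession.exportPresets(compatibleWith: asset)
print(presets)

if let session = AVAssetExportSession(asset: asset,
                                      presetName: AVAssetExportPresetMediumQuality) {
    session.outputURL = outputURL
    session.outputFileType = .mov
    session.exportAsynchronously {
        switch session.status {
        case .completed: print("done")
        case .failed: print("failed: \(String(describing: session.error))")
        default: break
        }
    }
}
```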

Eventually, depending on how far you take this, you may even want to write your own Custom Compositor.
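To give a feel for what that involves, here's a bare-bones skeleton of the AVVideoCompositing protocol; `PassthroughCompositor` is just an illustrative name, and the real per-frame blending is left as a stub:

```swift
import AVFoundation

final class PassthroughCompositor: NSObject, AVVideoCompositing {
    // Ask for BGRA buffers on both the input and output side.
    let sourcePixelBufferAttributes: [String: Any]? =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    let requiredPixelBufferAttributesForRenderContext: [String: Any] =
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        // Called when the output size, transform, etc. change; cache what you need.
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        // One request per output frame: pull the source frame for each track,
        // combine them however you like, and hand back the finished buffer.
        guard let trackID = request.sourceTrackIDs.first?.int32Value,
              let frame = request.sourceFrame(byTrackID: trackID) else {
            request.finish(with: NSError(domain: "Compositor", code: -1))
            return
        }
        // This is where you'd blend track A over track B, apply a mask, etc.
        // Here we just pass the first track's frame straight through.
        request.finish(withComposedVideoFrame: frame)
    }
}
```

You hook it up by setting `customVideoCompositorClass = PassthroughCompositor.self` on your AVMutableVideoComposition before playback or export.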

Tim Bull
  • I actually do have a lot of experience using at least the upper level of AVFoundation, not so much with things like custom compositors. If I wanted to read in a movie file and drop all the sample buffers that failed face detection (using `CIDetector`) then write those buffers to a new file (with adjusted time stamps), what would you recommend I use @Tim Bull? AVAssetWriter or a custom compositor? – barndog Apr 19 '16 at 22:52
  • It's not an either or. Start with AVAssetWriter and if you get to the point where you need a custom compositor, you can implement it then. – Tim Bull Apr 19 '16 at 22:57
  • Basically, if you're just applying CIFilters to video frames, then AVAssetWriter is fine. If you're becoming concerned with how you combine frames from track A and track B at the same time stamp in interesting ways, then you're starting to get into custom compositor territory. Specifically answering your question: for what you're talking about, AVAssetWriter should be fine. – Tim Bull Apr 19 '16 at 22:58
  • Hmmm yeah it seems that's the case. Are there any good resources on how to make a custom compositor? I've been looking but I can't find any and that seems like the direction I want to go. – barndog Apr 19 '16 at 23:02
  • My other concern is taking one video and splitting it into multiple videos, something AVAssetWriter seems better suited for than a custom compositor. – barndog Apr 19 '16 at 23:03
  • 1
    I think you're confusing the two. AVAssetWriter is concerned with writing rendered buffers to disk. AVVideoCompositingProtocol https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVVideoCompositing_Protocol/ is implemented by a custom compositor and allows you to access the tracks BEFORE they are rendered into a buffer and control exactly how the tracks are combined. Without a CustomCompositor you can manipulate the resulting output of TrackA and TrackB combined (do what you want with it), but with a CustomCompositor you can control HOW they are combined. – Tim Bull Apr 19 '16 at 23:52
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/109605/discussion-between-tim-bull-and-startupthekid). – Tim Bull Apr 19 '16 at 23:56
  • @TimBull I notice that you have a lot of experience with AVFoundation, and I need help. I use AVVideoCompositionCoreAnimationTool to create a video, but the export is too slow; maybe you can check my question and help me! Thanks a lot! http://stackoverflow.com/questions/41170456/create-video-with-avvideocompositioncoreanimationtool-and-avassetexportsession-s – Carolina Dec 18 '16 at 01:03