
I have a number of images to be shown in a slideshow, with each image displayed for 1 second or more using AVPlayer. I used the following technique before (around iOS 6) and would like to improve on it.

  1. Downscaled each image to a smaller size for playback,

  2. Made a CALayer, created a CAAnimation for each image's appearance and disappearance according to its timing in the slideshow, and added those animations to the CALayer,

  3. Created an AVMutableComposition with a dummy video containing a single black frame and expanded its time range to the duration of the slideshow,

  4. Created an AVSynchronizedLayer and added the CALayer to it to synchronise with playback. For rendering, I created an AVVideoComposition and used its animationTool property (an AVVideoCompositionCoreAnimationTool) to render the slideshow to a video file. A condensed sketch of this setup is shown right after this list.
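
Here is roughly how that pipeline looked, condensed to the export path only (function and parameter names such as `makeLegacySlideshowExport`, `blackClipURL` and `slideDuration` are placeholders):

```swift
import AVFoundation
import UIKit

// Condensed sketch of the old approach: a dummy black-frame clip stretched to the
// slideshow duration, one CALayer per (already downscaled) image with an opacity
// animation, and AVVideoCompositionCoreAnimationTool for rendering to a file.
func makeLegacySlideshowExport(imageURLs: [URL],
                               slideDuration: CMTime,
                               blackClipURL: URL,
                               renderSize: CGSize) throws -> AVAssetExportSession? {
    // Step 3: dummy composition, single black frame stretched to the slideshow duration.
    let composition = AVMutableComposition()
    let blackAsset = AVURLAsset(url: blackClipURL)
    guard let srcTrack = blackAsset.tracks(withMediaType: .video).first,
          let dstTrack = composition.addMutableTrack(withMediaType: .video,
                                                     preferredTrackID: kCMPersistentTrackID_Invalid)
    else { return nil }
    let total = CMTimeMultiply(slideDuration, multiplier: Int32(imageURLs.count))
    try dstTrack.insertTimeRange(CMTimeRange(start: .zero, duration: blackAsset.duration),
                                 of: srcTrack, at: .zero)
    dstTrack.scaleTimeRange(CMTimeRange(start: .zero, duration: blackAsset.duration),
                            toDuration: total)

    // Step 2: one sublayer per image, made visible at its slot in the timeline.
    let parentLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: renderSize)
    let videoLayer = CALayer()
    videoLayer.frame = parentLayer.frame
    parentLayer.addSublayer(videoLayer)
    for (index, url) in imageURLs.enumerated() {
        let imageLayer = CALayer()
        imageLayer.frame = parentLayer.frame
        imageLayer.contents = UIImage(contentsOfFile: url.path)?.cgImage
        imageLayer.opacity = 0
        let appear = CABasicAnimation(keyPath: "opacity")
        appear.fromValue = 0
        appear.toValue = 1
        appear.duration = 0.01
        appear.beginTime = AVCoreAnimationBeginTimeAtZero + Double(index) * slideDuration.seconds
        appear.fillMode = .forwards
        appear.isRemovedOnCompletion = false
        imageLayer.add(appear, forKey: nil)
        parentLayer.addSublayer(imageLayer)
    }

    // Step 4: render the layer tree through the animation tool during export.
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = renderSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: total)
    instruction.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: dstTrack)]
    videoComposition.instructions = [instruction]

    let export = AVAssetExportSession(asset: composition,
                                      presetName: AVAssetExportPresetHighestQuality)
    export?.videoComposition = videoComposition
    return export
}
```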

Now, targeting iOS 13 and later, where AVVideoComposition offers customVideoCompositorClass, can this be improved using Core Image? Specifically, given 100 images stored in the file system, each potentially a full-resolution 12 MP image, would it be correct to create 100 CIImages from their URLs at startup, apply a downsampling transform to each, and render them in a custom compositor at runtime?

I suspect there is a performance hit every time a CIImage backed by a file on disk is loaded and rendered. There may also be a need to preload an image just before its display time and to cancel that load if the user seeks the player to a different time. What is the right/best way to implement this without crashing under memory pressure or stalling on slow image loads? I believe Apple itself must solve some of these issues when it plays an AVMutableComposition in AVPlayer.
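
Here is a rough sketch of what I have in mind: keep only downsampled CIImages alive, decode each file lazily inside the compositor, and prefetch the next slide. Everything here is my own assumption rather than a documented recipe; `SlideshowInstruction`, `SlideshowCompositor`, the 32BGRA pixel format and the ImageIO thumbnail decoding are placeholder choices:

```swift
import AVFoundation
import CoreImage
import ImageIO

/// Custom instruction carrying the URL of the image to show during its time range,
/// plus the next image's URL so the compositor can prefetch it.
final class SlideshowInstruction: NSObject, AVVideoCompositionInstructionProtocol {
    let timeRange: CMTimeRange
    let enablePostProcessing = false
    let containsTweening = false
    let requiredSourceTrackIDs: [NSValue]? = nil
    let passthroughTrackID = kCMPersistentTrackID_Invalid
    let imageURL: URL
    let nextImageURL: URL?

    init(timeRange: CMTimeRange, imageURL: URL, nextImageURL: URL?) {
        self.timeRange = timeRange
        self.imageURL = imageURL
        self.nextImageURL = nextImageURL
    }
}

final class SlideshowCompositor: NSObject, AVVideoCompositing {
    private let ciContext = CIContext(options: [.cacheIntermediates: false])
    private let cache = NSCache<NSURL, CIImage>()   // small working set of downsampled images
    private let loadQueue = DispatchQueue(label: "slideshow.load", qos: .userInitiated)

    var sourcePixelBufferAttributes: [String: Any]? {
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    }
    var requiredPixelBufferAttributesForRenderContext: [String: Any] {
        [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    }

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let instruction = request.videoCompositionInstruction as? SlideshowInstruction,
              let buffer = request.renderContext.newPixelBuffer() else {
            request.finish(with: NSError(domain: "Slideshow", code: -1))
            return
        }
        let maxSide = max(request.renderContext.size.width, request.renderContext.size.height)
        loadQueue.async {
            let image = self.downsampledImage(at: instruction.imageURL, maxPixelSize: maxSide)
            self.ciContext.render(image, to: buffer)
            request.finish(withComposedVideoFrame: buffer)

            // Opportunistically warm the cache for the next slide.
            if let next = instruction.nextImageURL {
                _ = self.downsampledImage(at: next, maxPixelSize: maxSide)
            }
        }
    }

    func cancelAllPendingVideoCompositionRequests() {
        // If in-flight loads were tracked per request, they could be cancelled here
        // (e.g. when the user seeks to a different time).
    }

    /// Decodes at reduced size via ImageIO so the full 12 MP bitmap never has to live in memory.
    private func downsampledImage(at url: URL, maxPixelSize: CGFloat) -> CIImage {
        if let cached = cache.object(forKey: url as NSURL) { return cached }
        let options: [CFString: Any] = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceCreateThumbnailWithTransform: true,
            kCGImageSourceShouldCacheImmediately: true,
            kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
        ]
        guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
              let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
        else { return CIImage(color: .black) }
        let image = CIImage(cgImage: cgImage)
        cache.setObject(image, forKey: url as NSURL)
        return image
    }
}
```

The idea would be to set `videoComposition.customVideoCompositorClass = SlideshowCompositor.self` and supply one `SlideshowInstruction` per image time range. Decoding through `CGImageSourceCreateThumbnailAtIndex` should avoid ever fully decoding the 12 MP originals, and the NSCache plus the next-slide prefetch should keep only a small working set in memory, but I'm not sure whether this is actually better than creating all 100 CIImages up front.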

Deepak Sharma