
I'm writing a video effect in iOS using Metal that requires pixel data of the current frame as well as many previous frames of the original input video. Is there a common pattern or best practice for how to achieve this type of shader that makes the most efficient use of memory and the GPU?

Ian Pearce
  • My current implementation caches previous input frames as CIImages in an array that behaves as a stack. On each compute step, a texture is created for every input frame in the stack, and these textures are passed to my shader as a 2D texture array. – Ian Pearce Mar 09 '19 at 20:18
  • You are going to need to limit the number of previous frames, because with video it takes only a few frames to consume all the memory on the device. Also be very careful with CIImage refs: you don't want to generate intermediate textures that use 16- or 32-bit floats per pixel, as that would 2x or 4x your memory usage. – MoDJ Mar 10 '19 at 22:49
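One common pattern for this is a fixed-capacity ring buffer of pre-allocated textures: allocate N textures up front, overwrite the oldest slot each frame, and bind the set to the compute encoder as a texture array. That avoids both unbounded memory growth and per-frame texture allocation. A minimal sketch of the ring-buffer bookkeeping in plain Swift, under stated assumptions (`FrameRing` is a hypothetical name, and the element type is a placeholder; in a real pipeline the elements would be reusable `MTLTexture` objects):

```swift
// A fixed-capacity ring buffer: keeps the most recent `capacity`
// elements, silently overwriting the oldest. In a Metal pipeline the
// elements would be MTLTextures allocated once and reused, rather
// than textures recreated from CIImages every frame.
// (Illustrative sketch; not an Apple API.)
struct FrameRing<T> {
    private var storage: [T?]
    private var head = 0          // slot where the next element is written
    private(set) var count = 0    // number of valid elements (<= capacity)

    init(capacity: Int) {
        storage = Array(repeating: nil, count: capacity)
    }

    // Overwrite the oldest slot with the newest frame.
    mutating func push(_ element: T) {
        storage[head] = element
        head = (head + 1) % storage.count
        count = min(count + 1, storage.count)
    }

    // Frames ordered newest-first, ready to bind as a texture array.
    func newestFirst() -> [T] {
        var result: [T] = []
        for i in 0..<count {
            let idx = (head - 1 - i + storage.count) % storage.count
            result.append(storage[idx]!)
        }
        return result
    }
}

// Usage: keep the last 4 frames; older frames are dropped automatically.
var ring = FrameRing<Int>(capacity: 4)
for frame in 1...6 { ring.push(frame) }
// ring.newestFirst() is now [6, 5, 4, 3]
```

With real textures, the push step would render the incoming frame directly into the oldest texture (e.g. via `CIContext.render(_:to:commandBuffer:bounds:colorSpace:)`) instead of storing a new object, so GPU memory stays constant at N frames.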

0 Answers