
So let's say I want to make a movie from images. I'm told to use AVAssetWriter, along with an AVAssetWriterInput to append CVPixelBuffer objects. But I'm very confused.

Why do we create the pixel buffers, only to create a bitmap context to make a movie, and then draw using drawViewHierarchyInRect?
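For reference, the pipeline I understand I'm supposed to end up with looks roughly like this (my own sketch pieced together from what I've read, so the function name, the 1280x720 size, and the 30 fps timing are just placeholders I made up):

```swift
import AVFoundation
import CoreVideo

// Sketch only: the 1280x720 size and 30 fps timing are placeholder values.
func writeMovie(from pixelBuffers: [CVPixelBuffer], to outputURL: URL) throws {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1280,
        AVVideoHeightKey: 720
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)

    // The adaptor is the piece that actually accepts CVPixelBuffers on behalf of the input.
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: nil)

    writer.add(input)
    guard writer.startWriting() else {
        throw writer.error ?? CocoaError(.fileWriteUnknown)
    }
    writer.startSession(atSourceTime: .zero)

    for (index, buffer) in pixelBuffers.enumerated() {
        // Real code should use requestMediaDataWhenReady instead of polling like this.
        while !input.isReadyForMoreMediaData { usleep(10_000) }
        let frameTime = CMTime(value: CMTimeValue(index), timescale: 30) // 30 fps
        if !adaptor.append(buffer, withPresentationTime: frameTime) {
            throw writer.error ?? CocoaError(.fileWriteUnknown)
        }
    }

    input.markAsFinished()
    // Asynchronous: the file at outputURL is ready once this handler runs.
    writer.finishWriting { }
}
```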

nynohu
  • You either input frames from the camera or a file and add an AVCaptureVideoDataOutput; this will output CVPixelBuffers, which you can do whatever you want with: apply filters in real time, transform them, anything. – Sean Lintern Jan 27 '17 at 11:03
  • Is this a question about `CVBufferPool`? Your question doesn't reference it at all. – Rhythmic Fistman Jan 29 '17 at 22:22

1 Answer


I'm not sure what information you're basing your question on, but I'll try to explain the basics.

First, CVPixelBuffer is a Core Video object that stores image data. All of the AVFoundation classes that deal with image data use objects of this type. However, a CVPixelBuffer is not a simple object to construct; you can't simply instantiate one from a blob of JPEG or PNG data.

One possible way of creating a CVPixelBuffer is to call CVPixelBufferCreateWithBytes with the bytes obtained from a CGImage's data provider (a CGDataProvider). There are potentially other solutions that might work and/or be more efficient. It depends on what kind of images you're starting with.
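For illustration, a minimal sketch of that approach might look something like the code below. It leans on some assumptions: the helper name is made up, the CGImage's backing bytes are presumed to already be 32-bit BGRA with a bytes-per-row that Core Video will accept, and the CFData has to stay alive as long as the buffer does; a real implementation should check the pixel layout (or redraw the image into a buffer it owns).

```swift
import CoreGraphics
import CoreVideo

// Hypothetical helper, not a library API: wraps a CGImage's backing bytes in a CVPixelBuffer.
// Assumes the image data is already 32-bit BGRA; real code must check bitmapInfo,
// bitsPerPixel and bytesPerRow, and convert if they don't match.
func makePixelBuffer(from image: CGImage) -> CVPixelBuffer? {
    guard let data = image.dataProvider?.data,
          let bytes = CFDataGetBytePtr(data) else { return nil }

    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreateWithBytes(
        kCFAllocatorDefault,
        image.width,
        image.height,
        kCVPixelFormatType_32BGRA,                 // must match the CGImage's actual layout
        UnsafeMutableRawPointer(mutating: bytes),
        image.bytesPerRow,
        nil,                                       // no release callback...
        nil,                                       // ...so `data` must outlive the buffer
        nil,                                       // no extra attributes
        &pixelBuffer)

    return status == kCVReturnSuccess ? pixelBuffer : nil
}
```

Each buffer you produce this way can then be appended to your AVAssetWriterInput (through an AVAssetWriterInputPixelBufferAdaptor) with a presentation timestamp.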

Dave Weston