The documentation on this library (VideoToolbox) is essentially non-existent, so I really need your help here.
Goal: I need H264 encoding (preferably with both audio and video, but video alone is fine and I'll spend a few days getting audio to work too) so I can mux it into an MPEG transport stream.
What I have: a capture session that records and delivers sample buffers to my delegate. The inputs are the back camera and the built-in mic.
A few questions:
A. Is it possible to get the camera to output CMSampleBuffers in H264 format directly? I mean, the 2014 WWDC session shows them being produced by a VTCompressionSession, but while writing my captureOutput I see that I already receive a CMSampleBuffer...
B. How do I set up a VTCompressionSession? How is the session used? Some overarching top-level discussion of this might help people understand what's actually going on in this barely documented library.
Code here (please ask for more if you need it; I'm only putting captureOutput because I don't know how relevant the rest of the code is):
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    println(CMSampleBufferGetFormatDescription(sampleBuffer))
    // The camera hands over uncompressed frames here, so the sample buffer
    // wraps a pixel buffer rather than H264 data
    if let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let pixelBuffer = imageBuffer as CVPixelBufferRef
        let timeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        //Do some VTCompressionSession stuff
    }
}
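For what it's worth, here is my untested sketch of what I think the setup looks like, pieced together from the VideoToolbox headers. The parameter labels follow recent SDKs, and the 1280x720 dimensions and the callback body are my assumptions, not anything from the docs; corrections welcome:

```swift
import AVFoundation
import VideoToolbox

var session: VTCompressionSession?

// Create the session once, e.g. when capture starts.
// Width/height are assumed here; they should match the capture preset.
let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1280,
    height: 720,
    codecType: kCMVideoCodecType_H264,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: { outputRefCon, sourceFrameRefCon, status, infoFlags, sampleBuffer in
        // Called once per encoded frame; if status is noErr, sampleBuffer
        // now holds H264 data instead of raw pixels
        guard status == noErr, let sampleBuffer = sampleBuffer else { return }
        // ...presumably this is where I'd package it for the transport stream...
    },
    refcon: nil,
    compressionSessionOut: &session)

// Then, inside captureOutput, I'd feed each camera frame to the encoder,
// something like:
// VTCompressionSessionEncodeFrame(session!,
//     imageBuffer: pixelBuffer,
//     presentationTimeStamp: timeStamp,
//     duration: CMTime.invalid,
//     frameProperties: nil,
//     sourceFrameRefcon: nil,
//     infoFlagsOut: nil)
```

Is that the right general shape, or am I missing a required property (bitrate, profile, real-time flag) before encoding can start?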
Thanks all!