
I'm trying to develop an iPhone app that uses the camera but keeps only the last few minutes/seconds of what it records. For example: you record for 5 minutes, tap "Save", and only the last 30 seconds are kept. I don't want to actually record five minutes and then chop off the last 30 seconds (that won't work for me). This idea is called "loop recording".

This results in an endless video recording, of which you keep only the last part. The Precorder app does what I want to do (I want to use this feature in another context). I think this could be simulated fairly easily with a circular buffer. I started a project with AVFoundation; it would be awesome if I could somehow redirect the video data into a circular buffer (which I will implement). So far I've only found information on how to write it to a file.
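Roughly, this is the kind of buffer I have in mind (just a sketch with names I made up, not code I have working):

```objc
#import <Foundation/Foundation.h>

// Sketch of the circular-buffer idea: keep timestamped encoded chunks and
// evict anything older than the window (e.g. 30 s). All names are mine.
@interface LoopBuffer : NSObject
@property (nonatomic) NSTimeInterval window;
@property (nonatomic, strong) NSMutableArray *chunks; // of @{ @"t": ..., @"data": ... }
- (void)appendData:(NSData *)data atTime:(NSTimeInterval)t;
@end

@implementation LoopBuffer
- (instancetype)init {
    if ((self = [super init])) {
        _window = 30.0;
        _chunks = [NSMutableArray array];
    }
    return self;
}

- (void)appendData:(NSData *)data atTime:(NSTimeInterval)t {
    [self.chunks addObject:@{ @"t": @(t), @"data": data }];
    // Drop chunks that have fallen out of the window.
    while (self.chunks.count > 0 &&
           t - [[self.chunks[0] objectForKey:@"t"] doubleValue] > self.window) {
        [self.chunks removeObjectAtIndex:0];
    }
}
@end
```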

I know I can chop the video into intervals and save them, but saving one part and restarting the camera to record the next takes time, and it's possible to lose some important moments in the movie.

Any clues on how to redirect the data from the camera would be appreciated.

– Adam Szeptycki

1 Answer


Important! As of iOS 8 you can use VTCompressionSession and have direct access to the NAL units instead of having to dig through the container.
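For reference, a minimal sketch of that setup (the dimensions are placeholders and the callback body is left to you; this is not a complete encoder):

```objc
#import <VideoToolbox/VideoToolbox.h>

// Called once per encoded frame; the CMSampleBuffer holds length-prefixed
// NAL units, so you can push them straight into a ring buffer here.
static void compressionOutput(void *refcon, void *frameRefcon, OSStatus status,
                              VTEncodeInfoFlags flags, CMSampleBufferRef sample)
{
    if (status != noErr || sample == NULL) return;
    // e.g. CMSampleBufferGetDataBuffer(sample) -> copy bytes into your buffer
}

static VTCompressionSessionRef makeEncoder(void)
{
    VTCompressionSessionRef session = NULL;
    // 1280x720 is a placeholder; feed frames in with VTCompressionSessionEncodeFrame.
    VTCompressionSessionCreate(kCFAllocatorDefault, 1280, 720,
                               kCMVideoCodecType_H264, NULL, NULL, NULL,
                               compressionOutput, NULL, &session);
    return session;
}
```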


Well, luckily you can do this, and I'll tell you how, but you're going to have to get your hands dirty with either the MP4 or MOV container. A helpful resource for this (though more MOV-specific) is Apple's QuickTime File Format introduction: http://developer.apple.com/library/mac/#documentation/QuickTime/QTFF/QTFFPreface/qtffPreface.html#//apple_ref/doc/uid/TP40000939-CH202-TPXREF101

First things first: you're not going to be able to start your saved movie from an arbitrary point 30 seconds before the end of the recording; you'll have to start from some I-frame near that 30-second mark. Depending on what your keyframe interval is, it may land several seconds before or after that mark. You could use all I-frames and start from an arbitrary point, but then you'd probably want to re-encode the video afterward, because it will be quite large.
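If you're encoding through AVAssetWriter, you can at least pin the keyframe interval so you know how far apart those candidate cut points are. A sketch, assuming 30 fps capture (so @30 means roughly one I-frame per second):

```objc
// Roughly one IDR frame per second if the camera runs at 30 fps.
NSDictionary *compressionProps = @{ AVVideoMaxKeyFrameIntervalKey: @30 };
NSDictionary *videoSettings = @{ AVVideoCodecKey: AVVideoCodecH264,
                                 AVVideoWidthKey: @1280,
                                 AVVideoHeightKey: @720,
                                 AVVideoCompressionPropertiesKey: compressionProps };
```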

So, knowing that, let's move on.

The first step: when you set up your AVAssetWriter, set its AVAssetWriterInput's expectsMediaDataInRealTime property to YES.
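Something along these lines (reusing the videoSettings dictionary from the sketch above; outputURL is a placeholder for your own file URL):

```objc
NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL // your file URL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];
AVAssetWriterInput *input =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];
input.expectsMediaDataInRealTime = YES; // don't let the writer stall live capture
[writer addInput:input];
```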

In the captureOutput callback you'll be able to fread from the file you are writing to. The first fread will get you a little bit of MP4/MOV header (whichever format you're using), i.e. the 'ftyp' atom, the 'wide' atom, and the beginning of the 'mdat' atom. What you want is inside the 'mdat' section, so the offset you'll start saving data from will be 36 or so.
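A sketch of what that loop might look like (movieFile, readOffset, writerInput, and consumeMdatBytes:length: are names I made up for the plumbing):

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (self.writerInput.isReadyForMoreMediaData) {
        [self.writerInput appendSampleBuffer:sampleBuffer];
    }

    // Tail the file the AVAssetWriter is writing and grab any new bytes.
    fseek(self.movieFile, self.readOffset, SEEK_SET);
    uint8_t chunk[65536];
    size_t n = fread(chunk, 1, sizeof(chunk), self.movieFile);
    if (n > 0) {
        self.readOffset += n;
        // Everything past the ~36-byte header is 'mdat' payload; hand it to
        // the NAL-unit parser feeding the ring buffer.
        [self consumeMdatBytes:chunk length:n];
    }
}
```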

Each read will get you zero or more AVC NAL units. You can find a listing of NAL unit types in ISO/IEC 14496-10, Table 7-1. They will be in a slightly different format than specified in Annex B, but that's fine. Additionally, the MP4/MOV file will contain only IDR slices and non-IDR slices; the IDR slices are the I-frames you're looking to hang onto.

The NAL unit format in the MP4/MOV container is as follows:

4 bytes - Size
[Size] bytes - NALU Data
data[0] & 0x1F - NALU Type
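A sketch of walking those length-prefixed units in a chunk of 'mdat' bytes (parseNALUnits is my own name; buf/len would be the data you just read):

```objc
#include <string.h>     // memcpy
#include <arpa/inet.h>  // ntohl

// Walk a chunk of 'mdat' bytes and pick out the NAL units in it.
static void parseNALUnits(const uint8_t *buf, size_t len)
{
    size_t offset = 0;
    while (offset + 4 <= len) {
        uint32_t naluSize;
        memcpy(&naluSize, buf + offset, 4);     // 4-byte big-endian length prefix
        naluSize = ntohl(naluSize);
        if (offset + 4 + naluSize > len) break; // partial NALU; wait for more data
        const uint8_t *nalu = buf + offset + 4;
        uint8_t naluType = nalu[0] & 0x1F;      // type per ISO/IEC 14496-10 Table 7-1
        if (naluType == 5) {
            // IDR slice: a legal point to start the saved 30-second clip from
        }
        offset += 4 + naluSize;
    }
}
```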

So now you have the data you're looking for. When you go to save this file, you'll have to update the MP4/MOV container with the correct length and sample count: update the 'stsz' atom with the correct size for each sample, update the media headers and track headers with the correct duration of the movie, and so on. What I would recommend is creating a sample container on first run that you can more or less overwrite/augment with the appropriate data for each particular movie. You'll want to do this because the encoders on the various iDevices don't all have the same settings, and the 'avcC' atom contains encoder information.
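As one concrete piece of that bookkeeping, here's a sketch of serializing a fresh 'stsz' atom from per-sample sizes collected while parsing (layout per the QTFF spec; buildStszAtom and sampleSizes are my names):

```objc
#include <arpa/inet.h> // htonl

// Serialize an 'stsz' atom from per-sample sizes gathered while parsing.
static NSData *buildStszAtom(NSArray *sampleSizes) // NSArray of NSNumber
{
    NSMutableData *stsz = [NSMutableData data];
    uint32_t count = (uint32_t)sampleSizes.count;
    uint32_t atomSize = htonl(20 + 4 * count); // 20-byte header + 4 bytes per sample
    [stsz appendBytes:&atomSize length:4];
    [stsz appendBytes:"stsz" length:4];        // atom type
    uint32_t versionAndFlags = 0;              // version 0, flags 0
    [stsz appendBytes:&versionAndFlags length:4];
    uint32_t uniformSize = 0;                  // 0 => per-sample size table follows
    [stsz appendBytes:&uniformSize length:4];
    uint32_t countBE = htonl(count);
    [stsz appendBytes:&countBE length:4];
    for (NSNumber *size in sampleSizes) {
        uint32_t sizeBE = htonl(size.unsignedIntValue);
        [stsz appendBytes:&sizeBE length:4];
    }
    // Splice this over the placeholder 'stsz' in your template container.
    return stsz;
}
```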

You don't really need to know much about the AVC stream itself in this case, so you'll probably want to concentrate your experimenting on updating your chosen container format correctly. Good luck.

– jgh
  • Great answer! Can you please provide some demo code for this? It would be very helpful. – Salman Khakwani Dec 17 '14 at 12:55
  • https://github.com/jgh-/VideoCore/blob/master/transforms/Apple/H264Encode.mm shows an implementation of `VTCompressionSession`. Instead of pushing the NAL units down the pipe as this code sample does in `H264Encode::compressionSessionOutput`, you could put them into a ring buffer for storage of 30 seconds or whatever. – jgh Dec 17 '14 at 17:50
  • @jgh I want to add a loop-recording feature to my application, but with a time-based buffer: it should keep 60 seconds of data, clearing old data accordingly, and I should be able to capture the video between two specific positions in the buffer (say from 15 seconds to 40 seconds). Is there any way to make a time-based buffer like that? – Mr.G Mar 17 '15 at 09:33
  • @Mr.G you could do this relatively easily with VTCompressionSession, since it will feed you sample buffers containing all of the slices of each frame (so you don't have to parse through the slice headers to count the frames). You'll just have to make sure you grab your frames starting with an IDR frame (NALU type 5), which means they'd start at an interval of whatever you set your keyframe interval to be (i.e. every 2 seconds, every 4 seconds, etc.). – jgh Mar 27 '15 at 06:58
  • @jgh thanks for the reply; can I use https://github.com/jgh-/VideoCore/blob/master/transforms/Apple/H264Encode.mm (your sample) for this? – Mr.G Mar 27 '15 at 07:30
  • I have a question: let's say I make a ring buffer and put 60 seconds of media data in it. Wouldn't that be a problem for the phone's memory, since video frames contain a large number of bytes? @jgh – Mr.G Mar 28 '15 at 04:44
  • Depends on the data rate of the video. For example, if you're encoding at 4 Mbps, 60 seconds of video works out to roughly 30 MB (4 Mbps / 8 = 500 KB/s; 500 KB/s × 60 s = 30,000 KB ≈ 30 MB). – jgh Mar 28 '15 at 17:40
  • @jgh how can I record the streamed video at full length? When I uncomment the code `m_muxer = std::make_shared<videocore::Apple::MP4Multiplexer>(); videocore::Apple::MP4SessionParameters_t parms(0.); std::string file = [[[self applicationDocumentsDirectory] stringByAppendingString:@"/output.mp4"] UTF8String]; parms.setData(file, self.fps, self.videoSize.width, self.videoSize.height); m_muxer->setSessionParameters(parms); m_aacSplit->setOutput(m_muxer); m_h264Split->setOutput(m_muxer);` it shows the error "Field type 'videocore::Apple::MP4Multiplexer' is an abstract class". – g212gs Aug 11 '15 at 23:41