I am capturing live video frames on an iPhone and processing them with OpenCV to track color blobs in real time. While trying to minimize the processing time, I discovered that the pure per-frame processing time depends on the preset FPS rate. I would have expected the time OpenCV spends on an image to depend only on the image size and the algorithm itself.
setupCaptureSession:
- (void)setupCaptureSession
{
    if ( _captureSession )
    {
        return;
    }
    _captureSession = [[AVCaptureSession alloc] init];
    videoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_captureSession];

    // Video input from the front camera.
    AVCaptureDevice *videoDevice = [self frontCamera];
    NSError *videoDeviceError = nil;
    AVCaptureDeviceInput *videoIn = [[AVCaptureDeviceInput alloc] initWithDevice:videoDevice error:&videoDeviceError];
    if ( [_captureSession canAddInput:videoIn] )
    {
        [_captureSession addInput:videoIn];
        _videoDevice = videoDevice;
    }
    else
    {
        [self handleNonRecoverableCaptureSessionRuntimeError:videoDeviceError];
        return;
    }

    // Video data output delivering BGRA frames to the processing queue.
    AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
    videoOut.videoSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    [videoOut setSampleBufferDelegate:self queue:_videoDataOutputQueue];
    videoOut.alwaysDiscardsLateVideoFrames = NO;
    if ( [_captureSession canAddOutput:videoOut] )
    {
        [_captureSession addOutput:videoOut];
    }
    _videoConnection = [videoOut connectionWithMediaType:AVMediaTypeVideo];
    _videoBufferOrientation = _videoConnection.videoOrientation;

    [self configureCameraForFrameRate:videoDevice];
}
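configureCameraForFrameRate: is where the preset FPS is applied; it essentially locks the device and pins the frame duration, along these lines (a sketch; desiredFPS stands in for the configured rate, and the real body may differ):
- (void)configureCameraForFrameRate:(AVCaptureDevice *)videoDevice
{
    NSError *error = nil;
    if ( [videoDevice lockForConfiguration:&error] )
    {
        // Pinning min and max frame duration to the same value fixes the FPS.
        CMTime frameDuration = CMTimeMake( 1, (int32_t)desiredFPS );
        videoDevice.activeVideoMinFrameDuration = frameDuration;
        videoDevice.activeVideoMaxFrameDuration = frameDuration;
        [videoDevice unlockForConfiguration];
    }
}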
captureOutput:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription( sampleBuffer );
    if ( self.outputVideoFormatDescription == NULL )
    {
        [self setupVideoPipelineWithInputFormatDescription:formatDescription];
    }
    else
    {
        // Convert the sample buffer only when the pipeline is ready to process it.
        UIImage *sourceUIImage = [self imageFromSampleBuffer:sampleBuffer];
        processedFrameNumber++;
        @synchronized( _renderer )
        {
            [_renderer processImageWithOpenCV:sourceUIImage :processingData];
        }
    }
}
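imageFromSampleBuffer: follows Apple's standard BGRA-to-UIImage conversion, roughly along these lines (a sketch; the actual body is not shown above):
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
    CVPixelBufferLockBaseAddress( imageBuffer, kCVPixelBufferLock_ReadOnly );
    void *baseAddress = CVPixelBufferGetBaseAddress( imageBuffer );
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow( imageBuffer );
    size_t width = CVPixelBufferGetWidth( imageBuffer );
    size_t height = CVPixelBufferGetHeight( imageBuffer );
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // 32BGRA maps to little-endian 32-bit with premultiplied first alpha.
    CGContextRef context = CGBitmapContextCreate( baseAddress, width, height, 8, bytesPerRow,
        colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst );
    CGImageRef cgImage = CGBitmapContextCreateImage( context );
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease( cgImage );
    CGContextRelease( context );
    CGColorSpaceRelease( colorSpace );
    CVPixelBufferUnlockBaseAddress( imageBuffer, kCVPixelBufferLock_ReadOnly );
    return image;
}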
At the beginning and at the end of processImageWithOpenCV I placed, respectively:
timeBeforeProcess = [NSDate timeIntervalSinceReferenceDate];
and
timeAfterProcess = [NSDate timeIntervalSinceReferenceDate];
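For what it is worth, the same measurement could also be taken with a monotonic clock, which rules out wall-clock adjustments as a factor; a sketch:
#import <QuartzCore/QuartzCore.h>   // for CACurrentMediaTime()

CFTimeInterval timeBeforeProcess = CACurrentMediaTime();
// ... OpenCV processing ...
CFTimeInterval timeAfterProcess = CACurrentMediaTime();
NSLog( @"frame processing time: %.1f ms", (timeAfterProcess - timeBeforeProcess) * 1000.0 );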
For FPS presets of 20, 30, 40 and 60 I measured the following average per-frame processing times (NSTimeInterval values are in seconds, so these correspond to 14.0, 8.9, 7.4 and 7.2 ms):

FPS preset    processing time (s)    processing time (ms)
20            0.0140                 14.0
30            0.0089                 8.9
40            0.0074                 7.4
60            0.0072                 7.2
For illustration, this graph shows how the frame processing time decreases as the FPS increases:
[graph: frame processing time vs. preset FPS]
Does anyone have an explanation for this behavior?
Thank you.