
I am capturing live video frames on iPhone and processing them with OpenCV to track color blobs in real time. While trying to minimize the processing time, I discovered that the pure per-frame processing time depends on the preset FPS rate. I would have expected the processing time of an image inside an OpenCV module to depend only on the image size and on the OpenCV algorithm itself.

setupCaptureSession:

- (void)setupCaptureSession
{
  if ( _captureSession )
  {
     return;
  }
  _captureSession = [[AVCaptureSession alloc] init];
  videoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_captureSession];
  AVCaptureDevice *videoDevice = [self frontCamera];
  NSError *videoDeviceError = nil;
  AVCaptureDeviceInput *videoIn = [[AVCaptureDeviceInput alloc] initWithDevice:videoDevice error:&videoDeviceError];
  if ( [_captureSession canAddInput:videoIn] )
  {
    [_captureSession addInput:videoIn];
    _videoDevice = videoDevice;
  }
  else
  {
    [self handleNonRecoverableCaptureSessionRuntimeError:videoDeviceError];
    return;
  }
  AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
  videoOut.videoSettings = [NSDictionary dictionaryWithObject:
                          [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
  [videoOut setSampleBufferDelegate:self queue:_videoDataOutputQueue];
  videoOut.alwaysDiscardsLateVideoFrames = NO;
  if ( [_captureSession canAddOutput:videoOut] )
  {
     [_captureSession addOutput:videoOut];
  }
  _videoConnection = [videoOut connectionWithMediaType:AVMediaTypeVideo];
  _videoBufferOrientation = _videoConnection.videoOrientation;
  [self configureCameraForFrameRate:videoDevice];
  return;
}
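
The method configureCameraForFrameRate: is not shown in the question; for reference, here is a minimal sketch of how the preset FPS is typically applied to the capture device (the desiredFrameRate ivar and the clamping loop are assumptions, not code from the question):

- (void)configureCameraForFrameRate:(AVCaptureDevice *)videoDevice
{
  // desiredFrameRate is assumed to hold the preset FPS (20, 30, 40 or 60).
  CMTime frameDuration = CMTimeMake( 1, (int32_t)desiredFrameRate );
  NSError *error = nil;
  if ( [videoDevice lockForConfiguration:&error] )
  {
    // Only apply the duration if the active format actually supports that frame rate.
    for ( AVFrameRateRange *range in videoDevice.activeFormat.videoSupportedFrameRateRanges )
    {
      if ( desiredFrameRate >= range.minFrameRate && desiredFrameRate <= range.maxFrameRate )
      {
        videoDevice.activeVideoMinFrameDuration = frameDuration;
        videoDevice.activeVideoMaxFrameDuration = frameDuration;
        break;
      }
    }
    [videoDevice unlockForConfiguration];
  }
}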

captureOutput:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
  UIImage *sourceUIImage = [self imageFromSampleBuffer:sampleBuffer];
  CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription( sampleBuffer );
  if ( self.outputVideoFormatDescription == NULL )
  {
    [self setupVideoPipelineWithInputFormatDescription:formatDescription];
  }
  else
  {
    processedFrameNumber++;
    @synchronized( _renderer )
    {
      [_renderer processImageWithOpenCV:sourceUIImage :processingData];
    }
  }
}
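
The method imageFromSampleBuffer: is not shown either; a sketch of the usual conversion for the 32BGRA format requested above (based on Apple's standard sample code, not taken from the question). Note that this conversion runs once per frame, on top of the OpenCV processing being timed:

- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
  CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
  CVPixelBufferLockBaseAddress( imageBuffer, kCVPixelBufferLock_ReadOnly );

  void *baseAddress = CVPixelBufferGetBaseAddress( imageBuffer );
  size_t bytesPerRow = CVPixelBufferGetBytesPerRow( imageBuffer );
  size_t width = CVPixelBufferGetWidth( imageBuffer );
  size_t height = CVPixelBufferGetHeight( imageBuffer );

  // BGRA is little-endian 32-bit with the alpha component first.
  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
  CGContextRef context = CGBitmapContextCreate( baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst );
  CGImageRef cgImage = CGBitmapContextCreateImage( context );
  UIImage *image = [UIImage imageWithCGImage:cgImage];

  CGImageRelease( cgImage );
  CGContextRelease( context );
  CGColorSpaceRelease( colorSpace );
  CVPixelBufferUnlockBaseAddress( imageBuffer, kCVPixelBufferLock_ReadOnly );
  return image;
}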

At the beginning and at the end of processImageWithOpenCV I placed, respectively:

timeBeforeProcess = [NSDate timeIntervalSinceReferenceDate];

and

timeAfterProcess = [NSDate timeIntervalSinceReferenceDate];
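
Put together, the measurement brackets the OpenCV work like this (a sketch; timeBeforeProcess and timeAfterProcess are assumed to be NSTimeInterval ivars, and the second parameter type is a placeholder):

- (void)processImageWithOpenCV:(UIImage *)sourceUIImage :(id)processingData
{
  timeBeforeProcess = [NSDate timeIntervalSinceReferenceDate];

  // ... OpenCV color-blob tracking on sourceUIImage ...

  timeAfterProcess = [NSDate timeIntervalSinceReferenceDate];
  // timeIntervalSinceReferenceDate returns seconds, so the difference is in seconds.
  NSLog( @"frame processing time: %f", timeAfterProcess - timeBeforeProcess );
}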

For FPS values of 20, 30, 40 and 60 I measured, respectively, the following frame processing times in ms: 0.0140, 0.0089, 0.0074 and 0.0072.

For illustration, this graph shows how the frame processing time decreases as the FPS increases:

[Graph: frame processing time vs. preset FPS]

Do you have any explanation?

Thank you.

Patrick

1 Answer


Could iOS be scaling resources to preserve battery or reduce heat? The lower frame rates might be handled adequately with less CPU/GPU/..., but not the higher ones. A number of ARM vendors have played with 'big + little' architectures.

There might be a log or performance counter you can query, but if not, can you try newer and older iPhones? I would suspect that an iPhone 8 or 9 is more aggressive than a 5 or 6. Be aware that the fix for the sudden battery-discharge issue might interfere with your experiment on these devices.
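
One thing the app can query directly is NSProcessInfo, which exposes the device's thermal state (iOS 11 and later) and the Low Power Mode flag; logging these next to the timing measurements would show whether the system is throttling. A minimal sketch (not from the original answer):

#import <Foundation/Foundation.h>

static void LogPowerState( void )
{
  NSProcessInfo *info = [NSProcessInfo processInfo];
  // thermalState is Nominal, Fair, Serious or Critical (iOS 11+).
  NSLog( @"thermal state: %ld, low power mode: %d",
         (long)info.thermalState, info.isLowPowerModeEnabled );
}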

mevets
  • Sorry for the late reply, I am on a trekking trip and do not have an Internet connection every day. Thank you mevets for your suggestions; regarding the battery charge, most of my app tests are run with the iPhone connected to my MacBook, so the battery-charge effect should be minimal. I am using an iPhone X, by the way. – Brian Scherady May 04 '18 at 06:56