
I am trying to do real-time image processing with an iPhone 6 at 240fps. The problem is that when I capture video at that speed, I can't process the image fast enough, since I need to sample each pixel to get an average. Reducing the image resolution would easily solve this problem, but I'm not able to figure out how to do this. The available AVCaptureDeviceFormats have options with 192x144 px, but at 30fps. All 240fps options have larger dimensions. Here is how I am sampling the data:

- (void)startDetection
{
    const int FRAMES_PER_SECOND = 240;
    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPresetLow;

    // Retrieve the back camera
    NSArray *devices = [AVCaptureDevice devices];
    AVCaptureDevice *captureDevice;
    for (AVCaptureDevice *device in devices)
    {
        if ([device hasMediaType:AVMediaTypeVideo])
        {
            if (device.position == AVCaptureDevicePositionBack)
            {
                captureDevice = device;
                break;
            }
        }
    }

    NSError *error;
    AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:&error];

    if (error)
    {
        NSLog(@"%@", error);
        return;
    }

    [self.session addInput:input];

    // Find the max frame rate we can get from the given device
    AVCaptureDeviceFormat *currentFormat;
    for (AVCaptureDeviceFormat *format in captureDevice.formats)
    {
        NSArray *ranges = format.videoSupportedFrameRateRanges;
        AVFrameRateRange *frameRates = ranges[0];

        // Find the lowest resolution format at the frame rate we want.
        if (frameRates.maxFrameRate == FRAMES_PER_SECOND && (!currentFormat || (CMVideoFormatDescriptionGetDimensions(format.formatDescription).width < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).width && CMVideoFormatDescriptionGetDimensions(format.formatDescription).height < CMVideoFormatDescriptionGetDimensions(currentFormat.formatDescription).height)))
        {
            currentFormat = format;
        }
    }

    // Tell the device to use the max frame rate.
    [captureDevice lockForConfiguration:nil];
    captureDevice.torchMode=AVCaptureTorchModeOn;
    captureDevice.activeFormat = currentFormat;
    captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
    captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, FRAMES_PER_SECOND);
    [captureDevice setVideoZoomFactor:4];
    [captureDevice unlockForConfiguration];

    // Set the output
    AVCaptureVideoDataOutput* videoOutput = [[AVCaptureVideoDataOutput alloc] init];

    // create a queue to run the capture on
    dispatch_queue_t captureQueue = dispatch_queue_create("captureQueue", NULL);

    // setup our delegate
    [videoOutput setSampleBufferDelegate:self queue:captureQueue];

    // configure the pixel format
    videoOutput.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
                                 nil];
    videoOutput.alwaysDiscardsLateVideoFrames = NO;
    [self.session addOutput:videoOutput];

    // Start the video session
    [self.session startRunning];
}
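
For reference, here is a simplified sketch of what the sampling in the delegate callback might look like. It assumes the 32BGRA pixel format configured above; averaging the red channel is just an example, not my exact processing code:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    uint8_t *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);

    // BGRA layout: B = 0, G = 1, R = 2, A = 3. Sum the red channel.
    uint64_t redSum = 0;
    for (size_t y = 0; y < height; y++)
    {
        uint8_t *row = baseAddress + y * bytesPerRow;
        for (size_t x = 0; x < width; x++)
        {
            redSum += row[x * 4 + 2];
        }
    }
    double averageRed = (double)redSum / (double)(width * height);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    // ... feed averageRed into the analysis ...
}
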
  • I don't think you can force 240fps on a device; it is the max, and the images are processed at the same time, so if the processing is heavy the frames per second will drop. I suggest you use the `imageFromSampleBuffer` function from Apple's sample code to get a UIImage, which you can resize before processing; this should speed up the image processing. – gabbler Mar 15 '15 at 09:55
  • Interesting idea. I hadn't thought of resizing the image after it had already been retrieved, but before I do my analysis. I will give that a try and see what performance impact that has. Hopefully the image resize would be a graphics card operation, so the CPU won't be affected much. – lehn0058 Mar 16 '15 at 20:17
  • That's what I do; I resize with CG when I retrieve from the pixel buffer (a sketch is below, after these comments), because I want 640x480@60fps and Apple doesn't support it. Curious, why do you want to do image processing at that inhumane speed? – aledalgrande Mar 18 '15 at 21:21
  • How do you resize with CG? I am trying to improve a heart rate detection algorithm that uses the iPhone 6's camera. Most medical devices seem to record between 180 and 220 fps. 240 fps should make the data resolution on par with those devices. – lehn0058 Mar 19 '15 at 11:34
  • I've tried to do the same but I gave up; it seems impossible to grab frames from the output buffer at high fps: http://stackoverflow.com/questions/20738563/nsoperationqueue-concurrent-operation-and-thread – Andrea Mar 20 '15 at 20:49
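
Following up on the CG suggestion in the comments, here is a rough sketch of downscaling a frame with Core Graphics before analysis. It assumes the 32BGRA buffers configured in the question, and the 192x144 target size is just an example:

- (CGImageRef)downscaledImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    // Wrap the BGRA pixel data in a bitmap context and turn it into a CGImage.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef fullContext = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                     CVPixelBufferGetWidth(pixelBuffer),
                                                     CVPixelBufferGetHeight(pixelBuffer),
                                                     8,
                                                     CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                     colorSpace,
                                                     kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef fullImage = CGBitmapContextCreateImage(fullContext);

    // Draw into a much smaller context to downscale before sampling pixels.
    CGContextRef smallContext = CGBitmapContextCreate(NULL, 192, 144, 8, 0, colorSpace,
                                                      kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(smallContext, CGRectMake(0, 0, 192, 144), fullImage);
    CGImageRef smallImage = CGBitmapContextCreateImage(smallContext);

    CGImageRelease(fullImage);
    CGContextRelease(fullContext);
    CGContextRelease(smallContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    return smallImage; // caller is responsible for releasing this
}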

1 Answer


Try the GPUImage library. Each filter has a forceProcessingAtSize: method. After forcing the resize on the GPU, you can retrieve the data with GPUImageRawDataOutput.

I got 60fps while processing the image on the CPU with this method.
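
A minimal sketch of that pipeline (the 192x144 size and the brightness filter are just placeholders; keep strong references to the camera, filter, and output in a real app):

GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                        cameraPosition:AVCaptureDevicePositionBack];

GPUImageBrightnessFilter *filter = [[GPUImageBrightnessFilter alloc] init];
[filter forceProcessingAtSize:CGSizeMake(192, 144)];    // downscale on the GPU

GPUImageRawDataOutput *rawOutput =
    [[GPUImageRawDataOutput alloc] initWithImageSize:CGSizeMake(192, 144)
                                 resultsInBGRAFormat:YES];

__weak GPUImageRawDataOutput *weakOutput = rawOutput;
[rawOutput setNewFrameAvailableBlock:^{
    [weakOutput lockFramebufferForReading];
    GLubyte *bytes = [weakOutput rawBytesForImage];
    NSUInteger bytesPerRow = [weakOutput bytesPerRowInOutput];
    // ... average the 192x144 BGRA pixels here on the CPU ...
    [weakOutput unlockFramebufferAfterReading];
}];

[videoCamera addTarget:filter];
[filter addTarget:rawOutput];
[videoCamera startCameraCapture];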
