
I am writing an app that does some real-time video processing using an AVCaptureSession, with an AVCaptureVideoDataOutput as the output and an AVCaptureDeviceInput as the input. I would now like to use a video file as the input instead (the processing no longer needs to happen in real time).

Is it possible to use a video file as an input to the AVCaptureSession instead of the camera? If it is not possible, what is the best way to process a video file using OpenCV's video capture on iOS (either simultaneously or sequentially)?

Pk boss
kunal

3 Answers


Since you have access to the raw video frames (from your AVCaptureVideoDataOutput), you can convert each frame to a cv::Mat object (an OpenCV matrix representing an image) and then do your image processing on each individual frame.

Check out https://developer.apple.com/library/ios/qa/qa1702/_index.html for a real-time example using the camera; you can convert a UIImage to a cv::Mat using cvMatFromUIImage.
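
For reference, here is a minimal sketch of the relevant delegate callback, assuming the AVCaptureVideoDataOutput is configured to deliver kCVPixelFormatType_32BGRA frames (the method signature comes from the AVCaptureVideoDataOutputSampleBufferDelegate protocol; the processing step is a placeholder):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    // Wrap the BGRA pixel data in a cv::Mat header (no pixels are copied)
    cv::Mat mat((int)CVPixelBufferGetHeight(pixelBuffer),
                (int)CVPixelBufferGetWidth(pixelBuffer),
                CV_8UC4,
                CVPixelBufferGetBaseAddress(pixelBuffer),
                CVPixelBufferGetBytesPerRow(pixelBuffer));

    // ... run your OpenCV processing on `mat` here ...

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}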

Kevin Le

So it turns out it's not too difficult to do. The basic outline is:

  1. Create a cv::VideoCapture to read from a file.
  2. Create a CALayer to receive and display each frame.
  3. Run a method at a given rate that reads and processes each frame.
  4. Once done processing, convert each cv::Mat to a CGImageRef and display it on the CALayer.

The actual implementation is as follows:

Step 1: Create cv::VideoCapture

// `capture` is assumed to be a cv::VideoCapture instance variable
std::string filename = "/Path/To/Video/File";
capture = cv::VideoCapture(filename);
if(!capture.isOpened()) NSLog(@"Could not open %s", filename.c_str());
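
If you'd rather not hard-code the playback rate used in Step 3, you can read the file's native frame rate from the capture (a sketch using OpenCV 2.x's CV_CAP_PROP_FPS property; some containers report 0, hence the fallback):

double fps = capture.get(CV_CAP_PROP_FPS);
if (fps <= 0) fps = 30;  // fall back when the file reports no frame rate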

Step 2: Create the Output CALayer

self.previewLayer = [CALayer layer];
self.previewLayer.frame = CGRectMake(0, 0, width, height);  // width/height = video frame size
[self.view.layer addSublayer:self.previewLayer];
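
Optionally (not part of the original answer), you can have the layer letterbox frames whose aspect ratio differs from the layer's:

self.previewLayer.contentsGravity = kCAGravityResizeAspect;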

Step 3: Create Processing Loop w/ GCD

int kFPS = 30;

dispatch_queue_t queue = dispatch_queue_create("timer", 0);
self.timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
// Note: use 1.0/kFPS, not 1/kFPS -- integer division would yield an interval of 0
dispatch_source_set_timer(self.timer, dispatch_walltime(NULL, 0), (1.0/kFPS) * NSEC_PER_SEC, (0.5/kFPS) * NSEC_PER_SEC);

dispatch_source_set_event_handler(self.timer, ^{
    dispatch_async(dispatch_get_main_queue(), ^{
        [self processNextFrame];
    });
});

dispatch_resume(self.timer);

Step 4: Processing Method

-(void)processNextFrame {
    /* Read; cancel the timer once the file runs out of frames */
    cv::Mat frame;
    if (!capture.read(frame)) {
        dispatch_source_cancel(self.timer);
        return;
    }

    /* Process */
    ...

    /* Convert and Output to CALayer */
    cvtColor(frame, frame, CV_BGR2RGB);  // cv::COLOR_BGR2RGB in newer OpenCV
    NSData *data = [NSData dataWithBytes:frame.data
                                  length:frame.elemSize()*frame.total()];

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = (frame.elemSize() == 3) ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst;
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(frame.cols,
                                        frame.rows,
                                        8,
                                        8 * frame.elemSize(),
                                        frame.step[0],
                                        colorSpace,
                                        bitmapInfo,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);

    self.previewLayer.contents = (__bridge id)imageRef;

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);  // the provider must be released too, or it leaks
    CGColorSpaceRelease(colorSpace);
}
pasawaya

I implemented pasawaya's solution, but the previewLayer was not being refreshed. I found where the problem came from:

In Step 4, replace:

self.previewLayer.contents = (__bridge id)imageRef;

With:

[self performSelectorOnMainThread:@selector(displayFrame:) withObject:(__bridge id)imageRef waitUntilDone:YES];

And add:

- (void)displayFrame:(CGImageRef)frame {
    _previewLayer.contents = (__bridge id)frame;
    [CATransaction flush];  // commit the pending layer change so it actually redraws
}
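
For what it's worth, +[CATransaction flush] forces Core Animation to commit the pending implicit transaction right away, which is presumably why the layer redraws here when the plain property assignment alone did not.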

Hope this helps!

OldNick