
I have an AR app where the view constantly shows what the back camera is seeing and sends each frame to a Vision request for analysis.

When an object is identified, I would like to capture that particular last frame, save it as a regular UIImage, and pass it down the segue chain to the final view controller, where I display it. I am having trouble capturing that last frame and showing it.

Here is what I tried so far:

When the image is recognized with high enough confidence, I attempt to retrieve the current last frame from the CVPixelBuffer and save it in a local variable, which is later passed in a segue to subsequent view controllers.

Is this the correct way of doing it, or do I have to add a second output to the session (a photo output in addition to the video data output)?

// Attempting to get the current last frame of captured video.
guard let pixelBuffer = self.currentlyAnalyzedPixelBuffer else { return }

// Propagate any image attachments (color space, etc.) to the CIImage.
let attachments = CMCopyDictionaryOfAttachments(allocator: kCFAllocatorDefault,
                                                target: pixelBuffer,
                                                attachmentMode: kCMAttachmentMode_ShouldPropagate)

let ciImage = CIImage(cvImageBuffer: pixelBuffer,
                      options: attachments as? [CIImageOption: Any])

self.image = UIImage(ciImage: ciImage)
Anand
Ayrad

2 Answers


In practice, there is a good chance you won't get exactly the output you need. You never know whether the last captured frame really shows what you want: for example, if the camera was in motion, the frame you grab may be blurred or otherwise unusable.

I may be wrong, but my suggestion would be to keep an array of the last 10 frames (or pixel buffers). When Vision identifies your object, run the check over that array and pick the frame with the highest confidence, or show the user a collection view so they can choose the correct image themselves.
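A minimal sketch of that idea. `FrameRecognizer`, `store`, and `bestFrame` are illustrative names, not part of the asker's code; the assumption is that `store` is called from your `captureOutput(_:didOutput:from:)` delegate method:

```swift
import CoreVideo
import Vision

final class FrameRecognizer {
    /// Ring buffer of the most recent pixel buffers (newest last).
    private var recentFrames: [CVPixelBuffer] = []
    private let capacity = 10

    /// Call this for every frame delivered by the video data output.
    func store(_ pixelBuffer: CVPixelBuffer) {
        recentFrames.append(pixelBuffer)
        if recentFrames.count > capacity {
            recentFrames.removeFirst()
        }
    }

    /// Re-run the classifier over the stored frames and return the one
    /// that matches `label` with the highest confidence.
    func bestFrame(using model: VNCoreMLModel, matching label: String) -> CVPixelBuffer? {
        var best: (buffer: CVPixelBuffer, confidence: VNConfidence)?
        for buffer in recentFrames {
            let request = VNCoreMLRequest(model: model)
            try? VNImageRequestHandler(cvPixelBuffer: buffer, options: [:]).perform([request])
            if let top = (request.results as? [VNClassificationObservation])?.first,
               top.identifier == label,
               top.confidence > (best?.confidence ?? 0) {
                best = (buffer, top.confidence)
            }
        }
        return best?.buffer
    }
}
```

One caveat: buffers delivered by `AVCaptureVideoDataOutput` come from a fixed pool, so retaining several of them can starve the capture session; you may need to copy the buffers you keep.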

Hope it helps.

Prashant Tukadiya

The current last frame may not be the one that triggered the successful image recognition, so you may want to hold on to the pixel buffer that triggered it.
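One way to do that is to capture the buffer in the Vision completion handler at the moment the confidence check passes. This is a sketch under stated assumptions: `Recognizer`, `analyzeCurrentFrame`, and the 0.9 threshold are illustrative, and `currentlyAnalyzedPixelBuffer` stands in for the asker's property of the same name:

```swift
import CoreVideo
import Vision

final class Recognizer {
    private let model: VNCoreMLModel
    /// Frame currently being analyzed; set from captureOutput(_:didOutput:from:).
    var currentlyAnalyzedPixelBuffer: CVPixelBuffer?
    /// The exact frame that triggered the successful recognition.
    private(set) var matchingPixelBuffer: CVPixelBuffer?

    init(model: VNCoreMLModel) { self.model = model }

    func analyzeCurrentFrame() {
        guard let buffer = currentlyAnalyzedPixelBuffer else { return }
        let request = VNCoreMLRequest(model: model) { [weak self] request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first,
                  top.confidence > 0.9 else { return }
            // Hold on to the buffer that produced the match, not whatever
            // frame happens to be current when the segue fires.
            self?.matchingPixelBuffer = buffer
        }
        try? VNImageRequestHandler(cvPixelBuffer: buffer, options: [:]).perform([request])
    }
}
```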

Then you can get the UIImage from the pixelBuffer like so:

import VideoToolbox

var cgImage: CGImage?
VTCreateCGImageFromCVPixelBuffer(matchingPixelBuffer, options: nil, imageOut: &cgImage)

// The conversion can fail, so unwrap before building the UIImage.
if let cgImage = cgImage {
    let uiImage = UIImage(cgImage: cgImage)
}
atineoSE