
I am currently tracking faces using ARKit. ARKit tracks the world transform of the face and exposes the positions of the user's eyes via the face anchor's leftEyeTransform and rightEyeTransform.
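For context, here is a minimal sketch (the helper name `eyeWorldPositions` is my own) of how those transforms relate to world space: the eye transforms are expressed relative to the face anchor, so they have to be composed with the anchor's own transform first.

    import ARKit

    // Sketch: compose the anchor's transform with each eye transform to get
    // world-space positions (columns.3 of a transform holds the translation).
    func eyeWorldPositions(for faceAnchor: ARFaceAnchor) -> (left: simd_float3, right: simd_float3) {
        let left  = faceAnchor.transform * faceAnchor.leftEyeTransform
        let right = faceAnchor.transform * faceAnchor.rightEyeTransform
        return (simd_float3(left.columns.3.x,  left.columns.3.y,  left.columns.3.z),
                simd_float3(right.columns.3.x, right.columns.3.y, right.columns.3.z))
    }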

I can get the live pixel buffer via

        // Inside renderer(_:didUpdate:for:): keep the face node in sync
        // and grab the current frame's camera image.
        faceNode.transform = node.transform
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        update(withFaceAnchor: faceAnchor)

        // currentFrame can be nil (e.g. before the session has produced a
        // frame), so unwrap it safely instead of force-unwrapping.
        guard let frame = sceneView.session.currentFrame else { return }
        let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
        let context = CIContext(options: nil)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        let myImage = UIImage(cgImage: cgImage)

How do I combine frame.capturedImage with ARKit's internal knowledge of where the user's eyes, nose, etc. are, and create cropped images of the user's eyes?
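One approach I would sketch (untested; `cropEye` and the 200-pixel crop size are my own assumptions): take the eye's world position as above, project it into the captured image's pixel space with ARCamera's projectPoint(_:orientation:viewportSize:), and crop a region around the projected point. Note that capturedImage is natively landscape-right, and that Core Image uses a bottom-left origin while projectPoint returns top-left coordinates, so the y axis has to be flipped.

    import ARKit
    import CoreImage

    func cropEye(from frame: ARFrame,
                 faceAnchor: ARFaceAnchor,
                 eyeTransform: simd_float4x4,   // faceAnchor.leftEyeTransform or .rightEyeTransform
                 cropSize: CGFloat = 200) -> CIImage? {
        // Compose with the anchor's transform to get a world-space eye position.
        let world = faceAnchor.transform * eyeTransform
        let worldPosition = simd_float3(world.columns.3.x, world.columns.3.y, world.columns.3.z)

        // Project into the captured image's pixel coordinates. The pixel
        // buffer is in its native landscape-right orientation.
        let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                               height: CVPixelBufferGetHeight(frame.capturedImage))
        let point = frame.camera.projectPoint(worldPosition,
                                              orientation: .landscapeRight,
                                              viewportSize: imageSize)

        // projectPoint uses a top-left origin; CIImage uses bottom-left, so flip y.
        let cropRect = CGRect(x: point.x - cropSize / 2,
                              y: imageSize.height - point.y - cropSize / 2,
                              width: cropSize,
                              height: cropSize)

        let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
        let clipped = cropRect.intersection(ciImage.extent)
        return clipped.isNull ? nil : ciImage.cropped(to: clipped)
    }

Calling this twice, once with leftEyeTransform and once with rightEyeTransform, should yield one crop per eye; each result can go through the same CIContext/CGImage path as above to get a UIImage. A fixed pixel crop is the simplest assumption; for a tighter fit you could instead project several of the face geometry's vertices around each eye and take their bounding box.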
