I'm trying to implement Detecting Hand Poses with Vision in an ARSCNView. I've successfully created the VNDetectHumanHandPoseRequest, received the detected hand pose results, and converted the Vision coordinates to AVFoundation coordinates. Now I want to convert those AVFoundation coordinates to UIKit coordinates so I can visualize the detected points.
I've reviewed Apple's demo project for Detecting Hand Poses, which converts AVFoundation coordinates to UIKit coordinates using AVCaptureVideoPreviewLayer's layerPointConverted(fromCaptureDevicePoint:) method.
Here is the code snippet:
let previewLayer = cameraView.previewLayer
let thumbPointConverted = previewLayer.layerPointConverted(fromCaptureDevicePoint: thumbPoint)
I want to know how I can convert these points when using an ARSCNView, since there is no preview layer available for the ARCamera.
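For context, here is a sketch of the direction I'm considering, assuming ARFrame.displayTransform(for:viewportSize:) is the right replacement for the preview layer conversion (the helper name and the assumption that my input points are normalized, top-left-origin image coordinates are my own):

```swift
import ARKit
import UIKit

extension ARSCNView {
    /// Hypothetical helper: converts a normalized point in captured-image
    /// (AVFoundation) coordinates — origin top-left, range (0,0)–(1,1) —
    /// into this view's UIKit coordinate space.
    func viewPoint(fromCaptureDevicePoint point: CGPoint,
                   orientation: UIInterfaceOrientation = .portrait) -> CGPoint? {
        // Requires a current frame from the running AR session.
        guard let frame = session.currentFrame else { return nil }
        let viewportSize = bounds.size
        // displayTransform maps normalized image coordinates into normalized
        // viewport coordinates, accounting for rotation and aspect-fill cropping.
        let transform = frame.displayTransform(for: orientation,
                                               viewportSize: viewportSize)
        let normalized = point.applying(transform)
        // Scale the normalized viewport point up to UIKit points.
        return CGPoint(x: normalized.x * viewportSize.width,
                       y: normalized.y * viewportSize.height)
    }
}

// Usage (assuming thumbPoint is the converted AVFoundation coordinate):
// let thumbInView = sceneView.viewPoint(fromCaptureDevicePoint: thumbPoint)
```

I'm not sure whether this is correct, particularly around interface orientation handling, so any confirmation or correction would be appreciated.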