
I'm trying to implement Apple's "Detecting Hand Poses with Vision" sample in an ARSCNView. I've successfully created the VNDetectHumanHandPoseRequest, received the detected hand pose results, and converted the Vision coordinates to AVFoundation coordinates. Now I want to convert those AVFoundation coordinates to UIKit coordinates so I can visualize the detected points.

I've reviewed Apple's demo for detecting hand poses and seen the logic for converting AVFoundation coordinates to UIKit coordinates via AVCaptureVideoPreviewLayer's layerPointConverted(fromCaptureDevicePoint:) method.

Here is the code snippet:

let previewLayer = cameraView.previewLayer
let thumbPointConverted = previewLayer.layerPointConverted(fromCaptureDevicePoint: thumbPoint)

I want to know how I can convert these points when using ARSCNView, because ARSCNView/ARCamera does not expose a preview layer.
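One approach (a sketch, not Apple's sample): instead of a preview layer, ARFrame's displayTransform(for:viewportSize:) maps normalized captured-image coordinates into the view's normalized coordinate space, which can then be scaled to UIKit points. The names arView and the .thumbTip joint and confidence threshold below are my assumptions for illustration:

```swift
import ARKit
import Vision

extension ViewController: ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Run the hand pose request on the current camera frame.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .up,
                                            options: [:])
        let request = VNDetectHumanHandPoseRequest()
        try? handler.perform([request])

        guard let observation = request.results?.first,
              let thumbTip = try? observation.recognizedPoint(.thumbTip),
              thumbTip.confidence > 0.3 else { return }

        // Vision's origin is bottom-left; displayTransform expects
        // a top-left-origin normalized image point, so flip y.
        let normalized = CGPoint(x: thumbTip.location.x,
                                 y: 1 - thumbTip.location.y)

        // Map from normalized image space into normalized view space,
        // then scale up to UIKit points.
        let viewportSize = arView.bounds.size
        let transform = frame.displayTransform(for: .portrait,
                                               viewportSize: viewportSize)
        let viewNormalized = normalized.applying(transform)
        let screenPoint = CGPoint(x: viewNormalized.x * viewportSize.width,
                                  y: viewNormalized.y * viewportSize.height)

        DispatchQueue.main.async {
            // Position an overlay (e.g. a CAShapeLayer dot) at screenPoint.
        }
    }
}
```

This handles device orientation and the aspect-fill cropping that ARSCNView applies to the camera image, which a raw width/height scale does not.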

    This might be helpful: https://stackoverflow.com/a/66054211/14351818 – aheze Jul 07 '21 at 06:44
  • @aheze Thanks for the reply, but I don't have a bounding box: VNDetectHumanHandPoseRequest does not return one in its results. Here is the logic I implemented and what I want: https://prnt.sc/1990aez – TechGps1 Jul 07 '21 at 10:54

1 Answer


I found a solution on GitHub; the author uses:

VNImagePointForNormalizedPoint(CGPoint(x: $0.location.y, y: $0.location.x), Int(self.arView.bounds.width), Int(self.arView.bounds.height))

$0 represents each of the recognized points.

Here is the link to the code: https://github.com/john-rocky/RealityKit-Sampler/blob/main/RealityKitSampler/UIKitViews/HandInteractionARViewController.swift
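For context, that call can be applied to a single recognized joint roughly like this (a hedged sketch; arView, the .thumbTip joint, and the confidence threshold are my assumptions, not from the linked code):

```swift
import ARKit
import Vision

// Sketch only: assumes a VNDetectHumanHandPoseRequest has just been performed
// and produced a VNHumanHandPoseObservation.
func screenPoint(for observation: VNHumanHandPoseObservation,
                 in arView: ARSCNView) -> CGPoint? {
    guard let thumbTip = try? observation.recognizedPoint(.thumbTip),
          thumbTip.confidence > 0.3 else { return nil }
    // x and y are swapped, as in the linked sample: the captured camera
    // image is rotated 90° relative to a portrait-oriented view.
    return VNImagePointForNormalizedPoint(
        CGPoint(x: thumbTip.location.y, y: thumbTip.location.x),
        Int(arView.bounds.width),
        Int(arView.bounds.height))
}
```

Note that VNImagePointForNormalizedPoint simply scales the normalized point by the given width and height; the x/y swap is what compensates for the rotated camera image, and it only holds for portrait orientation.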
