I'm prototyping an app where I use CoreML to identify an object. That gives me a bounding box for the object (a normalized CGRect, so all four values are between 0 and 1). I'd like to use the ARDepthData I have access to, thanks to having a phone with LiDAR, to measure the distance to that object.
The CVPixelBuffer from sceneview.session.currentFrame?.capturedImage has dimensions 1920 × 1440, while the CVPixelBuffer from sceneview.session.currentFrame?.sceneDepth?.depthMap has dimensions 256 × 192.
How do I convert the bounding box of the VNRecognizedObjectObservation into coordinates in the depth map, so I can read out the depth data I need to estimate the distance to the object?
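Here's a sketch of what I have in mind so far (the function is my own, untested). My understanding is that Vision bounding boxes are normalized with the origin at the bottom-left, while CVPixelBuffer rows start at the top-left, so I flip the y axis before scaling to the depth map's dimensions, but I'm not confident this is right:

```swift
import ARKit
import Vision

/// Untested sketch: read the depth (in meters) at the center of a
/// Vision bounding box from an ARDepthData depth map.
func depthAtCenter(of observation: VNRecognizedObjectObservation,
                   depthMap: CVPixelBuffer) -> Float? {
    let box = observation.boundingBox              // normalized, bottom-left origin
    let width  = CVPixelBufferGetWidth(depthMap)   // 256 in my case
    let height = CVPixelBufferGetHeight(depthMap)  // 192 in my case

    // Center of the box, flipped into top-left pixel coordinates.
    let col = Int(box.midX * CGFloat(width))
    let row = Int((1 - box.midY) * CGFloat(height))
    guard (0..<width).contains(col), (0..<height).contains(row) else { return nil }

    // sceneDepth.depthMap is kCVPixelFormatType_DepthFloat32.
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }

    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)
    let rowPointer = base.advanced(by: row * bytesPerRow)
    return rowPointer.assumingMemoryBound(to: Float32.self)[col]
}
```

Is sampling just the center pixel like this reasonable, or should I be averaging over the whole box region (and does the aspect-ratio difference between 1920 × 1440 and 256 × 192 matter here, given both are 4:3)?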