I'd like to use the Vision framework together with a Core ML model to detect some things with the camera. I have found some examples for this first "step" (Vision -> Core ML), but then I'd like to use the ARKit framework to represent some other things depending on what was detected with Vision and the Core ML model.
I have found a post here asking about the flow ARKit -> Vision -> Core ML, but what I'd like to do is Vision -> Core ML -> ARKit.
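To make the idea more concrete, here is a rough sketch of what I imagine (not working code, just my best guess at the wiring; `MyDetector` stands in for whatever Core ML model class I'd end up using, and ARKit's session is what supplies the camera frames to Vision):

```swift
import UIKit
import ARKit
import Vision

// Rough sketch: ARKit runs the camera session, each frame is handed to Vision,
// Vision runs the Core ML model, and the detection result decides what gets
// added to the AR scene. "MyDetector" is a placeholder model class.
class DetectionViewController: UIViewController, ARSessionDelegate {

    @IBOutlet var sceneView: ARSCNView!
    private var isProcessing = false

    // Wrap the Core ML model so Vision can drive it.
    private lazy var visionModel: VNCoreMLModel? = try? VNCoreMLModel(for: MyDetector().model)

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.session.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    // ARKit delivers camera frames here; each one is passed to Vision.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard !isProcessing, let model = visionModel else { return }
        isProcessing = true

        let pixelBuffer = frame.capturedImage
        // Camera pose at the moment of this frame, captured now so the
        // completion handler does not need to keep the ARFrame alive.
        let cameraTransform = frame.camera.transform

        let request = VNCoreMLRequest(model: model) { [weak self] request, _ in
            defer { self?.isProcessing = false }
            guard let best = (request.results as? [VNClassificationObservation])?.first,
                  best.confidence > 0.8 else { return }

            // Something was detected: add an anchor half a metre in front of
            // the camera. The SCNNode content for it would be attached in the
            // ARSCNView delegate (renderer(_:nodeFor:)).
            var translation = matrix_identity_float4x4
            translation.columns.3.z = -0.5
            session.add(anchor: ARAnchor(transform: cameraTransform * translation))
        }

        DispatchQueue.global(qos: .userInitiated).async {
            // Orientation assumed portrait with the back camera.
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
            try? handler.perform([request])
        }
    }
}
```

I'm not sure whether driving Vision from ARKit's frames like this is the right approach, or whether the camera feed should be captured separately.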
Could somebody provide me with an example or tutorial or some guidelines for that scenario?