
I'd like to use the Vision framework with a Core ML model to detect some things through the camera. I have found some examples for that first "step" (Vision -> Core ML), but then I'd like to use the ARKit framework to show some other content depending on what Vision and the Core ML model detected.
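For reference, this is roughly what I have for that first step. It's only a minimal sketch: `MyDetector` is a placeholder for whatever generated Core ML model class is used, and the rest is plain AVFoundation and Vision:

```swift
import AVFoundation
import CoreML
import Vision

// Sketch of the Vision -> Core ML step on live camera frames.
// "MyDetector" is a placeholder for the generated model class.
class CameraClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    let session = AVCaptureSession()

    // Wrap the Core ML model in a Vision request.
    lazy var classificationRequest: VNCoreMLRequest = {
        let model = try! VNCoreMLModel(for: MyDetector().model)
        return VNCoreMLRequest(model: model) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first,
                  top.confidence > 0.8 else { return }
            print("Detected \(top.identifier)")   // this is where ARKit should come in
        }
    }()

    func start() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        session.addOutput(output)
        session.startRunning()
    }

    // Run the model (via Vision) on every frame the camera delivers.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            .perform([classificationRequest])
    }
}
```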

I have found a post here asking about the flow ARKit -> Vision -> Core ML, but what I'd like to do is Vision -> Core ML -> ARKit.
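And this is, very roughly, how I imagine the ARKit part once something has been detected. Again it is only a sketch: the anchor position (half a metre in front of the camera) is a hard-coded placeholder, and ideally it would come from where the object was actually detected:

```swift
import ARKit
import simd

// Sketch of the part I'm missing: once something has been detected,
// add an anchor to an already running ARKit session so content can
// be rendered for it. The placement is just a placeholder.
func showDetection(_ identifier: String, in sceneView: ARSCNView) {
    guard let frame = sceneView.session.currentFrame else { return }

    // Put an anchor half a metre in front of the current camera pose.
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.5
    let transform = simd_mul(frame.camera.transform, translation)
    sceneView.session.add(anchor: ARAnchor(transform: transform))

    // The ARSCNViewDelegate's renderer(_:nodeFor:) can then return a node
    // (e.g. a sphere or a labeled plane) for that anchor.
}
```

The part I'm unsure about is how to connect these two pieces, in particular where the camera frames should come from.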

Could somebody provide me with an example, a tutorial, or some guidelines for that scenario?

AppsDev
  • I don't understand what's different about this question from the one you link. They're accomplishing exactly what you describe: detecting objects in camera frames via Vision and Core ML, then placing things in an ARKit scene based on that. You hook these up in the order that makes sense to allow this to happen. – Brad Larson Jul 27 '17 at 15:39
  • @BradLarson The question there mentions capturing ARFrames first, then passing them to Vision and Core ML. I want to use Vision and Core ML first, then pass something I detected to ARKit. – AppsDev Jul 30 '17 at 18:08

0 Answers