
I'd like to implement a scenario like this: use a live camera stream with Vision to detect some rectangles, process that output according to some logic, and then display AR elements with ARKit based on the logic's result.

The examples I've found don't cover this full process from Vision live-stream detection to ARKit with SpriteKit; they only cover those steps separately. The one I found for live-stream detection with Vision uses a UIImageView, while the ones I found for ARKit with SpriteKit use an ARSKView.

What would be the best way to integrate all of this into a stepped process Vision -> logic -> ARKit?
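To make the question concrete, here is a rough sketch of how I imagine the pieces might fit together, assuming the camera frames come from the ARSession itself (via its delegate) rather than a separate AVCaptureSession, so a single ARSKView covers both capture and rendering. The class name, the `handleDetectedRectangles` step, and the emoji node are just placeholders for my app-specific logic. Is this the right approach?

```swift
import ARKit
import SpriteKit
import UIKit
import Vision

// Sketch: run Vision on each ARKit camera frame, apply custom logic,
// then render SpriteKit content for the results. Placeholders are marked.
class RectangleARViewController: UIViewController, ARSKViewDelegate, ARSessionDelegate {

    let sceneView = ARSKView()
    private var isProcessingFrame = false   // simple frame-dropping flag

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self
        sceneView.session.delegate = self
        sceneView.presentScene(SKScene(size: view.bounds.size))
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    // Step 1: Vision on the live feed. ARKit already owns the camera, so the
    // frames come from the session delegate, not a separate AVCaptureSession.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard !isProcessingFrame else { return }   // skip frames while Vision is busy
        isProcessingFrame = true

        let request = VNDetectRectanglesRequest { [weak self] request, _ in
            let rectangles = request.results as? [VNRectangleObservation] ?? []
            DispatchQueue.main.async {
                self?.handleDetectedRectangles(rectangles, in: frame)
                self?.isProcessingFrame = false
            }
        }
        request.maximumObservations = 4

        // Run Vision off the main thread; capturedImage is the raw camera buffer.
        DispatchQueue.global(qos: .userInitiated).async {
            try? VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
                .perform([request])
        }
    }

    // Step 2: placeholder for the app-specific logic that filters the
    // detections and decides where to put AR content.
    private func handleDetectedRectangles(_ rectangles: [VNRectangleObservation], in frame: ARFrame) {
        for rect in rectangles {
            // Vision's normalized coordinates have a bottom-left origin, while
            // ARFrame.hitTest expects a top-left origin, hence the y flip.
            let center = CGPoint(x: rect.boundingBox.midX, y: 1 - rect.boundingBox.midY)
            if let hit = frame.hitTest(center, types: .featurePoint).first {
                sceneView.session.add(anchor: ARAnchor(transform: hit.worldTransform))
            }
        }
    }

    // Step 3: SpriteKit content for each anchor the logic created.
    func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
        SKLabelNode(text: "📐")
    }
}
```

In particular, I'm unsure whether dropping frames with a flag like this is the recommended way to keep Vision from falling behind the camera, and whether hit-testing each rectangle's center against feature points is the right way to bridge the 2D detections into 3D anchors.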

