The principle of operation is simple: a Reality Composer scene element called AnchorEntity, stored in the `.rcproject` file of a RealityKit app, conforms to the HasAnchoring protocol. When the app's image-detection pipeline sees a picture through the rear camera, it compares it against the reference images stored in the AR Resource Group. If a captured image matches a reference image, the app creates an image-based AnchorEntity
(similar to ARImageAnchor in ARKit) that tethers the corresponding 3D model. The invisible anchor appears at the center of the detected picture.
AnchorEntity(.image(group: "ARResourceGroup", name: "imageBasedAnchor"))
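A minimal sketch of how such an anchor might be used in a view controller. The group and image names follow the snippet above; the model file name and the `ARView` outlet are assumptions for illustration:

```swift
import UIKit
import RealityKit

class ViewController: UIViewController {
    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Anchor tethered to a reference image in the asset catalog's
        // AR Resource Group (names taken from the snippet above).
        let anchor = AnchorEntity(.image(group: "ARResourceGroup",
                                         name: "imageBasedAnchor"))

        // Hypothetical model file bundled with the app.
        if let model = try? Entity.loadModel(named: "model.usdz") {
            anchor.addChild(model)
        }

        // RealityKit starts tracking the anchor as soon as it is
        // added to the scene; no session delegate code is needed.
        arView.scene.addAnchor(anchor)
    }
}
```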
When you use image-based anchors in a RealityKit app, you get RealityKit's analog of ARKit's ARImageTrackingConfiguration, which is less processor-intensive than ARWorldTrackingConfiguration.
The difference between AnchorEntity(.image)
and ARImageAnchor
is that RealityKit tracks all of its anchors automatically, while in ARKit you handle anchor updates yourself in delegate methods such as renderer(_:didUpdate:for:)
or session(_:didUpdate:).
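For contrast, a sketch of the ARKit route, where you configure image tracking yourself and respond to detected image anchors in a delegate method. The resource group name is assumed to match the RealityKit snippet, and the content added to the node is just a placeholder plane:

```swift
import UIKit
import ARKit
import SceneKit

class ARKitViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self

        // Image tracking is cheaper than full world tracking.
        let config = ARImageTrackingConfiguration()
        config.trackingImages = ARReferenceImage.referenceImages(
            inGroupNamed: "ARResourceGroup", bundle: nil) ?? []
        sceneView.session.run(config)
    }

    // Unlike RealityKit, ARKit hands you each detected image anchor
    // and leaves attaching and updating content to you.
    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }

        // Placeholder plane sized to the detected image.
        let size = imageAnchor.referenceImage.physicalSize
        let plane = SCNPlane(width: size.width, height: size.height)
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2   // lay flat on the image
        node.addChildNode(planeNode)
    }
}
```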