
My company is developing an Augmented Reality app for a client using ARKit, and they want the best possible world-tracking experience on iOS. We have told them that this is not an exact science and that small variations are perfectly normal, but they want us to do everything possible to minimize errors, drift, and hologram movement.

We have tested some ARHitTestResult examples in Unity and some others that use ARImageAnchor, but we can't decide which is best. It seems that using an ARImageAnchor improves tracking, but I am not sure whether that is really the case or just an optical illusion.

Which is better: ARImageAnchor or a plain ARAnchor?

Please advise or share any Unity/Apple documentation on this matter.

Thanks.

aviggiano

2 Answers


Every anchor in ARKit (ARFaceAnchor, ARImageAnchor, ARPlaneAnchor, etc.) inherits from the ARAnchor class and, in some cases, also conforms to the ARTrackable protocol.

Every anchor in ARKit has its own special purpose (for instance, ARPlaneAnchor is a specialized version of ARAnchor designed for the plane-detection process). I don't think any one anchor type is more precise than another.
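To illustrate the point, here's a minimal sketch (the handle function is a hypothetical helper): an ARImageAnchor is just an ARAnchor subclass whose ARTrackable conformance adds an isTracked property, not extra precision.

import ARKit

// Hypothetical helper: every anchor delivered by the session is an ARAnchor;
// subclasses like ARImageAnchor add type-specific data, not extra precision.
func handle(anchor: ARAnchor) {
    if let imageAnchor = anchor as? ARImageAnchor {
        // ARTrackable conformance provides isTracked.
        print(imageAnchor.isTracked, imageAnchor.referenceImage.name ?? "unnamed")
    }
    print(anchor.transform)    // the 4x4 pose that every ARAnchor carries
}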

So all you need for a robust tracking result is good lighting conditions, distinguishable 3D surfaces, and high-contrast textures on them. A pre-saved ARWorldMap is also a good starting point for a persistent AR experience; see the sketch below.
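A rough sketch of that persistence idea, assuming a running session and a hypothetical mapURL file location:

import ARKit

// Save the current world map to disk (mapURL is a hypothetical file location).
session.getCurrentWorldMap { worldMap, error in
    guard let map = worldMap else { return }
    if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                    requiringSecureCoding: true) {
        try? data.write(to: mapURL)
    }
}

// Later, relocalize against the saved map before running the session:
// configuration.initialWorldMap = savedMap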

Don't use repetitive texture/object patterns or solid-colour surfaces when tracking your scene. Also, for best tracking results, don't track even slightly moving objects; you need a static environment.

And I should say that all Apple devices are well calibrated for a stable and precise AR experience.


P.S.

Some tips about ARWorldTrackingConfiguration() and ARImageTrackingConfiguration().

If you enable the image-recognition option in the 6-DoF ARWorldTrackingConfiguration(), you'll get an ARImageAnchor object for each detected image. That anchor is just information about the position and orientation of an image detected in a world-tracking AR session. It doesn't improve the precision of world tracking, and it significantly slows down processing.

import ARKit

// Load the reference images from the asset catalog group "ARGroup".
guard let refImgs = ARReferenceImage.referenceImages(inGroupNamed: "ARGroup",
                                                     bundle: nil) else {
    fatalError("Missing expected resources.")
}

// Enable image detection inside a world-tracking session.
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = refImgs
configuration.maximumNumberOfTrackedImages = 3
session.run(configuration, options: [.resetTracking,
                                     .removeExistingAnchors])

A world-tracking session with image detection enabled can simultaneously track only a small number of images. You can track more images with ARImageTrackingConfiguration, but image-detection accuracy and performance drop considerably as the number of detection images grows. For best results, use no more than around 20-25 images in a set.

ARImageTrackingConfiguration():

With ARImageTrackingConfiguration, ARKit establishes a 3D space not by tracking the motion of the device relative to the world, but solely by detecting and tracking the motion of known 2D images in view of the camera. Image-only tracking lets you anchor virtual content to known images only while those images are in view of the camera.

World tracking with image detection lets you use known images to add virtual content to the 3D world, and continues to track the position of that content in world space even after the image is no longer in view. World tracking works best in a stable, nonmoving environment. You can use image-only tracking to add virtual content to known images in more situations, for example an advertisement inside a moving subway car.
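For comparison, here's a minimal sketch of an image-only session, assuming the same "ARGroup" reference-image group as above:

import ARKit

// Image-only tracking: the session's 3D space comes from the known 2D images,
// not from device motion relative to the world.
guard let refImgs = ARReferenceImage.referenceImages(inGroupNamed: "ARGroup",
                                                     bundle: nil) else {
    fatalError("Missing expected resources.")
}

let configuration = ARImageTrackingConfiguration()
configuration.trackingImages = refImgs
configuration.maximumNumberOfTrackedImages = 4
session.run(configuration)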

Conclusion: using ARImageAnchors in your scene doesn't add an extra layer of quality to world-tracking results. Check Apple's Recognizing Images in an AR Experience article for detailed info.

Andy Jazz

Adding to what @andy-jazz posted: use a plain ARAnchor. You should only use an ARImageAnchor when you actually want to trigger an experience from an image. Otherwise, using one can even be detrimental: the initially recognized position and rotation can be slightly off, especially at extreme viewing angles. The pose gets adjusted once you face the image more directly, which causes any content parented to the image anchor to drift.

Using an image anchor also introduces a usability limitation: users must have the image present to use your app.
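A minimal sketch of the plain-ARAnchor approach, assuming a running session and a hitTransform obtained from a raycast or hit test (both names are placeholders):

import ARKit

// Place content with a plain ARAnchor at a world-space transform
// (hitTransform is a placeholder for a raycast/hit-test result).
let anchor = ARAnchor(name: "content", transform: hitTransform)
session.add(anchor: anchor)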

Tip: if you have any influence over which iOS device the client will use with this app, make sure it has LiDAR. In my experience this leads to faster and more consistent tracking, especially in low-light conditions.
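If LiDAR is available, you can also opt into scene reconstruction; a minimal sketch, assuming a running session:

import ARKit

// Enable LiDAR scene reconstruction only where the hardware supports it.
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}
session.run(configuration)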

ephb