
I couldn't find any information on whether Apple's ARKit supports 3D object tracking (or even image tracking) the way Vuforia does. I don't want to place a 3D model just anywhere in the world; instead, I want to detect a specific 3D object and place AR objects in front of and on top of it.

Simple example: I want to track a specific toy car in the real world and add a spoiler on top of it in the AR scene.

Can someone tell me whether that is possible or not?

Superwayne

3 Answers


Update for iOS 12: In "ARKit 2" (aka ARKit on iOS 12 or later)...

  • Image detection is extended to image tracking, so up to four images don't just get detected once, they get updated "live" every frame even if they're moving relative to world space. So you can attach a recognizable 2D image to your toy, and have virtual AR content follow the toy around on-screen.

  • There's also object detection — in your development process you can use one ARKit app to scan a real-world 3D object and produce a "reference object" file. Then you can ship that file in your app and use it to recognize that object in the user's environment. This might fit your "toy car" case... but be aware that the 3D object recognition feature is detection, not tracking: ARKit won't follow the toy car as it moves.

See the WWDC18 talk on ARKit 2 for details.
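A minimal sketch of how ARKit 2 object detection might be wired up. The asset-catalog group name "toys" is an assumption; in practice you scan the object with Apple's scanning sample app, export an `.arobject` file, and add it to a reference-object group in your Xcode asset catalog:

```swift
import ARKit
import SceneKit

class ObjectDetectionViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        // "toys" is a hypothetical asset-catalog group containing .arobject
        // files produced by Apple's object-scanning sample app.
        configuration.detectionObjects = ARReferenceObject.referenceObjects(
            inGroupNamed: "toys", bundle: nil) ?? []
        sceneView.session.run(configuration)
    }

    // Called when ARKit detects one of the reference objects.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let objectAnchor = anchor as? ARObjectAnchor else { return }
        // Place virtual content relative to the detected object, e.g. a
        // "spoiler" box on top of the toy car. Remember: this is detection,
        // not tracking — the anchor won't follow the object if it moves.
        let extent = objectAnchor.referenceObject.extent
        let spoiler = SCNNode(geometry: SCNBox(width: CGFloat(extent.x),
                                               height: 0.01,
                                               length: CGFloat(extent.z) / 4,
                                               chamferRadius: 0))
        spoiler.position = SCNVector3(0, extent.y, 0)
        node.addChildNode(spoiler)
    }
}
```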


Update for iOS 11.3: In "ARKit 1.5" (aka ARKit on iOS 11.3 or later), there's a new image detection feature in ARKit. If you have a known image (like a poster or playing card or some such), you can include it in your Xcode project and/or load it from elsewhere as an ARReferenceImage and put it in your session configuration's detectionImages array. Then, when ARKit finds those images in the user environment, it gives you ARImageAnchor objects telling you where they are.

Note that this isn't quite like the "marker-based AR" you see from some other toolkits — ARKit finds a reference image only once, it doesn't tell you how it's moving over time. So it's good for "triggering" AR content experiences (like those promos where you point your phone at a Star Wars poster in a store and a character walks out of it), but not for, say, AR board games where virtual characters stay attached to game pieces.
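As a rough sketch, ARKit 1.5 image detection might look like the following (the group name "AR Resources" is Xcode's default for a reference-image asset-catalog group; adjust to your project):

```swift
import ARKit

class ImageDetectionViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        // Load the reference images from the asset catalog and ask ARKit
        // to look for them in the user's environment.
        configuration.detectionImages = ARReferenceImage.referenceImages(
            inGroupNamed: "AR Resources", bundle: nil) ?? []
        sceneView.session.run(configuration)
    }

    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        // Trigger your AR experience here. The anchor marks where the image
        // was found, but it is not continuously tracked in ARKit 1.5.
        print("Detected image of size \(imageAnchor.referenceImage.physicalSize)")
    }
}
```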


Otherwise...

It is possible to access the camera image in each captured ARFrame, so if you have other software that can help with such tasks you could use them in conjunction with ARKit. For example, the Vision framework (also new in iOS 11) offers several of the building blocks for such tasks — you can detect barcodes and find their four corners, and after manually identifying a region of interest in an image, track its movement between frames.
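A rough sketch (not Apple sample code) of feeding ARKit's camera image into the Vision framework for barcode detection, assuming you've made yourself the `ARSession`'s delegate:

```swift
import ARKit
import Vision

class FrameProcessor: NSObject, ARSessionDelegate {
    private var isProcessing = false

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Drop frames while a Vision request is still in flight.
        guard !isProcessing else { return }
        isProcessing = true
        let pixelBuffer = frame.capturedImage  // CVPixelBuffer from the camera

        let request = VNDetectBarcodesRequest { [weak self] request, error in
            defer { self?.isProcessing = false }
            guard let results = request.results as? [VNBarcodeObservation] else { return }
            for barcode in results {
                // boundingBox is in normalized image coordinates; map it into
                // the AR scene via the frame's camera and hit-testing as needed.
                print("Barcode \(barcode.payloadStringValue ?? "?") at \(barcode.boundingBox)")
            }
        }
        // Run Vision off the main thread so rendering isn't blocked.
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            try? handler.perform([request])
        }
    }
}
```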

rickster
  • So does this mean that you can find an object, wait for some period, find the object again, and then get the distance, allowing you to calculate the speed and/or direction of the object? I would love to see an app which could tell you how fast an object is moving. Living close to a road, I sometimes see cars going far too fast... – Netsi1964 Aug 13 '17 at 08:21

Note: this is definitely a hack, but it adds persistent image tracking to the ARKit Unity plugin. The same idea can be applied to the native library as well.

Download the ARKit 1.5 beta plugin: https://bitbucket.org/Unity-Technologies/unity-arkit-plugin/branch/spring2018_update

In ARSessionNative.mm, add this block of code:

extern "C" void SessionRemoveAllAnchors(void* nativeSession) {
    UnityARSession* session = (__bridge UnityARSession*)nativeSession;
    // Remove every anchor so image detection can fire again.
    // (currentFrame.anchors is a snapshot, so mutating the session
    // while iterating is safe.)
    for (ARAnchor* a in session->_session.currentFrame.anchors)
    {
        [session->_session removeAnchor:a];
    }
}

In UnityARSessionNativeInterface.cs, add this code under SessionRemoveUserAnchor:

[DllImport("__Internal")]
private static extern void SessionRemoveAllAnchors (IntPtr nativeSession);

And this under RemoveUserAnchor:

public void RemoveAllAnchors() {
    #if !UNITY_EDITOR
    SessionRemoveAllAnchors(m_NativeARSession);
    #endif
}

Then call this from an Update or Coroutine:

UnityARSessionNativeInterface.GetARSessionNativeInterface().RemoveAllAnchors ();

Once the anchor is removed, the image can be recognized again. It's not super smooth, but it definitely works.

Hope you found this useful! Let me know if you need further assistance.

Aidan Wolf

ARKit 2.0 for iOS 12 supports not only camera (world) tracking but also:

  • 3D Object Tracking
  • Face Tracking
  • Image Tracking
  • Image Detection
  • 3D Object Scanning

ARKit 3.0 for iOS 13 supports even more sophisticated features:

  • People Occlusion (RGBAZ realtime compositing)
  • Body Tracking (a.k.a. Motion Capture)
  • Multiple Face Tracking (up to 3 faces)
  • Simultaneous Front and Back Camera Tracking
  • Collaborative Sessions
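As one example from the list above, a minimal sketch of ARKit 3 body tracking (iOS 13+, requires an A12 chip or later; class and joint names are from the public API, the rest is a hypothetical skeleton):

```swift
import ARKit

class BodyTrackingViewController: UIViewController, ARSessionDelegate {
    let session = ARSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Body tracking is only available on A12-class hardware and newer.
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // The skeleton exposes per-joint transforms for motion capture.
            let hip = bodyAnchor.skeleton.modelTransform(for: .root)
            print("Hip transform: \(String(describing: hip))")
        }
    }
}
```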
Andy Jazz
  • Hi ARGeo, I am a newbie to ARKit. I want to detect an object (a table, bottle, special shape, or anything) from a camera/gallery image with ARKit. I have seen examples that provide a predefined image in the project and then detect whether that exact image exists. But I don't want that; I want it to learn so that it could detect any table/bottle. Do I need to use Core ML with ARKit? Any libraries/help for Swift? Thanks. – Jamshed Alam Jul 17 '19 at 04:22
  • @JamshedAlam, publish it as a question, please. Comments are not for this. – Andy Jazz Jul 18 '19 at 11:59
  • I got my answer after studying some questions and blogs. I will post again if I run into further confusion. Thanks anyway. – Jamshed Alam Jul 22 '19 at 02:08