
RealityKit has a bunch of useful functionality, like built-in multiuser synchronization over a network to support shared worlds, but I can't seem to find much documentation regarding mesh / object creation at runtime. RealityKit has some basic mesh generation functions (box, sphere, etc.), but I'd like to create my own procedural meshes at runtime (vertices and indices), and likely regenerate them every frame, immediate-mode rendering style.

Firstly, is there a way to do this, or is RealityKit too closed off to allow much custom rendering? Secondly, is there an alternative solution that might let me use some of RealityKit's synchronization? For example, is that part really just another library I can use with ARKit 3? What is it called? I'd like to be able to synchronize arbitrary data between users' devices as well, so the built-in system would be helpful there too.

I can’t really test this because I don’t have any devices that can support the beta software at the moment. I am trying to learn whether I’ll be able to do what I want for my program(s) if I do get the necessary hardware, but the documentation is sparse.

synchronizer

2 Answers


Feb 2022

As of macOS 12 / iOS 15, RealityKit includes API that lets you provide your own procedurally generated meshes, primarily through the following methods:

  • MeshResource.generate(from: [MeshDescriptor])
  • MeshResource.generate(from: MeshResource.Contents)
  • MeshResource.generateAsync(from: [MeshDescriptor])
  • MeshResource.generateAsync(from: MeshResource.Contents)

These provide the means to create MeshResource instances - synchronously and asynchronously - either by constructing the models and instances yourself (via MeshResource.Contents), or by providing a list of MeshDescriptor instances that you create yourself.

The Apple documentation (as I'm writing this) is non-existent, but the APIs themselves are reasonably well documented if you look into the generated Swift interfaces. Max Cobb has an article (on Medium), Getting Started with RealityKit: Procedural Geometries, that goes into some description of how to use a MeshDescriptor to describe a surface mesh, and he also has a Swift package with some additional geometries that use this technique, RealityGeometries, which isn't hard to read through to see examples of it in action.
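For a flavor of the API, here's a minimal sketch (assuming iOS 15 / macOS 12 or later) that builds a one-triangle mesh from raw vertices and indices; the function name and vertex data are just illustrative:

    import RealityKit

    // Build a mesh from raw vertex positions and triangle indices.
    func makeTriangleMesh() throws -> MeshResource {
        var descriptor = MeshDescriptor(name: "triangle")
        descriptor.positions = MeshBuffers.Positions([
            SIMD3<Float>(0, 0, 0),
            SIMD3<Float>(0.1, 0, 0),
            SIMD3<Float>(0, 0.1, 0)
        ])
        descriptor.primitives = .triangles([0, 1, 2]) // counter-clockwise winding
        return try MeshResource.generate(from: [descriptor])
    }

    // Usage: wrap the mesh in an entity you can anchor in your scene.
    let entity = ModelEntity(
        mesh: try makeTriangleMesh(),
        materials: [SimpleMaterial(color: .white, isMetallic: false)]
    )

For regenerating geometry every frame, you'd rebuild the descriptor and call generate(from:) (or its async variant) again, replacing the model's mesh. Each call does real work, though, so profile before committing to per-frame regeneration.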

heckj

As far as I know, RealityKit can only use primitives or USDZ files as models. You can generate USDZ files using Model I/O on device, but that isn't feasible for your use case.

The synchronization, however, is built into ARKit, although you have to do a little bit more work when you are not using RealityKit.

  1. Create a MultipeerConnectivity session between the devices (that's something you need to do for RealityKit as well).
  2. Configure your ARSession and set isCollaborationEnabled, which makes your session output ARSession.CollaborationData through the session(_:didOutputCollaborationData:) delegate callback.
  3. Send this data to the other devices using your MultipeerConnectivity session.
  4. When receiving data from other users, integrate it into your session using update(with:) - see the sketch below.
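Here's a minimal sketch of that flow; CollaborationController, receivedData(_:), and sendToAllPeers(_:) are placeholder names for your own wiring, and step 1's MultipeerConnectivity plumbing is omitted:

    import ARKit

    class CollaborationController: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            // Step 2: opt the session into collaboration.
            let configuration = ARWorldTrackingConfiguration()
            configuration.isCollaborationEnabled = true
            session.delegate = self
            session.run(configuration)
        }

        // Step 3: ARKit periodically outputs data to broadcast to peers.
        func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
            guard let encoded = try? NSKeyedArchiver.archivedData(
                withRootObject: data, requiringSecureCoding: true) else { return }
            sendToAllPeers(encoded)
        }

        // Step 4: integrate data received from a peer into the local session.
        func receivedData(_ data: Data) {
            if let collaborationData = try? NSKeyedUnarchiver.unarchivedObject(
                ofClass: ARSession.CollaborationData.self, from: data) {
                session.update(with: collaborationData)
            }
        }

        func sendToAllPeers(_ data: Data) {
            // Send over your MCSession from step 1.
        }
    }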

To send arbitrary information between users you can either send it via MultipeerConnectivity independently of ARKit, or use custom ARAnchor subclasses, which is the preferred option when you're dealing with positional data, e.g. when a user has placed an object at a specific location.
Instead of adding objects directly (by using something like scene.rootNode.addChildNode(_:) in SceneKit), you create a special ARAnchor subclass with all the information needed to add your model and add it to your session. Then you add the object in the renderer(_:didAdd:for:) delegate callback. This has two benefits: better tracking around your object (because you added an anchor at that position, indicating to ARKit that it should remember it), and that you don't need to do anything special for multiuser experiences, because ARKit calls renderer(_:didAdd:for:) for both manually added anchors and automatically added ones, for example when it receives collaboration data.
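A minimal sketch of such a subclass; ModelAnchor and its modelName payload are hypothetical, but the copying and secure-coding requirements are real, because ARKit serializes anchors into the collaboration data:

    import ARKit

    class ModelAnchor: ARAnchor {
        let modelName: String

        init(modelName: String, transform: simd_float4x4) {
            self.modelName = modelName
            super.init(name: "model", transform: transform)
        }

        // ARKit copies anchors internally, so this override is required.
        required init(anchor: ARAnchor) {
            self.modelName = (anchor as! ModelAnchor).modelName
            super.init(anchor: anchor)
        }

        // Secure coding lets the anchor travel inside collaboration data.
        override class var supportsSecureCoding: Bool { true }

        required init?(coder: NSCoder) {
            guard let name = coder.decodeObject(of: NSString.self, forKey: "modelName") as String? else {
                return nil
            }
            self.modelName = name
            super.init(coder: coder)
        }

        override func encode(with coder: NSCoder) {
            super.encode(with: coder)
            coder.encode(modelName as NSString, forKey: "modelName")
        }
    }

You'd call session.add(anchor:) with one of these instead of inserting a node directly, then load the model named by modelName when the anchor shows up in renderer(_:didAdd:for:), whether it was added locally or arrived from a peer.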

jlsiewert
  • Hm, so it sounds like it makes more sense to use ARKit 3 in tandem with other APIs. This is a lot to take in for someone just starting. Are there any code examples available that would show both the AR and arbitrary data synchronization? Also, one of the things that was appealing about RealityKit was the object occlusion, which is implemented as a material. To your knowledge, can I still use that feature? My understanding was that it was RealityKit only though. For rendering it seems I need to do my own Metal system, which I was hoping to avoid. – synchronizer Aug 09 '19 at 13:16
  • —and if I need to use my own Metal renderer and create dynamic meshes/buffers, wouldn’t it make more sense not to use ARAnchors? – synchronizer Aug 09 '19 at 13:27
  • EDIT: I guess this solves the occlusion bit: https://developer.apple.com/documentation/arkit/effecting_people_occlusion_in_custom_renderers but it doesn't show how doable it is to use people occlusion with a custom renderer. It would still be good to see a minimum example of dealing with arbitrary data and rendering. – synchronizer Aug 09 '19 at 13:36
  • I've read that MultipeerConnectivity was unstable in the past. https://www.reddit.com/r/iOSProgramming/comments/40tllq/question_does_anyone_know_how_to_sync_data/ Has this been improved? Is it fast enough to stream data (like from the gyroscope) in real-time? – synchronizer Aug 16 '19 at 02:05
  • It's fine. You can take a look at [this session from last year's WWDC](https://developer.apple.com/videos/play/wwdc2018/605/) for an AR game built entirely on top of MultipeerConnectivity. – jlsiewert Aug 18 '19 at 09:30