RealityKit has a bunch of useful functionality, like built-in multi-user synchronization over a network to support shared worlds, but I can't find much documentation on mesh/object creation at runtime. RealityKit has some basic mesh generation functions (box, sphere, etc.), but I'd like to create my own procedural meshes at runtime (vertices and indices), and likely regenerate them every frame, immediate-mode rendering style.
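To make that concrete, here's roughly what I mean. The primitive-generation part is what I can find in the documented API; the commented-out part is the hypothetical vertices/indices API I'm hoping exists (those names are made up by me, not from the docs):

```swift
import RealityKit
import UIKit

// What I can do today: build an entity from a built-in primitive.
let mesh = MeshResource.generateBox(size: 0.1) // 10 cm cube
let material = SimpleMaterial(color: .red, isMetallic: false)
let entity = ModelEntity(mesh: mesh, materials: [material])

// What I'm looking for is something like this (hypothetical API,
// NOT in the documentation as far as I can tell):
// let customMesh = MeshResource.generate(vertices: myVertices,
//                                        indices: myIndices)
// ...ideally regenerated every frame as the geometry changes.
```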
Firstly, is there a way to do this, or is RealityKit too locked down to allow much custom rendering? Secondly, is there an alternative approach that would still let me use some of RealityKit's synchronization? For example, is the synchronization piece actually a separate library I could use alongside ARKit 3, and if so, what is it called? I'd also like to synchronize arbitrary data between users' devices, so the built-in system would be helpful there too.
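Based on the symbols I can see in the (sparse) documentation, I'd expect the synchronization hookup to look roughly like the sketch below. I'm assuming `MultipeerConnectivityService` is the networking piece and that `Codable` components get mirrored automatically, since I can't actually run this:

```swift
import RealityKit
import MultipeerConnectivity

// Custom per-entity data I'd want mirrored to other users.
// Assumption on my part: components conforming to Codable are
// synchronized automatically once a sync service is installed.
struct GameState: Component, Codable {
    var score: Int
}

func configureSync(arView: ARView, session: MCSession) throws {
    // Custom component types apparently need to be registered.
    GameState.registerComponent()
    // MultipeerConnectivityService appears in the API listings; I'm
    // guessing this is the "library" that does the network syncing.
    arView.scene.synchronizationService =
        try MultipeerConnectivityService(session: session)
}
```

If that really is how it works, the follow-up question is whether the same channel can carry arbitrary app data, or only entity/component state.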
I can't really test any of this because I don't have any devices that can run the beta software at the moment. I'm trying to find out whether I'll be able to do what I want in my program(s) if I do get the necessary hardware, but the documentation is sparse.