Rust has millions and millions of grass, rock, and tree instances across its maps, which get as large as 8 km across.
Is that grass placed dynamically around the player at runtime? If so, is that done on the GPU somehow or using a shader?
In my game, we use vertex colors and raycasting to place vegetation, and we store the resulting transform data, which then drives indirect GPU instancing at runtime.
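For context, a minimal sketch of what I mean by "store transforms, then draw with indirect instancing" in Unity. All the names here (`grassMesh`, `grassMaterial`, `bakedMatrices`, `_Transforms`) are placeholders for my setup, and the material's shader is assumed to read instance transforms from the buffer:

```csharp
using UnityEngine;

public class GrassRenderer : MonoBehaviour
{
    public Mesh grassMesh;
    public Material grassMaterial;      // shader must index into _Transforms
    public Matrix4x4[] bakedMatrices;   // filled offline by the raycast pass

    ComputeBuffer transformBuffer;
    ComputeBuffer argsBuffer;

    void Start()
    {
        // One 4x4 float matrix per instance = 64 bytes of stride.
        transformBuffer = new ComputeBuffer(bakedMatrices.Length, 64);
        transformBuffer.SetData(bakedMatrices);
        grassMaterial.SetBuffer("_Transforms", transformBuffer);

        uint[] args = {
            grassMesh.GetIndexCount(0),   // index count per instance
            (uint)bakedMatrices.Length,   // instance count
            grassMesh.GetIndexStart(0),
            grassMesh.GetBaseVertex(0),
            0
        };
        argsBuffer = new ComputeBuffer(1, args.Length * sizeof(uint),
                                       ComputeBufferType.IndirectArguments);
        argsBuffer.SetData(args);
    }

    void Update()
    {
        // Bounds are used for culling; a big static box as a placeholder here.
        Graphics.DrawMeshInstancedIndirect(
            grassMesh, 0, grassMaterial,
            new Bounds(Vector3.zero, Vector3.one * 8000f), argsBuffer);
    }

    void OnDestroy()
    {
        // Unreleased ComputeBuffers leak native memory.
        transformBuffer?.Release();
        argsBuffer?.Release();
    }
}
```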
However, I can't imagine this would scale well with something like trees. Are there really thousands of active mesh colliders in the scene at all times? My guess is that all those mesh colliders live in the scene with their GameObjects just tagged, and when you hit one with a tool, a "Tree" component gets added to it on the fly.
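The lazy-component idea I'm picturing would look something like this (a sketch, not how Rust actually does it; `ChoppableTree` and the `"Tree"` tag are names I made up):

```csharp
using UnityEngine;

// Behaviour that only exists on trees the player has actually struck.
public class ChoppableTree : MonoBehaviour
{
    public int health = 5;

    public void Chop()
    {
        if (--health <= 0)
            Destroy(gameObject);
    }
}

public class Hatchet : MonoBehaviour
{
    public void Swing(Ray ray)
    {
        if (Physics.Raycast(ray, out RaycastHit hit, 3f) &&
            hit.collider.CompareTag("Tree"))
        {
            // Attach the behaviour lazily; most trees never pay for it.
            if (!hit.collider.TryGetComponent(out ChoppableTree tree))
                tree = hit.collider.gameObject.AddComponent<ChoppableTree>();
            tree.Chop();
        }
    }
}
```

That way the scene only carries plain tagged colliders, and the per-tree state (health, respawn timers, etc.) is created on demand.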
Am I headed in the right direction with a "spawn everything beforehand, instance it at runtime" approach?
I've tested this and it actually worked: I spawned 24 million instances (the raycasting pass took 20 minutes) and then initialized the GPU with them.
This is cool and all, even though it led my Unity editor to crash after a little while (memory leak? 24 million 4x4 float matrices alone are roughly 1.5 GB, before any GameObject overhead).
Maybe you store the instance data before runtime, and then when you start the dedicated server it does all the raycasting and places all the trees, rocks, and other interactive objects.
But I'm worried that performance would tank if I tried to keep even 10,000 live GameObjects around for interaction (choppable trees, mineable rocks, and so on).