
I have been researching different approaches to terrain systems in game engines for a while now, trying to familiarize myself with the field. Most of the details seem straightforward, but I am getting hung up on one point.

For performance reasons, many terrain solutions use shaders to generate part or all of the geometry, such as vertex shaders to generate positions or tessellation shaders for LoD. At first I figured those approaches were exclusively for renderers that weren't concerned with physics simulations.

The reason I say that is that, as I currently understand shaders, the results of a shader computation are generally discarded at the end of the frame. So if you rely heavily on shaders, the geometry information will be gone before you could access it and send it off to another system (such as physics running on the CPU).

So, am I wrong about shaders? Can you store the geometry they generate so that it can be accessed by other systems? Or am I forced to keep the terrain geometry on the CPU and leave the shaders to the other details?

Mako_Energy
  • My understanding is that a collision mesh used for physics is generally a simplified form of the skinned mesh that a terrain renderer might be using. – paddy May 18 '17 at 05:42
  • You'll want to store a collision mesh for physics calculations. When you're running complex physics calculations, you want to operate on as few points as possible. If you're using geometry calculated by a shader, it will greatly bog down your calculations. Also, shaders produced by the rendering engine will lack the necessary delta-time variable that is crucial to a fluid physics simulation. – Abstract May 18 '17 at 06:51
  • @Jon --> The time does not seem to be the problem: http://prideout.net/blog/?tag=opengl-transform-feedback – MABVT May 18 '17 at 06:59
  • You can use a uniform buffer; it will persist your data, and you can also share it with other programs. – Mandar May 18 '17 at 07:13

1 Answer


Shaders

You understand part of how shaders work correctly, that is: after a frame, only the final composed image remains in the backbuffer; the intermediate results are discarded.

BUT: using transform feedback it is possible to capture transformed geometry into a vertex buffer and reuse it. Transform feedback happens AFTER the vertex/geometry/tessellation shader, so you could use the geometry shader to generate a terrain (or the visible parts of it) once, push it through transform feedback, and store it. This way you could potentially run CPU collision detection against your terrain! You can even combine this with tessellation.
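To make that concrete, here is a minimal sketch of the capture side in desktop OpenGL (my example, not from any particular engine; the varying name out_position and the buffer sizes are illustrative):

```cpp
// Assumes a current GL 3.x+ context and a shader program whose last
// vertex-processing stage writes a varying named "out_position".
const char* varyings[] = { "out_position" };
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program); // must (re)link after declaring the varyings

GLuint tfBuffer;
glGenBuffers(1, &tfBuffer);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER,
             maxVertices * 3 * sizeof(float), nullptr, GL_STATIC_READ);

glEnable(GL_RASTERIZER_DISCARD); // we only want the captured geometry
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);
glBeginTransformFeedback(GL_TRIANGLES);
glDrawArrays(GL_TRIANGLES, 0, vertexCount); // terrain generation pass
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

// Read the captured terrain back so a CPU physics system can use it.
std::vector<float> terrain(maxVertices * 3);
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                   terrain.size() * sizeof(float), terrain.data());
```

Note that the readback stalls the pipeline, so you would do this once when a terrain chunk is (re)generated, not every frame.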

You will love this: A Framework for Real-Time, Deformable Terrain.

For LOD and tessellation: LOD is not a prerequisite for tessellation. You can use tessellation for more sophisticated effects, such as adding detail by recursive subdivision of rough geometry. Linking it with LOD is simply a very good optimization that avoids keeping LOD mesh levels in RAM, since you just have your "base mesh" and subdivide it (although this will be an unsatisfying optimization, imho).

Now some deeper info on GPU and CPU exclusive terrain.

GPU Generated Terrain (Procedural)

As written in the NVidia article Generating Complex Procedural Terrains Using the GPU:

1.2 Marching Cubes and the Density Function

Conceptually, the terrain surface can be completely described by a single function, called the density function. For any point in 3D space (x, y, z), the function produces a single floating-point value. These values vary over space—sometimes positive, sometimes negative. If the value is positive, then that point in space is inside the solid terrain.

If the value is negative, then that point is located in empty space (such as air or water). The boundary between positive and negative values—where the density value is zero—is the surface of the terrain. It is along this surface that we wish to construct a polygonal mesh.
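As a toy illustration of that sign convention (this is my sketch, not code from the article; noise3 stands in for any 3D noise function):

```cpp
float noise3(float x, float y, float z); // hypothetical 3D noise, e.g. Perlin

// Density function: > 0 inside solid terrain, < 0 in air/water,
// == 0 exactly on the terrain surface.
float density(float x, float y, float z)
{
    float d = -y;                                      // flat ground at y == 0
    d += noise3(x * 0.1f, y * 0.1f, z * 0.1f) * 5.0f;  // displace the surface
    return d;
}
```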

Using Shaders

The density function used for generating the terrain must also be available to the collision-detection shader, and you have to fill an output buffer containing the collision locations, if any...
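In CPU terms, that shader would boil down to something like the following (a sketch using the density function from above; on the GPU the loop becomes one invocation per candidate point, writing into an output buffer instead):

```cpp
#include <vector>

struct Point { float x, y, z; };

// Test candidate points against the terrain; the returned vector
// plays the role of the shader's output buffer.
std::vector<Point> detectCollisions(const std::vector<Point>& candidates)
{
    std::vector<Point> hits;
    for (const Point& p : candidates)
        if (density(p.x, p.y, p.z) > 0.0f) // inside solid terrain
            hits.push_back(p);
    return hits;
}
```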

CUDA

See: https://www.youtube.com/watch?v=kYzxf3ugcg0

Here someone used CUDA, based on the NVidia article, and it implies the same thing: to perform collision detection in CUDA, the density function must be shared.

This will, however, make the transform feedback technique a little harder to implement.

Both shaders and CUDA imply resampling/recalculating the density at one or more locations, just for the collision detection of a single object.

CPU Terrain

Usually, this implies a set of geometry stored in RAM as vertex/index-buffer pairs, which is regularly processed by the shader pipeline. Since you have the data available here, you will also most likely have a collision mesh, a simplified representation of your terrain against which you perform collision tests.
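With CPU-side terrain you can hand the data directly to a physics library such as Bullet. A minimal sketch using Bullet's btHeightfieldTerrainShape (my example; double-check the constructor arguments against your Bullet version):

```cpp
#include <btBulletDynamicsCommon.h>
#include <BulletCollision/CollisionShapes/btHeightfieldTerrainShape.h>
#include <vector>

// Wrap a CPU-side heightmap as a static Bullet collision body.
btRigidBody* makeTerrainBody(const std::vector<float>& heights,
                             int width, int length,
                             float minHeight, float maxHeight)
{
    auto* shape = new btHeightfieldTerrainShape(
        width, length, heights.data(),
        1.0f,               // height scale (ignored for PHY_FLOAT)
        minHeight, maxHeight,
        1,                  // up axis: 1 == y
        PHY_FLOAT,          // heights stored as floats
        false);             // flipQuadEdges

    btTransform xform;
    xform.setIdentity();
    auto* motion = new btDefaultMotionState(xform);

    // Mass 0 makes the body static, which is what terrain should be.
    btRigidBody::btRigidBodyConstructionInfo info(0.0f, motion, shape);
    return new btRigidBody(info);
}
```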

Alternatively, you could give your terrain a set of colliders marking the allowed paths, which is, imho, how the early PS1 Final Fantasy games did it (they actually don't really have terrain in the sense we understand terrain today).

This short answer is neither extensive nor complete. I just tried to give you some insight into some of the concepts used in the dozens of existing solutions.

Some more reading: http://prideout.net/blog/?tag=opengl-transform-feedback.

MABVT
  • Very insightful. I have a fair bit of extra reading to do. CPU terrain is more what I am aiming for, as the plan is to supply the geometry to something like Bullet3D or PhysX. Unless of course that turns out to be a terrible idea for some reason. – Mako_Energy May 18 '17 at 10:58