14

I've seen the occasional article suggest ordering your vertices from nearest to furthest from the camera when sending them to OpenGL (for any of the OpenGL variants). The reasoning given is that OpenGL will not fully process/render a vertex if it is behind another vertex that has already been rendered.

Since ordering vertices by depth is a costly part of any project, because the ordering typically changes frequently, how common or necessary is such a design?

I had previously thought that OpenGL would "look" at all the vertices submitted and process its own depth buffering on them, regardless of their order, before rendering the entire batch. But if in fact a vertex gets rendered to the screen before another, then I can see how ordering might benefit performance.
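
For reference, this is the kind of standard depth-test setup I'm assuming (a minimal sketch with plain GL calls, not code from any particular project):

```cpp
// Standard depth-buffer setup: with this enabled, visibility is resolved per
// fragment by the depth test, regardless of the order geometry is submitted in.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);                               // keep the nearest fragment

// once per frame:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // reset color and depth
// ... submit geometry in any order; the depth test produces the same image ...
```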

Is drawing front-to-back necessary for optimizing renders?

johnbakers
  • 24,158
  • 24
  • 130
  • 258
  • Are you asking about OpenGL or OpenGL ES? Because the answers *will* differ, based on the very real hardware differences that these two platforms execute on. – Nicol Bolas Mar 28 '13 at 04:29
  • That's good to know. I'm looking at cross-platform graphics for both systems, so in that regard answers pertaining to both or either are useful. – johnbakers Mar 28 '13 at 04:37

2 Answers

11

Once a primitive is rasterized, its z value can be used to do an "early z kill", which skips running the fragment shader. That's the main reason to render front-to-back. Tip: When you have transparent (alpha textured) polygons, you must render back-to-front.
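
As a minimal sketch of what that implies on the application side (the DrawItem struct and the precomputed view-space depths are assumptions for illustration, not anything OpenGL itself requires): sort opaque draws nearest-first so early z can reject hidden fragments, and transparent draws farthest-first so blending composites correctly.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-draw record; a real engine would carry mesh/material handles too.
struct DrawItem {
    float viewSpaceDepth;   // distance from the camera, assumed precomputed
    bool  transparent;
};

void sortForSubmission(std::vector<DrawItem>& opaque,
                       std::vector<DrawItem>& transparent) {
    // Opaque: nearest first, so later (farther) fragments fail the depth test
    // early and skip the fragment shader.
    std::sort(opaque.begin(), opaque.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return a.viewSpaceDepth < b.viewSpaceDepth;
              });
    // Transparent: farthest first, so alpha blending composites correctly.
    std::sort(transparent.begin(), transparent.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return a.viewSpaceDepth > b.viewSpaceDepth;
              });
}
```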

The OpenGL spec defines a state machine and does not specify in what order the rendering actually happens, only that the results should be correct (within certain tolerances).

Edit for clarity: What I'm trying to say above is that the hardware can do whatever it wants, as long as the primitives appear to have been processed in order.

However, most GPUs are streaming processors and their OpenGL drivers do not "batch up" geometry, except perhaps for performance reasons (minimum DMA size, etc.). If you feed in polygon A followed by polygon B, they are fed into the pipeline one after the other and are processed independently (for the most part) of each other. If there are a sufficient number of polys between A and B, then there's a good chance A completes before B, and if B was behind A, its fragments will be discarded via "early z kill".

Edit for clarity: What I'm trying to say above is that since the hardware does not "batch up" geometry, it cannot do the front-to-back ordering automatically.

Rahul Banerjee
  • 2,343
  • 15
  • 16
  • 1
    "*does not specify in what order the rendering actually happens*" This statement is very misleading. The specification is *very* clear as to the order that rendering commands are processed in. While implementations are allowed to vary the order internally, they *cannot* do anything that would make such alterations of order apparent to the user. In short, they must work *as if* they processed things in order. – Nicol Bolas Mar 28 '13 at 04:28
  • I believe his answer was referring to the order of rendering vertices, not the order of the actual rendering commands. – johnbakers Mar 28 '13 at 04:39
  • @SebbyJohanns: So was I. The OpenGL specification is *very clear* about the order in which primitives are processed. An implementation may do whatever it likes behind the scenes, but it must render everything *as if* it was processed in the order specified by the user. And that means the sequence of triangles in a single rendering command too. – Nicol Bolas Mar 28 '13 at 05:29
  • I've edited my post and clarified things (hopefully). If you have further comments, let me know. – Rahul Banerjee Mar 28 '13 at 07:33
  • In Android, you should always try to draw objects front to back to make use of early z-culling. In our case it gave a significant performance increase (5-10 fps), especially when objects with simple shaders occlude objects with complex fragment shaders. Some GPUs (e.g. PowerVR) have an advanced tile-based rendering pipeline which lets you draw objects regardless of order, but on other hardware (namely, Tegra and Adreno GPUs) the impact of early z-culling is significant. – keaukraine Apr 01 '13 at 14:44
  • Can somebody cite the source? I have trouble finding where exactly the OpenGL specification specifies apparent draw order of primitives. – Emperor Orionii May 27 '13 at 09:32
  • Since I don't have the resources of Pixar to create multi-thousand-vertex objects, I have to use transparency, wrapping transparent background images around simpler objects. Because of this I'm forced to draw back to front, or the transparent areas get blacked out. When I switched from surfaceview to glsurfaceview, I was hoping to forgo calculating myself which objects are blocked (by walls, for instance) and don't have to be drawn. glsurfaceview has rendered fast enough that all those wasted renders have been tolerable, although in principle I hate that it's doing them. – Androidcoder Jan 01 '19 at 15:58
3

You are confusing a few concepts here. There is no need to re-order vertices (*). But you should draw opaque objects front to back. This enables what is called "early z rejection" on the GPU. If the GPU knows from the z test that a pixel is not going to be visible, it does not have to run the shader, do texture fetches, etc. This applies per object/draw call, though, not to the individual vertices within a draw call.

A simple example: You have a player character and a sky background. If you draw the player first, the GPU never has to do the sky's texture lookups for the pixels the player covers. If you do it the other way around, you first draw all of the sky and then cover part of it up.
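
In code, the ordering described above is just the order of the two draw calls (drawPlayer and drawSky are hypothetical helpers, each issuing one draw call; this is a sketch, not a complete frame):

```cpp
// With the depth test enabled, drawing the near, opaque player first fills the
// depth buffer for the pixels it covers, so the sky's fragments behind it fail
// the depth test and their shading work is skipped.
glEnable(GL_DEPTH_TEST);

drawPlayer();   // near object: writes depth for every pixel it covers
drawSky();      // far background: occluded fragments are rejected early
```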

Transparent geometry needs to be drawn back to front, of course.
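
The usual state for that back-to-front transparent pass looks something like this (a common pattern rather than a requirement of the API): keep the depth test so opaque geometry still occludes the transparent objects, but disable depth writes so they don't hide each other, and enable standard alpha blending.

```cpp
// Transparent pass, drawn after all opaque geometry, sorted back to front:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // standard "over" blending
glDepthMask(GL_FALSE);   // still test against opaque depth, but don't write it
// ... draw transparent objects, farthest first ...
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
```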

(*) Vertices can be re-ordered for better performance, but taking advantage of early z is much more important and is done per object.

starmole
  • 4,974
  • 1
  • 28
  • 48