5

I'm making a 2D game using OpenGL. I want to do the drawing like this: first I copy the vertex data of all objects I want to draw into VBOs (one VBO per texture/shader), then draw each VBO in a separate draw call. It seemed like a good idea until I realized it will mess up the drawing order - the draw calls won't necessarily happen in the order the objects were loaded into the VBOs. I thought of using a depth buffer to sort the items - every new object to draw gets a slightly higher Z position. The question is, how much should I increment it by to avoid problems? AFAIK there can be two kinds of problems: if I make the increment too large, I will have a limited number of objects I can draw in a single frame, and if I make it too small, precision loss in the depth buffer might make overlapping images be drawn in the wrong order. To summarize:

1) What should the front and back values of my orthographic projection be? 0 to 1? -1 to 1? 1 to 2? Does it matter?

2) If I use <cmath>'s nextafter() for incrementing the Z position, what kind of trouble can I run into? How do OpenGL and the depth buffer react to subnormal floats? If I started with std::numeric_limits<float>::min() and ended at 1, is there anything else I should worry about? (A sketch of what I mean follows below.)
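A minimal sketch of the incrementing I have in mind (the helper name `nextSpriteZ` is just for illustration):

```cpp
#include <cmath>   // std::nextafter
#include <limits>  // std::numeric_limits

// Start at the smallest positive normalized float and take the smallest
// representable step toward 1.0f for every sprite queued this frame.
float spriteZ = std::numeric_limits<float>::min();

float nextSpriteZ()
{
    float z = spriteZ;
    spriteZ = std::nextafter(spriteZ, 1.0f);
    return z;
}
```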

Xirdus
  • The values of `nearVal` and `farVal` very much matter. By default, the depth range is [**0**,**1**] and the depth buffer is fixed-point, so if you used `nearVal=0.0` and `farVal=1.0` for instance, then the smallest depth you could distinguish would be **1.0/(256.0*256.0)** (16-bit). That is a terribly awkward number and subject to floating-point shenanigans, so I suggest that instead you set your `farVal` such that the depth buffer is divided into integers. That means you want the range between `nearVal` and `farVal` to equal the number of integer values your depth buffer can represent. – Andon M. Coleman Jun 05 '14 at 22:29
  • you'll need to manually sort if you use blending, watch out for that one – paulm Jun 05 '14 at 22:30
  • @AndonM.Coleman I see nothing awkward in that number. – Xirdus Jun 07 '14 at 19:48
  • @paulm I'm doing no alpha blending between sprites. – Xirdus Jun 07 '14 at 19:48

1 Answer

4

First and foremost, you need to know the bit-depth of your depth buffer. Generally the depth buffer is fixed-point, either 16-, 24- or 32-bit.

Given a fixed-point depth buffer and the default depth range [0,1] you can make every integer value represent a uniquely distinguishable depth by using an orthographic projection matrix with 0.0 for nearVal and:

  • 16-bit:   farVal = 65535.0
  • 24-bit:   farVal = 16777215.0      // Most Common Configuration
  • 32-bit:   farVal = 4294967295.0

Then, you can assign your layered sprites up to farVal + 1 different depths (always use an integer value for sprite depth and begin with 0) and not worry about the depth buffer being unable to distinguish between the layers. In other words, the precision of your depth buffer dictates the maximum number of layers you can have.
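A minimal sketch of that setup, assuming the common 24-bit case and using the fixed-function glOrtho for brevity (if you build the projection matrix for a shader yourself, the same nearVal/farVal apply); windowWidth and windowHeight are placeholders:

```cpp
#include <GL/gl.h>  // legacy GL header; platform-specific setup omitted

// Placeholder window size; substitute your actual framebuffer dimensions.
const double windowWidth  = 800.0;
const double windowHeight = 600.0;

void setupDepthFor2D()
{
    // Assumes a 24-bit fixed-point depth buffer (the most common configuration).
    const double farVal = 16777215.0; // 2^24 - 1 distinguishable integer depths

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Typical 2D setup: origin at the top-left corner, nearVal = 0.0.
    glOrtho(0.0, windowWidth, windowHeight, 0.0, 0.0, farVal);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

// Place the sprite for layer N at eye-space z = -(double)N (N = 0, 1, 2, ...);
// each integer layer then maps to exactly one fixed-point depth value. With
// GL_LESS, sprites that should appear in front need the smaller layer value.
```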

Andon M. Coleman
  • OK, but how do I get the depth buffer precision (i.e. number of bits)? – Xirdus Jun 07 '14 at 19:55
  • You can do the following: `GLint depth_bits; glGetIntegerv (GL_DEPTH_BITS, &depth_bits);`. As for how you actually request a certain depth buffer precision, that depends on the framework you are using. But that is something that is done at OpenGL context creation time (unless you use an FBO, but that just complicates things more than they need to be right now). Many frameworks default to 24-bit if you do not explicitly request a specific amount. – Andon M. Coleman Jun 07 '14 at 19:59
  • "that just complicates things more than they need to be right now" are you trying to insult me? Anyway, thanks a lot. – Xirdus Jun 07 '14 at 20:05
  • @Xirdus: No, not at all. I mean I am trying to avoid discussing any extra legwork here. Setting up an FBO would unnecessarily complicate things, but it ***would*** avoid any platform-specific procedures associated with setting the precision of the *default* framebuffer's depth buffer. – Andon M. Coleman Jun 07 '14 at 20:08
  • FBO isn't that hard, you know. Also, your method of querying GL_DEPTH_BITS seems to be deprecated. But that's not a problem for me, because GLFW allows you to specify it explicitly, and has a documented default of 24. – Xirdus Jun 08 '14 at 15:42
  • Explaining how to blit the color buffer from an FBO into the default framebuffer and setup a renderbuffer to store the depth buffer would not add a whole lot to the conversation so that is why I did not bother. Not because I thought it was particularly complicated to do, just that it drives the conversation in a different unnecessary direction. Also, querying `GL_DEPTH_BITS` from the default framebuffer *is* invalid in a ***core*** context, you can only query it from FBO attachments. There is no way in ***core*** GL to query the size of the default framebuffer's depth/color/stencil buffers :-\ – Andon M. Coleman Jun 08 '14 at 19:03
  • If you use an FBO, you can query `GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE` using `glGetFramebufferAttachmentParameteriv (...)`. But there is not much point, because you would already know the size of the depth attachment since you have to create it manually ;) – Andon M. Coleman Jun 08 '14 at 19:07
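For reference, a minimal sketch of that FBO query (core profile; assumes a framebuffer object with a depth attachment is currently bound):

```cpp
GLint depthBits = 0;
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                      GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE,
                                      &depthBits);
// depthBits now holds the size of the depth attachment in bits (e.g. 24).
```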