
I noticed that if I bind my depth buffer before the color buffer, the application works as intended:

// Create and allocate the depth renderbuffer first
glGenRenderbuffers(1, &_depthbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _depthbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _sw, _sh);
// Then create the color renderbuffer and back it with the EAGL layer's storage
glGenRenderbuffers(1, &_renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];

However, binding the depth buffer afterwards causes nothing to render; even my glClearColor setting is ignored:

// Create the color renderbuffer first and back it with the EAGL layer's storage
glGenRenderbuffers(1, &_renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];
// Then create and allocate the depth renderbuffer
glGenRenderbuffers(1, &_depthbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _depthbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _sw, _sh);

I've come to understand some of the flow of OpenGL ES 2.0 by researching the individual components thoroughly, but this seems to be the one thing that every tutorial and book just does without explaining why. Any ideas? Is this even an issue, or is something possibly wrong in the rest of my setup? (If so, I'll include all the code.)

EDIT

@cli_hlt - the depth buffer is already attached to the framebuffer:

// Create the framebuffer and attach both renderbuffers to it
glGenFramebuffers(1, &_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthbuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _renderbuffer);
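
For reference, a quick sanity check at this point (just a sketch, not taken from my actual setup code) would be to query framebuffer completeness after making the attachments:

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Framebuffer incomplete: 0x%x", status);
}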

EDIT

Depth bound before:

[two screenshots: the scene renders as expected]

Depth bound after:

[screenshot: the render is plain black]

Matisse VerDuyn
  • Check the answer to this question: http://stackoverflow.com/questions/4361516/request-a-depthbuffer-in-opengl-es-for-iphone - it seems you're missing querying the buffer size and attaching the buffer to the framebuffer. – cli_hlt Mar 28 '12 at 14:47
  • That comes later; if it wasn't there, I wouldn't get any use out of the depth buffer, regardless of my question on order sequence. – Matisse VerDuyn Mar 28 '12 at 15:00
  • Ok. Are _sw and _sh correct? I'm asking because in the answer linked above, the poster did it exactly the second way, and since that answer was accepted I assumed it was working. – cli_hlt Mar 28 '12 at 15:19
  • I believe all the necessary components for the application to run are there; the only thing that changes is the order in which the depth and color buffers are bound and given storage. – Matisse VerDuyn Mar 28 '12 at 15:48
  • @MatisseVerDuyn I guess telling us that the rendering is plain black would have sufficed to clarify the problem. No need for a bunch of large images. But I'll judge this in favour of you trying to provide a good explanatory question. – Christian Rau Mar 28 '12 at 16:14
  • @ChristianRau Thanks for your graciousness. – Matisse VerDuyn Mar 28 '12 at 16:24
  • I don't know much about OpenGL, but isn't binding the render buffer to a depth buffer incorrect? Binding the render buffer means that you are specifying which buffer to render to globally. So basically, the old one gets replaced by the new one. The color buffer is the correct place to render to, so when you set it last you get the correct result. The depth buffer is NOT the place to render to, so when you set it last you get an incorrect result (only the depth information gets rendered) – borrrden Apr 15 '12 at 05:16
  • @borrrden you might be on the right track, as the second glBindRenderbuffer() seems to overwrite the first. However, I don't think that's where the issue lies, since they both make a [similar] call to glRenderbufferStorage() using the current render buffer (which means that order here is irrelevant). I really think the magic happens in these lines: `glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _sw, _sh);`, and `[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];` where order does seem to have some sort of significance. – Matisse VerDuyn Apr 16 '12 at 03:29

1 Answer


I may be totally wrong (I'm just getting a handle on this stuff myself), but as I understand it, glBind commands only tell OpenGL which renderbuffer/texture/whatever to use in subsequent calls. It's a strange model if you're used to object-oriented programming. In the boilerplate setup code you have to bind the buffer you created to the GL_RENDERBUFFER "slot" so that the next glRenderbufferStorage() or -[EAGLContext renderbufferStorage:fromDrawable:] call knows which buffer to operate on. I think the problem is simply that you're not binding your color buffer back to GL_RENDERBUFFER before calling -[EAGLContext presentRenderbuffer:], so you're actually presenting the depth buffer. Adding

glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);

before the presentRenderbuffer: call should fix this. …I think.
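
To make that concrete, here's a rough sketch of what one frame could look like with the rebind in place (using the variable names from your question; the clear color and draw calls are just placeholders):

glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
glViewport(0, 0, _sw, _sh);
glClearColor(0.5f, 0.5f, 0.5f, 1.0f);   // placeholder clear color
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// ... draw calls go here ...

// Rebind the color renderbuffer so presentRenderbuffer: shows it,
// not whichever renderbuffer happened to be bound last.
glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);
[_context presentRenderbuffer:GL_RENDERBUFFER];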

davehayden
  • I'm temporarily accepting this answer, though I'm looking for a more definitive answer from someone that is an expert on OpenGL ES 2.0 and can explain exactly why this works the way it does. – Matisse VerDuyn Apr 17 '12 at 19:21
  • Please let me know if that doesn't actually fix your problem, so I can go sort out what I don't understand about OpenGL. Thanks! – davehayden Apr 17 '12 at 20:37
  • It does work, and the concept behind your answer seems to be correct. However, in my mind, the `internalformat` parameter is still a black box. What is going on in `glRenderbufferStorage(...)` that makes it matter which buffer is bound to GL_RENDERBUFFER for subsequent calls? If order matters, then technically anything the depth buffer accomplishes is being overridden by `glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);`, which leads me to ask: is the depth buffer doing anything? It doesn't look like it... since omitting that section of code changes **nothing** in my app. lol – Matisse VerDuyn Apr 17 '12 at 20:52
  • Yep, you're right: if you're never calling glEnable(GL_DEPTH_TEST), GL isn't touching that depth buffer anyway (see the sketch after these comments for the minimal depth-testing setup), so there's no reason to create the depth renderbuffer, allocate storage for it, and attach it to the framebuffer. It looks like you're just doing everything in the fragment shader and the extent of your 3D geometry is a single square. Might as well just toss the depth stuff altogether. – davehayden Apr 17 '12 at 21:16
  • Well... here's the thing: I am using it. In earlier iterations, I was strictly moving one vertex at a time. This caused a lot of problems when transforming in certain directions (the vertex would be hidden behind vertices above and to the left of it, making a transformation in that direction pointless). To fix this, I added .001 to the z coord of the transforming vertex to make sure it stayed on top in all situations. This is still the way the program works; depth testing is being applied. – Matisse VerDuyn Apr 18 '12 at 03:33
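
For reference, the minimal depth-testing setup being discussed in these last comments (a sketch that assumes the renderbuffer configuration from the question) looks like this:

// One-time setup: without this, GL never reads or writes the depth renderbuffer
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);   // default comparison: smaller depth values win

// Per frame: clear depth along with color before drawing
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw calls here; overlapping fragments are now resolved by depth ...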