
My situation is this: a 2D image, uploaded as a 2D texture, is produced by a software renderer that draws what looks like a "3D" scene. OpenGL is then used for essentially nothing more than displaying this 2D texture. As a result, even though the output appears to be 3D, no shader I write can make use of the depth buffer, because there is nothing in it. I would like to populate the depth buffer so that such shaders become possible.

So, I would like to somehow populate the depth buffer based on my image. I think this should be doable, because the software renderer in question can produce a "depth map" image alongside its "regular" image as a rendering mode; the depth map looks exactly like a rendering of the depth buffer (greyscale, objects closer to the camera are black). So I suppose my question is: is it possible to translate a "pre-rendered" image representing depth into the actual depth buffer, and how would I go about doing it?

Edit: If this is helpful, I am specifically working with OpenGL 3.3.


Edit 2: Continuing to research what I might be able to do here, I found this discussion, which suggests I "either use framebuffer objects or a fragment shader which writes to gl_FragDepth." The discussion quickly becomes a bit much for me to digest. I think I understand the concept of a fragment shader that writes to gl_FragDepth, but how does this actually work in practice?

I am thinking I should do something like the following pseudocode:

program = createProgram(); //write to gl_FragDepth in the frag shader
glUseProgram(program);

glColorMask(GL_FALSE,GL_FALSE,GL_FALSE,GL_FALSE);
glEnable(GL_DEPTH_TEST);
glGenTextures(1, &depth_texture);
glBindTexture(GL_TEXTURE_2D, depth_texture);

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, depth->width, depth->height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depth->pixels);
glDisable(GL_DEPTH_TEST);
glBindTexture(GL_TEXTURE_2D, 0);

Do I need to enable depth testing?


Edit 3:

If I understand correctly, after doing some more reading, I think I need to do something like the following; however, I can't quite get it to work. Does something here look glaringly incorrect? What I find happening is that, in the frag shader, the sampler2Ds tex0 and tex1 somehow contain the same values. As a result, I am either able to write the color values to gl_FragDepth or the depth values to color, which creates interesting but unhelpful results.

Summarized Frag Shader:

in vec2 uv;                 // texture coordinates from the vertex shader
out vec4 color;
uniform sampler2D tex0; // color values
uniform sampler2D tex1; // depth values

void main(void) {
    color = texture(tex0, uv);
    gl_FragDepth = texture(tex1, uv).z;
}

Summarized OpenGL:

// declarations
static GLuint vao;
static GLuint texture = 1;
static GLuint depth_texture = 2;

// set up shaders
program = createProgram();
glUseProgram(program); //verified that this is working

// enable depth testing
glEnable(GL_DEPTH_TEST);

// prepare dummy VAO
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// prepare texture for color values
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);

// prepare texture for depth values
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &depth_texture);
glBindTexture(GL_TEXTURE_2D, depth_texture);

// disable depth mask while working with color values
glDepthMask(GL_FALSE);

// select GL_TEXTURE0 and bind the color values
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);

// specify texture image for color values
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_width, tex_height, 0, TEX_FORMAT, TEX_TYPE, fb->pixels);

// enable depth mask while working with depth values
glDepthMask(GL_TRUE);

// select GL_TEXTURE1 and bind the depth values
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, depth_texture);

// specify texture image for depth values
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_width, tex_height, 0, TEX_FORMAT, TEX_TYPE, fb->depth);

// draw
glViewport(win_x, win_y, win_width, win_height);
glDrawArrays(GL_TRIANGLES, 0, 3);
  • @Rabbid76 I am a bit confused by what you are saying. I have updated my question to show my code and what I am doing; it seems I am very close to having this working, so I am not sure why you say I need a Framebuffer Object, as I seem to be successfully writing color and depth values (albeit the wrong values) without one. I would super appreciate it if you would take a look at my edited question and let me know if (based on that) I do in fact need this Framebuffer Object. Thanks so much for your help!! – Aaa Jul 11 '18 at 16:25
  • What Rabbid76 suggests is rendering into an FBO instead of the default *frame buffer*. To write the depth into an FBO you need a special texture, defined with `GL_DEPTH_COMPONENT`. Then, in a second pass, you can render the FBO into the default frame buffer (see the sketch after these comments). – Ripi2 Jul 11 '18 at 16:29
  • Sorry, but with that said, is rendering into a FBO instead of the default frame buffer an alternative to what I am doing or a correction to what I am doing? – Aaa Jul 11 '18 at 16:41
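
For context, a minimal sketch of the setup Ripi2 describes; the names, sizes, and formats here are assumptions, not code from the discussion:

// Off-screen framebuffer with a color attachment and a GL_DEPTH_COMPONENT depth
// attachment; render into it, then draw/blit it to the default frame buffer in a second pass.
GLuint fbo, fbo_color, fbo_depth;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenTextures(1, &fbo_color);
glBindTexture(GL_TEXTURE_2D, fbo_color);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fbo_color, 0);

glGenTextures(1, &fbo_depth);
glBindTexture(GL_TEXTURE_2D, fbo_depth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, fbo_depth, 0);

// Always check completeness before rendering into the FBO.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) { /* handle the error */ }
glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default frame buffer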

2 Answers


I think you're a bit confused about textures. This is not a tutorial site, so I won't be too descriptive.

The goal of OpenGL is drawing pixels into the window (or into an internal frame buffer). The window is a 2D container of colors. Each color is a 4-component (RGBA) value in the [0, 1] range (which is eventually converted to the [0, 255] range).

Because several primitives (points, lines, or triangles) may be drawn at the same {x, y} position, a third coordinate (z, aka "depth") is used and stored in the so-called "depth buffer".

When a fragment shader outputs a pixel with {x, y, z} coordinates, a depth test is done comparing the fragment's z-value against the current value in the depth buffer. Depending on the function used for this test, the current value is replaced or not with the new z-value. This allows, for example, the typical situation where pixels nearer to the camera "survive" and the rest are "occluded" (read: forgotten).
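
For reference, a minimal sketch of the usual depth-test state; GL_LESS is the default comparison and keeps the fragment nearest to the camera:

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);                               // the test passes when the incoming z is smaller (nearer)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clears the depth buffer to 1.0 (farthest)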

A texture is nothing but a buffer with the feature that its values can be fetched by coordinates instead of indices. This process is called "sampling".

Textures can be 1-, 2-, or 3-dimensional buffers, each dimension addressed by coordinates in the [0, 1] range. You can specify what to do when your sample coordinates don't exactly match a "cell" in the texture. This is called "minification/magnification" filtering.
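
These choices are made per texture with glTexParameteri; GL_NEAREST and GL_CLAMP_TO_EDGE here are illustrative choices, not requirements:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);   // sample covers several cells
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);   // sample falls between cells
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // coordinates outside [0, 1]
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);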

As the buffers they are, textures can store many different types of values, from a single 'byte' to composed 'RGBA' values. They can store depth components, but don't confuse that with the frame depth buffer.
You must tell OpenGL how the texture is filled (e.g. read 4 float values, or e.g. read 1 byte) and how the values are internally stored and sampled. See the documentation.
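
Concretely, glTexImage2D takes the internal format, the source format, and the source type; the variable names here are hypothetical, just to show the pairing:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels); // 4 bytes per cell
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, w, h, 0, GL_RED, GL_FLOAT, float_values);          // 1 float per cell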

Now we're getting close to your issue. You can have one 2D texture that stores the colors and another 2D texture that stores the z-values. In the fragment shader you sample both and write the outputs: the {x, y} color and gl_FragDepth.
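
Such a fragment shader could look like this sketch; the sampler names and the assumption that the depth lives in the red channel are mine:

#version 330 core
in vec2 uv;                   // interpolated texture coordinates from the vertex shader
out vec4 color;
uniform sampler2D colorTex;   // the color image
uniform sampler2D depthTex;   // the pre-rendered depth image

void main(void) {
    color = texture(colorTex, uv);          // {x, y} color output
    gl_FragDepth = texture(depthTex, uv).r; // z output, read from the red channel
}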

Any data may come from the vertex-to-fragment interpolators, from textures, or from other intermediate shaders or buffers. The two-texture setup above is just one example.
The point is that you are the one who knows what data is stored where, and how to retrieve and use it to output {x, y, z} values.


EDIT, after your third edit

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,..., fb->depth)

The GL_RGBA type should match (or at least be easy to convert from) your fb->depth data. For example, if you have 'float' values, you can store them internally in the 'red' channel of the texture, with 32 bits of width:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F,..., fb->depth)

A float value in your fb->depth means that TEX_FORMAT should be a single-channel format like GL_RED, and TEX_TYPE should be GL_FLOAT.
If you have integer values (e.g. in the [0, 255] range), use different parameters or just normalize them: divide by 255 to get values in the [0, 1] range.
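
A sketch of that normalization, assuming the depth values arrive as 8-bit bytes; depth_u8 is a hypothetical name, tex_width and tex_height are from your question:

// Convert {0, 255} bytes to [0, 1] floats, then upload as a single-channel float texture.
float *depth_f = malloc(sizeof(float) * tex_width * tex_height);
for (int i = 0; i < tex_width * tex_height; ++i)
    depth_f[i] = depth_u8[i] / 255.0f;
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, tex_width, tex_height, 0, GL_RED, GL_FLOAT, depth_f);
free(depth_f);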

In the fragment shader read the same channel you set in glTexImage2D. So, for the fb->depth:

gl_FragDepth = texture(tex1, uv).r;

Don't forget to define how the uv texture coordinates are obtained. Likely they are an "out" of the vertex shader and an "in" of the fragment shader.
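
Since your code draws a single triangle from a dummy VAO, one option, sketched here rather than taken from your code, is the common "fullscreen triangle" trick, which derives both position and uv from gl_VertexID so no vertex buffer is needed:

#version 330 core
out vec2 uv;

void main(void) {
    // Vertices 0, 1, 2 yield uv = (0,0), (2,0), (0,2): one triangle covering the whole screen.
    uv = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
    gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);
}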

  • Thank you for this; it turns out I was more than "a bit" confused about textures. I've done lots of reading since, but this did set me on track to understanding my problem, and I'm nearing a solution as a result. So, I can have a 2D texture that stores colors and one that stores z, and I have the data to populate those textures, but I am confused by how I then manage those textures. I have updated my question to clarify what, more technically speaking, has me confused; if you could help me understand a little more technically (based on the provided code), I would greatly appreciate it! – Aaa Jul 11 '18 at 16:14

I want to point out that Ripi2's answer truly guided me through this question; however, I thought I'd write up an answer to address my question, what I ended up needing to do, and what was wrong with my question, having now found my way through this problem.

First, what was wrong with my question? I had several misunderstandings, most fundamentally about OpenGL textures. I misunderstood a texture to be a "container" for an image, whereas my understanding now is that it is a buffer and can contain information other than simply "an image" (for example, a texture can store the z-depth data).

My problem still existed as I understood it, though: I had two pre-rendered images, one holding the color data and one holding the depth buffer data. To solve this, I first had to understand how to manage two textures, one containing color data and one containing depth data, so that I could then sample them.

What I was missing in my OpenGL code (from my third edit in my question) was essentially the following:

// get the uniform variables location
depthValueTextureLocation = glGetUniformLocation(program, "DepthValueTexture");
colorValueTextureLocation = glGetUniformLocation(program, "ColorValueTexture");

// specify the shader program to use
glUseProgram(program);

// bind the uniform variables locations
glUniform1i(depthValueTextureLocation, 0);
glUniform1i(colorValueTextureLocation, 1);

My Frag shader samplers then ended up looking like this to match:

 uniform sampler2D ColorValueTexture;
 uniform sampler2D DepthValueTexture;

It was only at this point that A) I not only had the textures but also understood how to sample them in my shader, and B) I had my data in the right places, so I could learn what exactly was going on when I was drawing. I was seeing a confusing result where data from one of my textures seemed to be appearing in the other; I was able to resolve this by splitting my "drawing phase" into two smaller phases, like so:

First I drew using the color texture:

// select the color value binding
glActiveTexture(GL_TEXTURE0 + 1);
glBindTexture(GL_TEXTURE_2D, texture);

// draw
glDrawArrays(GL_TRIANGLES, 0, 3);

And then I drew using the depth texture:

// select the depth value binding
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, depth_texture);

// draw
glEnable(GL_DEPTH_TEST);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisable(GL_DEPTH_TEST);

It's important to notice that I specifically only enabled depth testing when working with the depth texture!

  • Why would you call `glDrawArrays` two times? Your fragment shader samples two textures at once (`GL_TEXTURE0` and `GL_TEXTURE0 + 1`) so I would guess the first `glDrawArrays` is completely redundant. – Jacek Jun 26 '23 at 17:10