I'm currently learning how to generate an omnidirectional shadow map for a point light from the following resource: https://learnopengl.com/Advanced-Lighting/Shadows/Point-Shadows

The author uses layered rendering: instead of doing 6 render passes, one per cubemap face, we render each mesh in a single pass and use the geometry shader stage to emit every triangle 6 times, once per cubemap face (so roughly 6 times the fragment shader invocations per mesh).

He achieves this with the following vertex, geometry, and fragment shaders respectively:

#version 330 core
layout (location = 0) in vec3 aPos;

uniform mat4 model;

void main()
{
    gl_Position = model * vec4(aPos, 1.0);
}

#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices=18) out;

uniform mat4 shadowMatrices[6];

out vec4 FragPos; // FragPos from GS (output per emitvertex)

void main()
{
    for(int face = 0; face < 6; ++face)
    {
        gl_Layer = face; // built-in variable that specifies to which face we render.
        for(int i = 0; i < 3; ++i) // for each triangle vertex
        {
            FragPos = gl_in[i].gl_Position;
            gl_Position = shadowMatrices[face] * FragPos;
            EmitVertex();
        }    
        EndPrimitive();
    }
}

#version 330 core
in vec4 FragPos;

uniform vec3 lightPos;
uniform float far_plane;

void main()
{
    // get distance between fragment and light source
    float lightDistance = length(FragPos.xyz - lightPos);

    // map to [0;1] range by dividing by far_plane
    lightDistance = lightDistance / far_plane;

    // write this as modified depth
    gl_FragDepth = lightDistance;
}  
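
For context, in the lighting pass the article then samples this cubemap with the fragment-to-light vector and compares the stored value against the fragment's current distance to the light. A minimal sketch of that comparison, based on my reading of the same article (names such as depthMap and FragPosWorld are mine, not from the code above):

#version 330 core
out float shadow;

in vec3 FragPosWorld;           // world-space position from the lighting pass' vertex shader

uniform samplerCube depthMap;   // the cubemap depth attachment written by the shaders above
uniform vec3 lightPos;
uniform float far_plane;

void main()
{
    // the light-to-fragment vector doubles as the cubemap lookup direction
    vec3 fragToLight = FragPosWorld - lightPos;

    // the stored value is distance-to-light divided by far_plane, so undo that mapping
    float closestDepth = texture(depthMap, fragToLight).r * far_plane;

    // current linear distance from the light to this fragment
    float currentDepth = length(fragToLight);

    // small bias to avoid shadow acne
    float bias = 0.05;
    shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
}

So the value stored in the shadow pass is meant to be the same radial distance that this comparison reads back.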

What I'm having a hard time understanding is: why bother keeping FragPos as an output passed from the geometry shader to the fragment shader?

I know he uses it to calculate a linear distance to the light position, but why can't we just get a linear z coordinate like so:

#version 330 core

uniform float u_Near;
uniform float u_Far;

float linearizeDepth(float depth)
{
    float z = depth * 2.0 - 1.0; // back to NDC 
    return (2.0 * u_Near * u_Far) / (u_Far + u_Near - z * (u_Far - u_Near));
}

void main()
{
    float depth = linearizeDepth(gl_FragCoord.z);
    gl_FragDepth = depth;
}

Also, I would like to understand how one could use the attenuation variables of a point light (constant, quadratic, linear) to calculate the far z plane of the projection matrix used for the shadow mapping.

Jorayen
  • "*Also I would like to understand how one could use the attenuation variables of a point light (constant quadratic linear) to calculate the far z plane of the projection matrix for the shadow mapping.*" I don't understand what you mean by that. There is no code here doing attenuation, so I fail to see how this relates to your question. – Nicol Bolas Apr 25 '20 at 23:36
  • Your code never calls `linearizeDepth`. Also, `gl_Layer = face;` doesn't work; all `out` variables are given undefined values when you call `EmitVertex`, and `gl_Layer` is a per-vertex value, not a per-primitive one (there are no such things as per-primitive values in GS's). – Nicol Bolas Apr 25 '20 at 23:37
  • I mean I want to understand the relation between point light attenuation and the far plane of the projection used for the shadow map. I don't see a point in posting code since this is not really about code. How would I define the projection far plane given the attenuation, rather than defining it arbitrarily? – Jorayen Apr 26 '20 at 01:01
  • About your second argument, I don't really know what to tell you other than that this shader code was copied from the article I've linked, which has worked for a lot of people, as can be seen in the comment section of the article. So I'm not sure why you say it isn't supposed to work. – Jorayen Apr 26 '20 at 01:02
  • "*So I'm not sure why you say it isn't supposed to work.*" Because it isn't required to. `EmitVertex` [makes the values of outputs undefined](https://www.khronos.org/registry/OpenGL/specs/gl/GLSLangSpec.4.60.html#geometry-shader-functions). `gl_Layer` is a [per-vertex output from the GS](https://www.khronos.org/registry/OpenGL/specs/gl/GLSLangSpec.4.60.html#geometry-shader-special-variables). Therefore, calling `EmitVertex` makes `gl_Layer`'s value undefined. The code exhibits undefined behavior; if it "works", then it is merely good fortune, not how to write proper code. – Nicol Bolas Apr 26 '20 at 03:03
  • "*I want to understand the relation between light point attenuation to the far plane of the projection of the shadowmap.*" There isn't a relation between them. Light attenuation has nothing to do with shadow mapping. Not unless you're using a shadow map to encode information that deals with attenuation for some reason. The two are entirely orthogonal. Why do you think that light attenuation needs a "far plane", let alone one that is associated with a shadow map? And what does this have to do with your primary question? – Nicol Bolas Apr 26 '20 at 03:05
  • *"I would like to understand how one could use the attenuation variables of a point light (constant quadratic linear) to calculate the far z plane"* - Do you mean the maximum range of the light source? Even a light with attenuation never becomes zero. You have to define a threshold (e.g. 1/256). The distance at which the light falls below that threshold is the maximum distance. To get that distance you have to solve a [Quadratic equation](https://en.wikipedia.org/wiki/Quadratic_equation): `threshold = a*maxDist^2+b*maxDist+c` – Rabbid76 Apr 26 '20 at 06:22
  • *"why can't we just get linear z coordinate like so:"* Of course you can, if `shadowMatrices[face]` contains a perspective projection matrix. But why should you do the computation per fragment if it is possible per vertex? Furthermore, you have to set the extra uniforms `u_Near` and `u_Far`, which have to correspond to the perspective projection matrix. – Rabbid76 Apr 26 '20 at 06:28
  • @NicolBolas So how can I assign an output variable in the geometry shader at all if `EmitVertex` makes outputs undefined? Also, in the spec it actually says `gl_Layer` is not a per-vertex output from the GS. – Jorayen Apr 26 '20 at 10:54
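
Update: two sketches of what I currently understand from the comments above; I'm not sure either is correct.

If `EmitVertex()` really leaves all outputs undefined afterwards (as Nicol Bolas says), then a safer variant of the geometry shader from the article would simply re-assign `gl_Layer` before every emitted vertex, with everything else unchanged:

#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices=18) out;

uniform mat4 shadowMatrices[6];

out vec4 FragPos;

void main()
{
    for(int face = 0; face < 6; ++face)
    {
        for(int i = 0; i < 3; ++i) // for each triangle vertex
        {
            gl_Layer = face;    // re-set per vertex, since outputs are undefined after EmitVertex()
            FragPos = gl_in[i].gl_Position;
            gl_Position = shadowMatrices[face] * FragPos;
            EmitVertex();
        }
        EndPrimitive();
    }
}

For the far plane, following Rabbid76's comment and assuming the usual attenuation factor 1.0 / (Kc + Kl*d + Kq*d*d) with a cutoff threshold like 1.0/256.0, the maximum distance is the positive root of the resulting quadratic. A sketch in GLSL-style syntax (in practice this would be computed on the CPU when building the shadow projection matrix; Kq is assumed to be non-zero):

float maxLightDistance(float Kc, float Kl, float Kq, float threshold)
{
    // attenuation reaches 'threshold' where Kq*d^2 + Kl*d + (Kc - 1/threshold) = 0;
    // take the positive root of that quadratic
    float C = Kc - 1.0 / threshold;
    return (-Kl + sqrt(Kl * Kl - 4.0 * Kq * C)) / (2.0 * Kq);
}

That distance would then be what I pass as far_plane and use as the far plane of the shadow projection.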

0 Answers