
I would like to "bypass" the classical light volume approach of deferred lighting.

Usually, when you want to affect pixels within a point light's volume, you can simply render a sphere mesh.

I would like to try another way to do that. The idea is to render a cube that encompasses the sphere: the cube is circumscribed around the sphere, so the center of each face is a point on the sphere. Then you only have to know, from your point of view, which fragments would be part of the circle (the sphere's projection on your screen) if you had rendered the sphere instead.

So the main problem is knowing which fragments have to be discarded. How could I do that? In the fragment shader, I have my camera's world coordinates, my fragment's world coordinates, my sphere's world center, and my sphere's radius. So I have the straight line whose direction vector is defined by the camera and fragment world positions, and I can build my sphere's equation. Finally, I can test whether the line intersects the sphere.

Is it correct to say that, from my point of view, if the line intersects the sphere, then this fragment must be considered a lit fragment (a fragment that would have been rendered if I had rendered a sphere instead)?
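
Here is roughly what I have in mind for the fragment shader (just a sketch; all the uniform and varying names are placeholders):

    #version 330 core

    uniform vec3  uCameraWorldPos;   // camera position, world space
    uniform vec3  uSphereCenter;     // light / sphere center, world space
    uniform float uSphereRadius;

    in  vec3 vFragWorldPos;          // world-space position of the cube's fragment
    out vec4 fragColor;

    void main()
    {
        // Line through the camera and this fragment of the cube.
        vec3 d = vFragWorldPos - uCameraWorldPos;
        vec3 m = uCameraWorldPos - uSphereCenter;

        // |camera + t*d - center|^2 = radius^2  ->  quadratic in t.
        float a = dot(d, d);
        float b = 2.0 * dot(m, d);
        float c = dot(m, m) - uSphereRadius * uSphereRadius;

        if (b * b - 4.0 * a * c < 0.0)
            discard;                 // the line never touches the sphere

        fragColor = vec4(1.0);       // lighting calculations would go here
    }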

Qzaac
  • Usually a fragment is only lit when it itself is inside the light's sphere. The discard check can then be done just by checking whether length(fragment - sphereCenter) <= sphereRadius. I don't exactly get what additional information you are trying to get from the camera-fragment ray? If the ray intersects the sphere but the fragment is not in the sphere, then it is beyond the light source's range and thus unlit. – BDL Jun 12 '16 at 12:53
  • @BDL, I think my post is not clear enough: you are not rendering a sphere mesh, but a cube mesh encompassing the supposed sphere mesh, so basically none of your fragments are on the sphere's surface. Thus the check "length(fragment - sphereCenter) <= sphereRadius" doesn't really mean anything here, because the fragment is not on the sphere. – Qzaac Jun 12 '16 at 13:12
  • Here is a scheme of what I mean: [link](http://images.google.fr/imgres?imgurl=http%3A%2F%2Fwww.svpvril.com%2FCosmology%2F4SphereCube.gif&imgrefurl=http%3A%2F%2Fwww.svpvril.com%2FCosmology%2Fcos6.html&h=418&w=443&tbnid=D3dtzXOKOOko2M%3A&docid=o_rRqiEC4k6wHM&ei=0V9dV82HDsP7Ur3JrJgJ&tbm=isch&iact=rc&uact=3&dur=616&page=1&start=0&ndsp=29&ved=0ahUKEwiNtLm6yKLNAhXDvRQKHb0kC5MQMwghKAAwAA&bih=775&biw=1600) – Qzaac Jun 12 '16 at 13:15
  • Yes, if the line defined by the camera position and the position of the fragment on the cube doesn't intersect the sphere, then you can discard the fragment. However, an area between the encompassing cube and the sphere can also satisfy the condition, so you're not perfectly discarding everything outside the sphere this way. I suppose the question is why you would want to do it this way? – Quinchilion Jun 12 '16 at 14:31
  • @Quinchilion: Thanks for your answer. I want to do it this way because: (1) if you want to have an accurate light volume with a mesh, you need a sphere with a lot of polygons; (2) for another type of light, the spotlight: a spotlight has two parameters, radius and length, for its cone light volume, and I think dynamically generating/adjusting the cone mesh with a geometry shader from a base cone is a painful process, so I prefer to deal with intersections (I don't know whether my approach is less optimized or not). – Qzaac Jun 12 '16 at 16:03
  • 2
    @Yoo: "*If your want to have an accurate light volume with mesh you need to have a sphere with a lot of polygons.*" But you don't *need* an "accurate light volume". The only reason you're using a light volume *at all* is to not have to render a full-screen quad, with a bunch of needless FS invocations. Your FS still needs to use the position and size of the sphere to do attenuation properly. So if your "light volume" is slightly smaller than the actual sphere size... so what? You still saved lots of FS invocations, compared to the full-screen quad. – Nicol Bolas Jun 12 '16 at 16:23

1 Answer


> Thus the check "length(fragment - sphereCenter) <= sphereRadius" doesn't really mean anything here because the fragment is not on the sphere.

So what?

The standard deferred shading solution for lights is to render a full-screen quad. The purpose of rendering a sphere instead is to avoid doing a bunch of per-fragment calculations for fragments which are outside of the light source's effect. This means that the center of that sphere is the light source, and its radius represents the maximum distance for which the source has an effect.

So the length from the fragment (that is, reconstructed from your g-buffer data, not the fragment produced by the cube) to the sphere's center is very much relevant. That's the length between the fragment and the light source. If that is larger than the sphere radius (AKA: maximum reach of the light), then you can cull the fragment.
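
For example, a minimal sketch of that check, assuming the g-buffer stores (or lets you reconstruct) the world-space position of the shaded point; every name below is a placeholder:

    #version 330 core

    uniform sampler2D uGBufferPosition;  // g-buffer channel holding world-space position
    uniform vec3      uLightPosition;    // point light position, world space
    uniform float     uLightRadius;      // maximum reach of the light

    in  vec2 texCoord;                   // screen-space UV for sampling the g-buffer
    out vec4 fragColor;

    void main()
    {
        // Cull based on the shaded point's distance to the light,
        // not on the cube fragment's own position.
        vec3 shadedPos = texture(uGBufferPosition, texCoord).xyz;
        if (length(shadedPos - uLightPosition) > uLightRadius)
            discard;

        fragColor = vec4(1.0);           // real lighting would go here
    }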

Or you can just let your light attenuation calculations do the same job. After all, in order for lights to not look like they are being cropped, that sphere radius must also be used with some form of light attenuation. That is, when a fragment is at that distance, the attenuation of the light must be either 0 or otherwise negligibly small.
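
For instance, one possible attenuation curve that reaches exactly 0 at the light's radius (this particular formula is just an example, not a requirement of the technique):

    // One possible falloff: smooth, and exactly 0 at dist == radius, so the edge of
    // the light volume is invisible. (Just an example curve, not a requirement.)
    float attenuate(float dist, float radius)
    {
        float x = clamp(1.0 - (dist * dist) / (radius * radius), 0.0, 1.0);
        return x * x;
    }

It would then be multiplied into the light's contribution, e.g. `lightColor * attenuate(length(shadedPos - uLightPosition), uLightRadius)`.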

As such... it doesn't matter if you're rendering a sphere, cube, or a full-screen quad. You can either cull the fragment or let the light attenuation do its job.


However, if you want to possibly save performance by discarding the fragment before reading any of the g-buffers, you can do this. Assuming you have access to the camera-space position of the sphere/cube's center in the FS:

  1. Convert the position of the cube's fragment into camera-space. You can do this by reverse-transforming gl_FragCoord, but it'd probably be faster to just pass the camera-space position to the fragment shader (see the vertex-shader sketch after this list). It's not like your VS is doing a lot of work or anything.

  2. Because the camera-space position is in camera space, it already represents a direction from the camera into the scene. So now, use this direction to perform part of ray/sphere intersection. Namely, you stop once you compute the discriminant (to avoid an expensive square-root). The discriminant is:

    // Ray from the camera (the origin in camera space) through this fragment,
    // intersected with the sphere: |t*d - center|^2 = radius^2.
    float A = dot(cam_position, cam_position);
    float B = -2.0 * dot(cam_position, cam_sphere_center);
    float C = dot(cam_sphere_center, cam_sphere_center) - (radius * radius);
    float Discriminant = (B * B) - (4.0 * A * C);
    

    If the discriminant is negative, discard the fragment. Otherwise, do your usual stuff.
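
For completeness, step 1 could be handled by a vertex shader along these lines (a sketch only; the uniform and attribute names are placeholders):

    #version 330 core
    layout(location = 0) in vec3 aPosition;   // cube vertex, model space

    uniform mat4 uModelView;     // model -> camera space
    uniform mat4 uProjection;

    out vec3 cam_position;       // camera-space position, interpolated for the FS

    void main()
    {
        vec4 camPos  = uModelView * vec4(aPosition, 1.0);
        cam_position = camPos.xyz;
        gl_Position  = uProjection * camPos;
    }

Since the camera sits at the origin in camera space, the interpolated cam_position is also the direction of the camera-to-fragment ray, which is exactly what the discriminant test in step 2 uses.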

Nicol Bolas
  • I think he actually has a point. The world space distance between the fragment and the sphere center is meaningless if that fragment lies on a cube encompassing the sphere. In that case, all the fragments will be outside the sphere. If you want to cull the light properly, you need to find the intersection between the fragment's ray and the sphere, instead of just the distance. If there is no such intersection, you can discard the fragment. – Quinchilion Jun 12 '16 at 14:15
  • That is, unless by fragment you mean the shaded point in the scene as reconstructed from the depth buffer. In that case, it's probably worth clarifying. – Quinchilion Jun 12 '16 at 14:32
  • I assumed that "fragment's position" means the world position of the fragment stored at this position (reconstructed from the depth buffer, etc.). If the position on the cube itself is meant, then things look different. But I still do not see any point in performing this relatively costly sphere-ray intersection. – BDL Jun 12 '16 at 14:34
  • @Quinchilion: Generally when doing lighting passes, the fragment that generated a particular FS invocation is relevant only in that it allows you to access the right g-buffer data. Once you've done that, it is effectively meaningless; you no longer care about the rasterizer's per-fragment data. So yes, when I referred to the position of the fragment, I meant after unpacking the g-buffer data. I've clarified that. – Nicol Bolas Jun 12 '16 at 14:38
  • @NicolBolas I suppose that the ray-sphere intersection could discard fragments even before you sample the depth buffer, saving some bandwidth. If you had to use a cube mesh for culling lights for whatever reason, I think it would be a good idea. – Quinchilion Jun 12 '16 at 14:48
  • @Quinchilion: You wouldn't need full ray/sphere intersection. You only need the discriminant (the stuff you take the square-root of). You only need to tell whether the rasterized fragment is in the path of the sphere, not where exactly it is. So if the discriminant is negative, you discard, but otherwise, you just do your normal stuff. – Nicol Bolas Jun 12 '16 at 14:57
  • @NicolBolas Right, that's what I meant. And considering that you can interpolate the fragment->sphere vector from the vertex shader, it becomes a trivial check. – Quinchilion Jun 12 '16 at 15:09
  • @Quinchilion: I wouldn't call it trivial. It requires several multiply operations and some dot-products. It's just not particularly burdensome. – Nicol Bolas Jun 12 '16 at 15:12
  • By "fragment position" I mean a fragment of the light volume that is rendered during the light pass. You make me doubt about my way of explaining problems. – Qzaac Jun 12 '16 at 16:10
  • @NicolBolas I think you misunderstood what I want to do. If you have a point light, you have to render a sphere mesh to affect all the fragments within the sphere's volume. But if you want to have an accurate light volume, you must render a complex sphere mesh (a lot of polygons...). I want to bypass that, so I want to render a cube encompassing my theoretical sphere (only 12 triangles for a cube), and then, **from my camera's point of view**, keep the light-volume fragments that _would have been rendered on the screen_ if I had rendered a sphere mesh. Which of the cube's fragments would I have to discard? – Qzaac Jun 12 '16 at 16:30
  • @Yoo: "*If you have a pointlight you have to render a sphere mesh to affect all fragments within the sphere volumes.*" No, you don't. You can render a full-screen quad, and just have the lighting model's attenuation make that light contribution 0 past the "sphere volume". The reason to use a sphere is as an optimization: to pre-cull those 0 contribution FS invocations. – Nicol Bolas Jun 12 '16 at 16:38
  • @NicolBolas If you render a full-screen quad and calculate each light's effect for each pixel, how could you know which pixel is outside "the sphere volume"? Did you mean a full-screen quad at the world position of the sphere volume? – Qzaac Jun 12 '16 at 17:18
  • @Yoo: Point lights do not have a volume; they're just points in space. They only have a volume if they have *attenuation*; if the strength of the light falls off with distance from the light. Thus, the light's "volume" is merely the point when the attenuation is so great that the light no longer has any visible effect. You can still *perform* calculations beyond that point, but you just won't notice anything. So you can render a full-screen quad over the screen, using light attenuation. Rendering a sphere is just an optimization of this, removing fragments that won't be affected. – Nicol Bolas Jun 12 '16 at 17:26
  • thanks a lot for your answers (@NicolBolas @Quinchilion), I guess I misunderstood some fundamental principles of deferred lighting. – Qzaac Jun 15 '16 at 10:33