
I'm studying shadow-mapping shaders at the moment, and one question that arises is: why do we first shade surfaces (Lambert/Phong) and then map shadows onto them, as opposed to just additively lighting pixels up? That strikes me as the opposite of how light actually works.

If we're using shadow mapping, it implies that we have created a texture that tells us which screen-space pixels are exposed to the light source. Why would we need the whole Lambert/Phong calculation then? Why not derive illumination directly by combining shadow maps?

Here is a pseudoalgorithm of the way I see it:

  1. All pixels of the output image are completely black by default.
  2. Generate unlit albedo screen texture
  3. Generate the shadowmap for the light
  4. Modify the output image. We modify only the pixels that are exposed to the light source (matching the shadow map depth ± bias, whatever), making the exposed pixels lighter according to light intensity, attenuation, etc.
  5. Select next light
  6. Goto step 3

The result of this would be an image with the correct light intensity. Now we multiply the intensity texture by the albedo texture and get the final image. As far as I can tell this only requires 3 textures at a time — is that too much memory?
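The scheme above can be sketched as follows (a toy CPU-side sketch, not real shader code; the per-light `visibility` masks stand in for the shadow-map depth test, and the array shapes and intensities are made up for illustration):

```python
import numpy as np

# Hypothetical 2x2 scene: per-pixel albedo, and per light a visibility
# mask derived from its shadow map (1 = exposed, 0 = in shadow) plus a
# scalar intensity with attenuation already folded in.
H, W = 2, 2
albedo = np.full((H, W, 3), 0.5)

lights = [
    {"visibility": np.array([[1, 1], [0, 1]]), "intensity": 0.8},
    {"visibility": np.array([[1, 0], [1, 1]]), "intensity": 0.4},
]

# Step 1: output intensity starts black; steps 3-6: for each light,
# brighten only the pixels its shadow map marks as exposed.
light_buffer = np.zeros((H, W))
for light in lights:
    light_buffer += light["visibility"] * light["intensity"]

# Final step: multiply the accumulated intensity by the albedo texture.
image = albedo * light_buffer[..., None]
```

Note that even in this sketch the shadow term ends up multiplying each light's contribution — the `visibility` mask gates the intensity — which is essentially what the conventional pipeline does too.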

I assume that there must be a reason why people don't already do this, I guess I just need someone to point out why.

BMC

1 Answer


Shadow mapping is limited by the precision of the shadow texture, both in resolution and in bit depth.

Analytical shading based on the light vector and the surface normal has much higher accuracy (setting aside normal maps) but also models a different aspect of the same phenomenon. While they overlap, together they provide a more accurate representation. Even if you leave out non-diffuse terms like specularity, the geometry alone (the only data captured by the depth pass) doesn't capture aspects such as micro-facets and small-scale self-shadowing.

Also, shadow mapping in itself is a binary operation: each depth comparison answers only "lit or not lit", so you need to accumulate many samples ("taps") or post-process the output in some way to obtain penumbras or any intermediate values.
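That accumulation of taps can be sketched as a toy percentage-closer filter (a CPU-side illustration, not shader code; the function name, the clamped 3×3 neighborhood, and the bias value are all assumptions for the example):

```python
import numpy as np

def pcf_visibility(shadow_map, x, y, fragment_depth, bias=0.005, radius=1):
    """Average the binary depth test over a (2*radius+1)^2 neighborhood.

    A single tap is binary (1 = lit, 0 = shadowed); averaging many taps
    yields fractional visibility, i.e. a soft penumbra edge.
    """
    h, w = shadow_map.shape
    taps = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Clamp taps to the map edge.
            sx = min(max(x + dx, 0), w - 1)
            sy = min(max(y + dy, 0), h - 1)
            # Binary test: is this fragment at least as close to the
            # light as the stored occluder depth (within the bias)?
            taps.append(1.0 if fragment_depth - bias <= shadow_map[sy, sx] else 0.0)
    return sum(taps) / len(taps)
```

A fragment near a shadow boundary gets some taps passing and some failing, so the returned visibility lands between 0 and 1 instead of snapping hard.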

So in practice many "terms" are indeed additive; shadow mapping is just one component that is multiplied in, because it provides extra occlusion information for each light.
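In other words, a typical forward-lighting loop combines them roughly like this (a minimal single-channel sketch under assumed names; the Lambert-only shading and the dict layout are illustrative):

```python
def shade(albedo, ambient, lights):
    """Additive per-light terms, each multiplied by its shadow visibility.

    Each light is a dict with keys: n_dot_l (clamped cosine term),
    intensity, and visibility (0..1, from the shadow map). The shadow
    term occludes one light's contribution, never the whole equation.
    """
    radiance = ambient
    for light in lights:
        radiance += light["visibility"] * light["n_dot_l"] * light["intensity"]
    return albedo * radiance
```

Note that a fully shadowed light still leaves the ambient term and the other lights intact, which is exactly why shadowing is applied per light rather than as a global mask.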


Here's a crude sample from Unity: the image on the bottom is a capture of the screen-space shadow buffer, while the top part has both specular and diffuse terms.

Brice V.