I'm studying shadow-mapping shaders at the moment, and one question keeps coming up: why do we first shade surfaces (Lambert/Phong) and then map shadows onto them, as opposed to just additively lighting pixels up? That strikes me as the opposite of how light should work.
If we're using shadow mapping, it implies that we have already created a texture that tells us which screen-space pixels are exposed to the light source. Why would we need the whole Lambert/Phong calculation then? Why not derive illumination directly by combining the shadow maps?
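For concreteness, the conventional pattern I mean looks roughly like this (a GLSL-ish sketch; the uniform and variable names are just placeholders I made up):

```glsl
#version 330 core
// Conventional "shade first, then mask by shadow" fragment shader.
// Rough sketch only; names are placeholders, not tested code.
uniform sampler2D uShadowMap;
uniform vec3 uLightPos;
uniform vec3 uLightColor;
uniform vec3 uAlbedo;

in vec3 vWorldPos;
in vec3 vNormal;
in vec4 vLightSpacePos;   // position transformed by the light's view-projection matrix

out vec4 fragColor;

float shadowFactor(vec4 lightSpacePos)
{
    vec3 proj = lightSpacePos.xyz / lightSpacePos.w;  // perspective divide
    proj = proj * 0.5 + 0.5;                          // map to [0,1] texture/depth space
    float closest = texture(uShadowMap, proj.xy).r;   // depth stored in the shadow map
    float bias = 0.005;
    return (proj.z - bias > closest) ? 0.0 : 1.0;     // 0 = in shadow, 1 = lit
}

void main()
{
    // The Lambert term is computed for every pixel, lit or not...
    vec3 N = normalize(vNormal);
    vec3 L = normalize(uLightPos - vWorldPos);
    float lambert = max(dot(N, L), 0.0);

    // ...and only afterwards is the result masked by the shadow map.
    vec3 color = uAlbedo * uLightColor * lambert * shadowFactor(vLightSpacePos);
    fragColor = vec4(color, 1.0);
}
```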
Here is a pseudo-algorithm of the way I see it (a rough shader sketch of step 4 follows the list):

1. All pixels of the output image are completely black by default.
2. Generate the unlit albedo screen texture.
3. Generate the shadow map for the light.
4. Modify the output image: only the pixels that are exposed to the light source (i.e. those matching the shadow map depth within some bias) are touched, and they are brightened according to the light's intensity, attenuation, etc.
5. Select the next light.
6. Go back to step 3.
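In shader terms, I picture the per-light pass (step 4) looking roughly like this, rendered with additive blending into the intensity buffer. Again just a sketch with made-up names and a made-up attenuation formula:

```glsl
#version 330 core
// Per-light pass of step 4, rendered with additive blending
// (e.g. glBlendFunc(GL_ONE, GL_ONE)) into an intensity buffer that starts black.
uniform sampler2D uShadowMap;
uniform vec3 uLightPos;
uniform vec3 uLightColor;
uniform float uLightIntensity;

in vec3 vWorldPos;
in vec4 vLightSpacePos;

out vec4 fragIntensity;

void main()
{
    vec3 proj = vLightSpacePos.xyz / vLightSpacePos.w * 0.5 + 0.5;
    float closest = texture(uShadowMap, proj.xy).r;
    float bias = 0.005;

    // Pixels that fail the shadow-map depth test receive no light at all.
    if (proj.z - bias > closest)
        discard;

    // Exposed pixels get brighter according to intensity and attenuation only;
    // note there is deliberately no Lambert/Phong term here.
    float dist = length(uLightPos - vWorldPos);
    float attenuation = 1.0 / (1.0 + dist * dist);

    fragIntensity = vec4(uLightColor * uLightIntensity * attenuation, 1.0);
}
```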
The result of this would be an image with the correct light intensity at every pixel. Then we multiply the intensity texture by the albedo texture and get the final image. As far as I can tell this only requires three textures at a time; is that too much memory?
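The final combine would then just be a trivial fullscreen pass, something like:

```glsl
#version 330 core
// Final fullscreen combine: accumulated light intensity times unlit albedo.
// Sketch; texture names are placeholders.
uniform sampler2D uIntensityTex;
uniform sampler2D uAlbedoTex;

in vec2 vUV;
out vec4 fragColor;

void main()
{
    vec3 intensity = texture(uIntensityTex, vUV).rgb;
    vec3 albedo    = texture(uAlbedoTex, vUV).rgb;
    fragColor = vec4(intensity * albedo, 1.0);
}
```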
I assume there must be a reason why people don't already do this; I guess I just need someone to point it out.