
I'm trying to create a program in DirectX 11 that implements several techniques, namely deferred rendering, Phong tessellation, and shadow mapping. I had no problem implementing Phong tessellation and shadow mapping with forward rendering, but now that I'm looking into deferred rendering, I'm confused about how, and at what stage, to implement these techniques.

My current thinking is that Phong tessellation is applied in the geometry pass, and shadow mapping in the light pass. I think I understand Phong tessellation in the geometry pass, but how do you create a depth buffer in the light pass if you don't have the geometry information there? And what about Phong lighting? Is that just a post-process effect? Thanks in advance if you can shed some more light on how to implement other shading techniques with deferred rendering.

Charlie.Q

1 Answer


The whole idea of deferred rendering is that you store the relevant geometry information in screen-space texture(s) during the geometry pass, so that this information is available in the light pass.
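To make that idea concrete, here is a tiny toy sketch, with plain Python standing in for the GPU passes. The 2x2 "screen", the array names, and the Lambert-only lighting are all made up for illustration; the point is only the structure: the geometry pass stores attributes, the light pass reads them back.

```python
# Toy deferred-shading sketch: geometry pass writes per-pixel attributes
# into "GBuffer" arrays, the light pass later reads them to compute lighting.
# All names and the 2x2 "screen" are illustrative, not from the answer.

W, H = 2, 2
gbuffer_albedo = [[(0.0, 0.0, 0.0)] * W for _ in range(H)]
gbuffer_normal = [[(0.0, 0.0, 1.0)] * W for _ in range(H)]

# Geometry pass: instead of computing lighting, store what lighting will need.
def geometry_pass(x, y, albedo, normal):
    gbuffer_albedo[y][x] = albedo
    gbuffer_normal[y][x] = normal

geometry_pass(0, 0, (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))

# Light pass: read the stored attributes back and do the lighting there.
def light_pass(light_dir):
    out = [[None] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            n = gbuffer_normal[y][x]
            ndotl = max(0.0, sum(a * b for a, b in zip(n, light_dir)))
            out[y][x] = tuple(c * ndotl for c in gbuffer_albedo[y][x])
    return out

lit = light_pass((0.0, 0.0, 1.0))
print(lit[0][0])  # the stored red albedo, fully lit: (1.0, 0.0, 0.0)
```

On the GPU, the "arrays" are the multiple render targets your geometry-pass pixel shader writes to, and the light-pass loop is a full-screen (or per-light) quad reading them as textures.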

  1. You render the shadow map in its own pass, just as you did with forward rendering.

  2. During the geometry pass, you do everything as you did with forward rendering (especially anything that affects the geometry, such as tessellation). The only difference is that your pixel shader no longer does any lighting computation; instead, it outputs whatever information the lighting computation will need to multiple "GBuffer" textures (usually albedo color, specular color, glossiness, surface normals, etc.).

  3. During the light pass, disable depth testing and don't bind a depth buffer. Instead, bind the depth buffer from the geometry pass as a readable texture, along with all the GBuffer textures you rendered to during the geometry pass. Then render a quad for each light source. In the pixel shader, you can now sample all of these textures using the pixel's position on screen (using the SV_Position input semantic with some simple math as your UVs). That gives you the depth (from the depth buffer you bound as a texture), as well as everything else you need for the lighting computation. You can even use the sampled depth (together with the pixel's screen position and the inverse view-projection matrix) to reconstruct the world-space position of the geometry that was rendered to that pixel during the geometry pass (use the inverse of the projection matrix alone if you prefer to do your lighting computation in view space). With that, you can sample the shadow map and do all the related lighting computation.
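The position reconstruction in step 3 can be sketched numerically. This is a hypothetical pure-Python stand-in for the HLSL (matrix values and names are illustrative): project a world-space point as the geometry pass would, keep only its NDC x/y and depth, then recover the world-space point by multiplying (ndc, 1) by the inverse view-projection matrix and dividing by the resulting w.

```python
# Illustrative sketch of world-position reconstruction from stored depth.
# Plain Python matrix math stands in for HLSL; names are made up.

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_inverse(m):
    # Gauss-Jordan elimination on the augmented matrix [m | I].
    n = 4
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

# Simple left-handed D3D-style projection (identity view for brevity).
s, near, far = 1.5, 0.1, 100.0
A = far / (far - near)
view_proj = [
    [s,   0.0, 0.0, 0.0],
    [0.0, s,   0.0, 0.0],
    [0.0, 0.0, A,   -near * A],
    [0.0, 0.0, 1.0, 0.0],
]

# Geometry pass: project a world point, divide by w -> NDC; ndc[2] is
# what the depth buffer ends up storing for this pixel.
world = [2.0, -1.0, 10.0, 1.0]
clip = mat_vec(view_proj, world)
ndc = [c / clip[3] for c in clip]

# Light pass: rebuild (ndc.x, ndc.y, depth, 1), multiply by the inverse
# view-projection matrix, then divide by w again to get the world position.
inv_vp = mat_inverse(view_proj)
hom = mat_vec(inv_vp, [ndc[0], ndc[1], ndc[2], 1.0])
reconstructed = [h / hom[3] for h in hom]

print([round(v, 4) for v in reconstructed[:3]])  # [2.0, -1.0, 10.0]
```

In the shader you would of course not invert the matrix per pixel; you pass the precomputed inverse view-projection matrix in a constant buffer and do the multiply and divide there.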

(Note that SV_Position arrives in the pixel shader already divided by w, with x and y in pixel coordinates. If you instead pass a clip-space position through an interpolant of your own, or use the result of any other matrix multiplication involving the projection matrix outside the vertex shader's position output, remember to divide its components by its w component before using it.)
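The "simple math" that turns a position into texture UVs, together with the perspective divide, looks like this in a hypothetical pure-Python stand-in (the function name is illustrative; in HLSL this is one or two lines):

```python
# Illustrative clip-space -> texture-UV math; plain Python stands in for HLSL.

def clip_to_uv(clip):
    x, y, z, w = clip
    # Perspective divide: clip space -> normalized device coordinates.
    ndc_x, ndc_y = x / w, y / w
    # NDC is [-1, 1] with +y up; D3D texture UVs are [0, 1] with +v down,
    # so scale, bias, and flip the y axis.
    return (ndc_x * 0.5 + 0.5, ndc_y * -0.5 + 0.5)

print(clip_to_uv((0.0, 0.0, 5.0, 10.0)))    # screen centre     -> (0.5, 0.5)
print(clip_to_uv((10.0, 10.0, 5.0, 10.0)))  # top-right corner  -> (1.0, 0.0)
```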

Bizzarrus