I've just implemented deferred rendering/shading for the first time, and I was surprised to see the big performance gap between forward and deferred rendering.
When I run my application with forward rendering in Release mode I get a pretty decent frame rate.
However, when I run it with deferred rendering, the frame rate is surprisingly low.
I'm well aware that deferred rendering is NOT something you coat an application with to make it go "faster". I consider it a technique that can be optimized in numerous ways, and I understand that it has a larger memory footprint than forward rendering.
However...
I've currently got ONE point light in the scene and one hundred cubes created with hardware instancing. The light is moving back and forth on the Z-axis casting light on the cubes.
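For context, the cubes are drawn with a single instanced call along these lines (a simplified sketch, not my exact code; the buffer names and vertex layout are illustrative, and the per-instance world matrices come from a second vertex buffer marked D3D11_INPUT_PER_INSTANCE_DATA in the input layout):

```cpp
#include <d3d11.h>
#include <DirectXMath.h>

// Illustrative vertex layout for the cube mesh.
struct Vertex { DirectX::XMFLOAT3 pos; DirectX::XMFLOAT3 normal; };

// Draws all cubes in one instanced call.
// cubeVB:     per-vertex data (slot 0)
// instanceVB: per-instance world matrices (slot 1)
void DrawCubes(ID3D11DeviceContext* context,
               ID3D11Buffer* cubeVB, ID3D11Buffer* instanceVB,
               ID3D11Buffer* cubeIB, UINT cubeCount)
{
    ID3D11Buffer* buffers[2] = { cubeVB, instanceVB };
    UINT strides[2] = { sizeof(Vertex), sizeof(DirectX::XMFLOAT4X4) };
    UINT offsets[2] = { 0, 0 };

    context->IASetVertexBuffers(0, 2, buffers, strides, offsets);
    context->IASetIndexBuffer(cubeIB, DXGI_FORMAT_R32_UINT, 0);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // 36 indices = 12 triangles per cube, repeated cubeCount times.
    context->DrawIndexedInstanced(36, cubeCount, 0, 0, 0);
}
```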
The problem is that the light is very laggy when moving; it's so laggy that the application doesn't register keyboard input. Honestly, I was not prepared for this, and I assume I'm doing something terribly wrong in my implementation.
So far I've changed the texture format of the G-buffers from DXGI_FORMAT_R32G32B32A32_FLOAT to DXGI_FORMAT_R16G16B16A16_FLOAT, just to see if it had any visual impact, but it did not.
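For reference, here is a minimal sketch of the kind of G-buffer target creation involved (simplified; CreateGBufferTarget and the variable names are illustrative, not my actual code, and error handling is trimmed):

```cpp
#include <d3d11.h>

ID3D11Texture2D*          gBufferTexture = nullptr;
ID3D11RenderTargetView*   gBufferRTV     = nullptr;
ID3D11ShaderResourceView* gBufferSRV     = nullptr;

// Creates one G-buffer render target that can also be sampled
// as a texture in the deferred lighting pass.
HRESULT CreateGBufferTarget(ID3D11Device* device, UINT width, UINT height)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R16G16B16A16_FLOAT; // was R32G32B32A32_FLOAT
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    HRESULT hr = device->CreateTexture2D(&desc, nullptr, &gBufferTexture);
    if (FAILED(hr)) return hr;

    hr = device->CreateRenderTargetView(gBufferTexture, nullptr, &gBufferRTV);
    if (FAILED(hr)) return hr;

    return device->CreateShaderResourceView(gBufferTexture, nullptr, &gBufferSRV);
}
```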
Any suggestions? Thank you!
SIDE NOTE
I'm using Visual Studio Graphics Diagnostics to debug my DirectX applications.