The HLSL pixel shader code is as follows:
float Exposure_Level;
sampler Environment;

float4 ps_main(float3 dir : TEXCOORD0) : COLOR
{
    // Read the cube map and determine the HDR color based on the
    // alpha channel and the exposure level
    float4 color = texCUBE(Environment, dir);
    return color * ((1.0 + (color.a * 64.0)) * Exposure_Level);
}
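To put concrete numbers on the decode: a texel with rgb = (0.5, 0.25, 0.1) and a = 0.5 is expanded by a factor of 1.0 + 0.5 * 64.0 = 33, giving (16.5, 8.25, 3.3) at Exposure_Level = 1.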
This pass is rendered to a floating-point texture whose format is A16R16G16B16. What I don't understand is why the color is multiplied by

(1.0 + (color.a * 64.0)) * Exposure_Level

The alpha of color lies between 0 and 1, and Exposure_Level should be greater than 0, so this factor can reach 65 * Exposure_Level or more. If the color is multiplied by a number that large, the result can end up far greater than 1.0, so why does this still work?
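For context, here is how I assume the cube map was authored so that this decode makes sense: the RGB channels store the HDR color divided down into [0, 1], and the alpha stores the scale that was divided out. The sketch below illustrates my understanding; encodeHDR is a hypothetical helper I wrote, not code from the sample.

float4 encodeHDR(float3 hdr)
{
    // Scale needed to bring the largest component into [0, 1],
    // clamped to the [1, 65] range the decode can represent
    float scale = clamp(max(hdr.r, max(hdr.g, hdr.b)), 1.0, 65.0);

    float4 ldr;
    ldr.rgb = hdr / scale;          // compressed color, each channel in [0, 1]
    ldr.a   = (scale - 1.0) / 64.0; // alpha in [0, 1] encodes the scale
    return ldr;
}

If the map really is encoded like this, then ps_main recovers hdr * Exposure_Level exactly (for components up to 65), which is what makes me think the large multiplier is intentional rather than a bug.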