
The AlphaRef is set with D3DRS_ALPHAREF, which is an integer ranging from 0x00000000 to 0x000000FF. However, the AlphaValue computed by the shader is a float in the range [0.0, 1.0]. What is the expected comparison between the two values, given that they have different precisions? Does the float AlphaValue need an extra rounding operation?

wells hong

1 Answer


In Direct3D 9 shaders, all input data is converted to floating-point, and shader output is converted back as needed. For a typical 8-bit render target, the float alpha is quantized back to an 8-bit value after the pixel shader runs, so the fixed-function alpha test effectively compares two 8-bit quantities; no extra rounding is required in the shader itself.
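
For concreteness, here is a small sketch of how the two representations become comparable. This is a reading of that conversion, not literal API or driver code; it assumes an 8-bit UNORM render target, round-to-nearest conversion, and the D3DCMP_GREATEREQUAL comparison:

```hlsl
// Sketch: how a float shader alpha and the integer AlphaRef end up comparable.
// Assumes an 8-bit UNORM render target and round-to-nearest float-to-fixed
// conversion; this mirrors the fixed-function test, it is not actual API code.
bool AlphaTestGreaterEqual(float shaderAlpha, uint alphaRef)
{
    // The float [0.0, 1.0] shader output is quantized to the target's
    // 8-bit range...
    uint quantized = (uint)round(saturate(shaderAlpha) * 255.0);

    // ...so the comparison happens between two 8-bit integers, and the
    // shader never needs to round its own output.
    return quantized >= alphaRef;
}
```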

D3DRS_ALPHAREF does not exist in Direct3D 10 or later. For an example of a shader implementation of alpha testing, see AlphaTestEffect.fx, which was adapted from the original XNA Game Studio 4 implementation.
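
As a rough illustration (a minimal sketch, not the actual AlphaTestEffect.fx source), a Direct3D 10+ pixel shader can emulate the old fixed-function test with clip(). Here uAlphaRef is an assumed shader constant holding the D3D9 reference byte rescaled to [0, 1], i.e. AlphaRef / 255.0:

```hlsl
// Minimal shader-based alpha test sketch (Direct3D 10+ style HLSL).
// uAlphaRef is a hypothetical constant: the D3D9 AlphaRef byte rescaled
// to [0, 1] (e.g. 0x80 / 255.0). This emulates D3DCMP_GREATEREQUAL.
cbuffer AlphaTestParams
{
    float uAlphaRef;
};

Texture2D    DiffuseTexture;
SamplerState DiffuseSampler;

float4 PSAlphaTest(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 color = DiffuseTexture.Sample(DiffuseSampler, uv);

    // clip() discards the pixel when its argument is negative, reproducing
    // the fixed-function "alpha below reference -> reject" behavior.
    clip(color.a - uAlphaRef);

    return color;
}
```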

Chuck Walbourn