I've been working through Frank D. Luna's book "Introduction to 3D Game Programming with DirectX 10", and one of the problems asks you to switch from using
D3DXCOLOR color (128 bits)
to
UINT color (32 bits)
Presumably the format code to use is: DXGI_FORMAT_R8G8B8A8_UNORM.
In my mind this means you have a variable which, at the byte level, holds the channel information in the exact order RGBA. (Is this the correct interpretation? I ask because I'm sure I've read that when you want RGBA you really need a format like A#R#G#B#, where the alpha channel is specified first.)
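To make sure I'm asking about the right thing, here is how I picture an R8G8B8A8_UNORM texel at the byte level. The R8G8B8A8Texel struct below is just my own illustration, not a DXGI or SDK type, so treat the layout as my assumption:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// How I picture a DXGI_FORMAT_R8G8B8A8_UNORM texel in memory: one byte per
// channel, with R at the lowest address. (R8G8B8A8Texel is my own
// illustration struct, not a DXGI type.)
struct R8G8B8A8Texel
{
    std::uint8_t r, g, b, a;
};

int main()
{
    // Distinct values per channel so the byte order is obvious.
    R8G8B8A8Texel texel = { 0x11, 0x22, 0x33, 0x44 }; // r, g, b, a

    // Reinterpret the four bytes as a 32-bit integer. On little-endian x86
    // the byte at the lowest address (R here) becomes the *least*
    // significant byte, so this prints 0x44332211, not 0x11223344.
    std::uint32_t asUint = 0;
    std::memcpy(&asUint, &texel, sizeof(asUint));
    std::printf("0x%08X\n", (unsigned)asUint);
    return 0;
}
```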
Anyway, I opted (there's probably a better way) to do:
UINT color = (UINT)WHITE;
where WHITE is defined as: const D3DXCOLOR WHITE(1.0f, 1.0f, 1.0f, 1.0f);
This cast is defined in the extension to D3DXCOLOR.
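For reference, my mental model of what that cast does looks roughly like the sketch below. I'm reconstructing this from memory rather than from the SDK headers, so the channel order (and the PackLikeTheCast helper name) is an assumption on my part, not the actual implementation:

```cpp
#include <d3dx10.h> // D3DXCOLOR, UINT (requires the DirectX SDK)

// My guess at what the UINT/DWORD cast on D3DXCOLOR does -- reconstructed
// from memory, not copied from the SDK headers, so the channel order here
// is an assumption:
UINT PackLikeTheCast(const D3DXCOLOR& c)
{
    // Clamp each float channel to [0, 1] and scale it to 0..255.
    UINT r = (UINT)(c.r >= 1.0f ? 255.0f : c.r <= 0.0f ? 0.0f : c.r * 255.0f + 0.5f);
    UINT g = (UINT)(c.g >= 1.0f ? 255.0f : c.g <= 0.0f ? 0.0f : c.g * 255.0f + 0.5f);
    UINT b = (UINT)(c.b >= 1.0f ? 255.0f : c.b <= 0.0f ? 0.0f : c.b * 255.0f + 0.5f);
    UINT a = (UINT)(c.a >= 1.0f ? 255.0f : c.a <= 0.0f ? 0.0f : c.a * 255.0f + 0.5f);

    // D3DCOLOR-style packing: alpha in the most significant byte, then R, G, B.
    return (a << 24) | (r << 16) | (g << 8) | b;
}
```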
However, when DXGI_FORMAT_R8G8B8A8_UNORM is used with the UINT color variable you get the wrong results. Luna attributes this to endianness.
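Here's a small sketch of what I think is happening at the byte level; the 0xAARRGGBB packing is again my assumption about what the cast produces, not something stated in the book:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    // Suppose the cast hands back opaque red packed 0xAARRGGBB-style
    // (that packing order is my assumption).
    std::uint32_t packed = 0xFFFF0000; // A=FF, R=FF, G=00, B=00

    unsigned char bytes[4];
    std::memcpy(bytes, &packed, sizeof(bytes));

    // Little-endian x86 stores the least significant byte first, so the
    // bytes sit in memory as 00 00 FF FF. Read per DXGI_FORMAT_R8G8B8A8_UNORM
    // (one byte per channel, starting with R at the lowest address) that is
    // R=00, G=00, B=FF, A=FF -- opaque blue instead of opaque red.
    std::printf("%02X %02X %02X %02X\n",
                bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}
```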
Is this because the cast from D3DXCOLOR produces a UINT of the form RGBA, but since Intel x86 is little endian, at the byte level you really get 'ABGR'? So when this variable actually gets interpreted, the shader sees ABGR instead of RGBA? Shouldn't it just know, when interpreting the bytes, that the higher-order bits are at the smaller address?

And the last question: since the format is specified as DXGI_FORMAT_R8G8B8A8_UNORM, does this mean that R should be at the smallest address and A at the largest?

I'm sure there are a ton of misconceptions I have, so please feel free to dispel them.