I’ve compiled a pixel shader that uses the D3DCOLORtoUBYTE4 intrinsic, then decompiled the result. Here’s what I found:
r0.xyzw = float4(255.001953,255.001953,255.001953,255.001953) * r0.zyxw;
o0.xyzw = (int4)r0.xyzw;
The rgba->bgra swizzle is expected, but why does it multiply by 255.001953 instead of 255.0? The Data Conversion Rules section of the D3D spec is quite specific about what should happen; for FLOAT->UNORM it says the following:
Convert from float scale to integer scale: c = c * (2^n-1).