This question is regarding a code example in Section 7.4 of Beej's Guide to Network Programming.
Here is the code example:

uint32_t htonf(float f)
{
    uint32_t p;
    uint32_t sign;

    if (f < 0) { sign = 1; f = -f; }
    else { sign = 0; }

    p = ((((uint32_t)f)&0x7fff)<<16) | (sign<<31); // whole part and sign
    p |= (uint32_t)(((f - (int)f) * 65536.0f))&0xffff; // fraction

    return p;
}
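To make sure I'm reading the packing correctly, here is a small sanity check I put together that mirrors the two steps of the function for a non-negative input. The value 3.5f and the expected result 0x00038000 are my own choices and arithmetic, not something from the guide.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    float f = 3.5f;  /* arbitrary test value I picked; not from the guide */

    /* Mirror the two steps of htonf() for a non-negative input. */
    uint32_t whole = (((uint32_t)f) & 0x7fff) << 16;               /* 0x00030000 */
    uint32_t frac  = (uint32_t)((f - (int)f) * 65536.0f) & 0xffff; /* 0x00008000 */

    printf("packed = 0x%08x\n", whole | frac);  /* I expect 0x00038000 */
    return 0;
}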
Why is the bitwise-AND with 0xffff required to store the fraction?
As far as I understand, f - (int)f always satisfies the inequality 0 <= f - (int)f < 1. Since this value is always less than 1, multiplying it by 65536 always gives a result less than 65536. In other words, the result never needs more than 16 bits in its binary representation.
If this value never exceeds 16 bits, then what is the point of selecting the least significant 16 bits with & 0xffff? It seems like a redundant step to me.
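To probe the worst case I could think of, I also tried feeding in the largest float strictly below 1.0, obtained with nextafterf. This particular probe is my own idea, not something the guide suggests. If I'm reasoning correctly, it should print 0xffff rather than 0x10000, since multiplying by a power of two doesn't introduce any rounding.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Largest float strictly below 1.0 -- an upper bound on
       f - (int)f for a non-negative float f. */
    float frac = nextafterf(1.0f, 0.0f);

    float scaled = frac * 65536.0f;   /* scaling by a power of two is exact */
    uint32_t bits = (uint32_t)scaled; /* truncation toward zero             */

    printf("frac   = %.9g\n", frac);
    printf("scaled = %.9g\n", scaled);
    printf("bits   = 0x%x\n", bits);  /* expect 0xffff, not 0x10000 */
    return 0;
}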
Do you agree? Or do you see a scenario where the & 0xffff is necessary for this function to work correctly?