The `frexp()` function returns a 'normalized' value in the range ±[0.5, 1.0). Clearly, this is not a range that can be properly represented in a variable of integral type (a simple cast of that value would always yield zero, as the range does not include ±1.0), so it has to be 'denormalized' (stretched) into a range that is fully representable.
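For instance, a minimal demonstration (illustrative input value) of what `frexp()` hands back, and why a bare cast throws it all away:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    int exp;
    double frac = frexp(1234.5678, &exp);   /* frac is in ±[0.5, 1.0) */

    printf("fraction = %f, exponent = %d\n", frac, exp);  /* 0.602816, 11 */
    printf("(int)fraction = %d\n", (int)frac);            /* always 0     */
    return 0;
}
```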
Multiplying by `INT_MAX` will give (nearly) the greatest precision possible (assuming `int` and `unsigned long` have the same bit-width)†, without overflowing the range of the destination type (even allowing for the bit pattern of a negative value being stored in that unsigned integer).
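Putting that together, a sketch of the encode/decode round trip (my own illustration, assuming the usual two's-complement behaviour when an out-of-range `unsigned long` is converted back to `int`):

```c
#include <limits.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double value = -1234.5678;
    int exp;

    /* Encode: stretch the fraction from ±[0.5, 1.0) into ±[INT_MAX/2, INT_MAX). */
    double frac = frexp(value, &exp);
    int scaled = (int)(frac * INT_MAX);
    unsigned long stored = (unsigned long)scaled;   /* a negative value wraps, by design */

    /* Decode: recover the int (implementation-defined if unsigned long is wider,
       but modular on mainstream compilers), shrink, and re-apply the exponent. */
    double restored = ldexp((double)(int)stored / INT_MAX, exp);

    printf("original  %.10f\nrestored  %.10f\n", value, restored);
    return 0;
}
```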
Note: One could get more precision by storing the sign of the normalized fraction, then subtracting 0.5 from its absolute value, re-applying the sign and multiplying by `2.0 * INT_MAX` (I think this will be safe) … but the precision gain (1 bit) is likely not worth the extra effort in encoding (and decoding) the stored value.
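A sketch of that variant (again my own illustration, not tested exhaustively):

```c
#include <limits.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double value = -1234.5678;
    int exp;

    double frac = frexp(value, &exp);
    double mag  = fabs(frac) - 0.5;              /* now in [0.0, 0.5) */

    /* Doubling the multiplier keeps the product below INT_MAX while
       using one more bit of the int's range. */
    int stored = (int)(mag * (2.0 * INT_MAX));
    if (frac < 0.0)
        stored = -stored;                        /* re-apply the sign */

    /* Decode by undoing each step in reverse. Corner case: a fraction of
       exactly ±0.5 encodes as 0, so a negative sign is lost there; one
       more reason the extra bit may not be worth the trouble. */
    double m = (double)((stored < 0) ? -stored : stored) / (2.0 * INT_MAX) + 0.5;
    double restored = ldexp((stored < 0) ? -m : m, exp);

    printf("original  %.10f\nrestored  %.10f\n", value, restored);
    return 0;
}
```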
† On many platforms, the `int` and `long` types are the same size; however, this is not required, so, as mentioned in the comments, using `LONG_MAX` as the multiplier/divisor would potentially offer greater precision. That may be overkill, though, depending on how many bits of mantissa there are in the source: an IEEE-754 single-precision `float` has 23 (stored) mantissa bits, so a 16-bit `int` type would lose out, but a 64-bit `LONG_MAX` would be over-cooking.
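To put numbers on that, a small experiment (my own, with illustrative scale factors) round-tripping a single-precision value through a 31-bit scale versus a 15-bit one:

```c
#include <limits.h>
#include <math.h>
#include <stdio.h>

/* Round-trip a float through the scale-by-N scheme described above. */
static float roundtrip(float value, double scale)
{
    int exp;
    double frac = frexp(value, &exp);
    long stored = (long)(frac * scale);                 /* encode */
    return (float)ldexp((double)stored / scale, exp);   /* decode */
}

int main(void)
{
    float value = 1234.5678f;

    printf("original       %.7f\n", value);
    printf("31-bit scale   %.7f\n", roundtrip(value, (double)INT_MAX));  /* exact */
    printf("15-bit scale   %.7f\n", roundtrip(value, 32767.0));          /* lossy */
    return 0;
}
```

With a 31-bit scale the 24-bit `float` significand survives intact; the 15-bit scale visibly truncates it.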