The discussion started under my answer to another question. The following code determines machine epsilon:
float compute_eps() {
    float eps = 1.0f;
    // Halve eps until adding it to 1.0f no longer changes the result.
    while (1.0f + eps != 1.0f)
        eps /= 2.0f;
    // The loop exits one halving past FLT_EPSILON (2^-23), so on an
    // IEEE-754 machine this returns the unit roundoff, FLT_EPSILON / 2.
    return eps;
}
In the comments it was proposed that the 1.0f + eps != 1.0f test might fail because the C++ standard permits the use of extra precision in floating-point expressions. Although I'm aware that floating-point operations may actually be performed at a higher precision than the types involved would suggest, I happen to disagree with this proposal.
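For concreteness, here is a small probe of my own (not from the original discussion) that reports how much excess precision the implementation advertises via FLT_EVAL_METHOD, and compares the raw sum against one explicitly cast back to float. On an SSE target (FLT_EVAL_METHOD == 0) both comparisons should print 0; on an x87-style target with excess precision they may differ, assuming the cast rounds as it must in C (whether C++ requires that is exactly what's in question here):

#include <cfloat>
#include <cstdio>

int main() {
    // 0: no excess precision, 1: evaluate in double,
    // 2: evaluate in long double, -1: indeterminable.
    std::printf("FLT_EVAL_METHOD = %d\n", FLT_EVAL_METHOD);

    float eps = FLT_EPSILON / 2.0f;  // 2^-24: rounds away in pure float math

    // Without excess precision both lines print 0; with excess precision
    // the first may print 1 while the second prints 0.
    std::printf("raw sum  : %d\n", (int)(1.0f + eps != 1.0f));
    std::printf("cast sum : %d\n", (int)((float)(1.0f + eps) != 1.0f));
}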
I doubt that in comparison operations such as == or != the operands keep their extra precision; in other words, I believe they are truncated back to the precision of their type. 1.0f + eps can of course be evaluated at a precision higher than float (for example, long double), and the result can be held in a register wide enough for a long double. However, I think that before the != is performed, the left operand will be truncated from long double back to float, hence the code can never fail to determine eps precisely (i.e. it can never do more iterations than intended).
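If one wanted to sidestep the question entirely, a defensive rewrite is possible; this is my own sketch, and the volatile is a portability hammer assumed here, not something the standard's excess-precision wording requires:

float compute_eps_strict() {
    float eps = 1.0f;
    for (;;) {
        // Storing through a volatile float forces a real 32-bit store,
        // so the comparison sees a value rounded to float regardless of
        // whether the implementation keeps excess precision in registers.
        volatile float sum = 1.0f + eps;
        if (sum == 1.0f)
            return eps;  // same value the original loop returns
        eps /= 2.0f;
    }
}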
I haven't found any clue about this particular case in the C++ standard. Furthermore, the code works fine, and I'm sure extra precision is actually used during its execution, because I have no doubt that any modern desktop implementation in fact uses extra precision in its calculations.
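As a sanity check (again my own sketch), printing the result in hex-float form next to std::numeric_limits&lt;float&gt;::epsilon() makes the "no extra iterations" claim easy to verify on a given machine:

#include <cstdio>
#include <limits>

float compute_eps();  // the function from above

int main() {
    // On an IEEE-754 machine with no excess precision in play, the loop
    // exits one halving past FLT_EPSILON, so both lines should print
    // 0x1p-24.
    std::printf("computed    : %a\n", (double)compute_eps());
    std::printf("epsilon()/2 : %a\n",
                (double)(std::numeric_limits<float>::epsilon() / 2.0f));
}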
What do you think about it?