One obvious problem when computing `log(a/b)`, where `a` and `b` are two non-zero, positive, finite floating-point operands of a given precision (here called the native precision), is that the quotient `a/b` may not be representable as a floating-point number in that precision. Furthermore, accuracy is lost when the ratio of the source operands is close to unity.
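To make the near-unity accuracy loss concrete, here is a minimal demonstration (assuming a C toolchain with the standard math library; the `double` computation serves only as a reference for measuring the error, not as a proposed solution):

```c
#include <math.h>
#include <stdio.h>

int main (void)
{
    float a = 1.5f;
    float b = nextafterf (1.5f, 2.0f);  /* one ulp above a, so a/b is just below 1 */
    float naive = logf (a / b);         /* quotient is rounded to float before the log */
    double ref = log ((double)a / (double)b);   /* reference value, for the demo only */
    printf ("naive log(a/b) = % .9e\n", naive);
    printf ("reference      = % .9e\n", ref);
    printf ("relative error = % .3e\n", fabs ((double)naive - ref) / fabs (ref));
    return 0;
}
```

Even with a correctly rounded division and logarithm, the naive result here is off by roughly 25%, because the error committed when rounding `a/b` to `float` is of the same order of magnitude as `log(a/b)` itself.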
This could potentially be worked around by temporarily switching to higher-precision computation. But such higher precision may not be readily available, for example when the native precision is `double` and `long double` simply maps to `double`. The use of higher-precision computation could also have a very significant negative impact on performance, for example on GPUs, where the throughput of `float` computation may be up to 32 times higher than the throughput of `double` computation.
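For concreteness, the workaround would look roughly like the sketch below when the native precision is `float` and a genuinely wider `double` is available (the function name `log_ratio_promoted` is just for illustration); as noted, it does not help when the native precision is already `double`, and it may be expensive on GPUs:

```c
#include <math.h>

/* Sketch of the higher-precision workaround for float operands: divide and
   take the logarithm in double, then round back to float once at the end. */
float log_ratio_promoted (float a, float b)
{
    return (float) log ((double)a / (double)b);
}
```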
One could decide to use the quotient rule of logarithms to compute `log(a/b)` as `log(a) - log(b)`, but this exposes the computation to the risk of subtractive cancellation when `a` and `b` are close to each other, resulting in very large errors.
How can the logarithm of the quotient of two floating-point numbers of a given precision be computed both accurately, e.g. with an error of less than 2 ulps, and robustly, i.e. with no underflow or overflow in intermediate computations, without resorting to higher-than-native-precision computation?