According to this thread, "To Compute log(a+b)",
log_sum is sometimes implemented like this:
log(a + b) = log(a * (1 + b/a)) = log a + log(1 + b/a)
I'm confused about why this approach is more efficient. Does anyone have ideas about this?
This approach might be useful when a is constant (at least for some b values) and b << a (significantly smaller). In that case log(1 + b/a) can be calculated quickly and with good precision through a Taylor series expansion (the log1p function available in some math libraries, among other methods).
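A minimal sketch of the precision point, assuming Python's standard math module (the values of a and b are illustrative, not from the thread): when b/a is tiny, 1 + b/a gets rounded before the log, while log1p evaluates log(1 + x) directly.

    import math

    # Illustrative values: b << a, so b/a is far smaller than 1.
    a = 1.0e6
    b = 1.0e-6
    x = b / a  # 1e-12

    # Naive form: 1 + x is rounded to the nearest double *before*
    # the log, so most digits of the small correction are lost.
    print(math.log(1 + x))  # ~1.000088900582e-12 -- only ~4 digits correct

    # log1p computes log(1 + x) accurately even for tiny x
    # (e.g. via a series expansion), retaining full precision.
    print(math.log1p(x))    # ~9.999999999995e-13 -- accurate

    # The full computation of log(a + b) then becomes:
    log_sum = math.log(a) + math.log1p(b / a)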
One place where I've seen this sort of thing is when dealing with probabilities, or likelihoods, in high-dimensional spaces. One sometimes wants to compute sums like
p1 + p2 + ...
However, such probabilities can often be too small to be represented in doubles, so one often works with the logs of the probabilities instead. Then we want to compute
log(exp(l1) + exp(l2) + ...)
where l1 is the log of p1, and so on. The problem is that if one just evaluates the exps, one could well get 0, and then the expression becomes undefined. But the trick you allude to comes to the rescue: we can evaluate
l1 + log(1 + exp(l2 - l1) + ...)
and this will evaluate reasonably (at least if l1 is the biggest of the l's).
So it's not a matter of efficiency, but of getting round the limited precision of doubles.
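A minimal sketch of that trick in Python (the helper name log_sum_exp and the sample values are my own): subtracting the largest log-value before exponentiating keeps every exp() argument at or below 0, so nothing overflows and the leading term exp(0) = 1 keeps the sum well away from zero.

    import math

    def log_sum_exp(log_ps):
        # Factor out the largest log-value, as described above:
        # log(sum(exp(l))) = m + log(sum(exp(l - m))) for m = max(l).
        m = max(log_ps)
        return m + math.log(sum(math.exp(l - m) for l in log_ps))

    # Log-probabilities far too small for doubles to hold directly:
    # exp(-1000) underflows to 0.0, so the naive sum would be log(0).
    logs = [-1000.0, -1001.0, -1002.0]
    print(log_sum_exp(logs))  # ~ -999.59, finite and sensible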