
I'm doing error analysis, and I would like to know if there's a good rule of thumb for when to stop adding terms to an infinite sum, or multiplying factors into an infinite product. After reading a lot of numeric code, what I've derived so far is the following.

For infinite sums, we should stop when the next term is near 0. If our target sum were about 1, then the machine epsilon would mark the point where the next term is too small to make a contribution. Thus, the machine epsilon multiplied by the current running total will be roughly the right size to indicate when a term has become too small. (I've also seen a variant where the next term divided by the running total is compared against the machine epsilon.)

If the contributing terms can be negative, then absolute-value brackets need to be added in the right places, but otherwise I don't think there's a two-tailed variant of this one-tailed test.
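To make that concrete, here is a minimal Python sketch of the test I have in mind (the function name and the safety cap are my own inventions, not from any library):

```python
import sys
from itertools import count

def sum_until_negligible(terms, max_terms=1_000_000):
    """Sum terms until |next term| < eps * |running total|."""
    eps = sys.float_info.epsilon  # machine epsilon for doubles
    total = 0.0
    for n, term in enumerate(terms, start=1):
        if n > 1 and abs(term) < eps * abs(total):
            break  # the next term is too small to change the total
        total += term
        if n >= max_terms:
            raise RuntimeError("series did not converge within max_terms")
    return total

# Example: the series for e, whose terms shrink quickly.
def exp_terms(x):
    term = 1.0
    for n in count(1):
        yield term
        term *= x / n

print(sum_until_negligible(exp_terms(1.0)))  # ~2.718281828459045
```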

For infinite products, we should stop when the next factor is near 1. If our target product were near 1 as well, then the square root of the machine epsilon (which is bigger than the epsilon itself!) would indicate when our error is negligible. So we can scale the square root of the machine epsilon by the running total to see whether the next factor is too close to 1 to matter.

As before, if the contributing factors can fall below 1, then we just have to be more careful with signs and absolute values.
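And the analogous sketch for products, implementing the rule exactly as I stated it above (so if the rule of thumb is wrong, so is this code; again, the names are mine):

```python
import math
import sys
from itertools import count

def prod_until_negligible(factors, max_factors=1_000_000):
    """Multiply factors until the next one is within
    sqrt(eps) * |product| of 1, as described above."""
    tol = math.sqrt(sys.float_info.epsilon)  # ~1.5e-8 for doubles
    product = 1.0
    for n, factor in enumerate(factors, start=1):
        if n > 1 and abs(factor - 1.0) < tol * abs(product):
            break  # the next factor is too close to 1 to matter
        product *= factor
        if n >= max_factors:
            raise RuntimeError("product did not converge within max_factors")
    return product

# Example: prod(1 + 2**-n) for n >= 1, whose factors approach 1 quickly.
print(prod_until_negligible(1.0 + 2.0 ** -n for n in count(1)))  # ~2.38423
```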

Am I on the right track? Are there better ways to do this? Thanks for reading.

Corbin
  • You seem to be assuming monotonic decrease of series terms. It's not always so. Moreover, no one ever guarantees that you won't get several small (or even zero) terms followed by a non-negligible tail. You have to analyze your series on a case-by-case basis to get reliable results. – Ruslan Oct 10 '20 at 15:14
  • As Ruslan notes, there is and can be no general rule. Series can behave in a variety of ways. If you can analyze a particular series, you may be able to figure out particular rules for it. However, for a concrete example of why “stop when the terms are small” will not work, consider the sum of 1/n for n from 1 to infinity. If you are patient enough, your method would stop when the terms are no longer increasing the sum in floating-point arithmetic. But the true sum is infinity. So the error in this method is infinite. – Eric Postpischil Oct 10 '20 at 22:38
  • Also, in general, one would often get a more accurate result by starting with the smallest terms and adding them. E.g., even with a convergent series, and let’s say it is monotonically decreasing, if you stop when the terms are no longer changing the sum, you will get a different result than if you start even farther out in the series and add the small terms there, proceeding backwards to the large terms. Furthermore, your method will give different results for the “same” series expressed in different forms (e.g., an alternating series versus the same series with pairs of terms combined). – Eric Postpischil Oct 10 '20 at 22:41
  • @Eric Postpischil In practice, this doesn't help much. How do you know in advance what the smallest term is in order to perform a reverse summation? – Paul Floyd Oct 12 '20 at 13:44

1 Answer


For SUM, and with the magnitude of the terms decreasing, stop when the next term is less than one "unit in the last place" (ULP) of the current sum.

For a slightly more accurate SUM, hold off on including the first term; add it at the end, after accumulating the rest.
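In Python 3.9+ (which provides `math.ulp`), those two suggestions together look roughly like this sketch (the names are illustrative, not any standard API):

```python
import math
from itertools import count

def sum_ulp_stop(terms):
    """Defer the first (largest) term, accumulate the rest until the
    next term falls below one ULP of the running sum, then add the
    first term last."""
    it = iter(terms)
    first = next(it)
    total = 0.0
    for term in it:
        if abs(term) < math.ulp(abs(total)):
            break  # below one unit in the last place: it cannot register
        total += term
    return total + first

# Example: sum of 0.5**n for n >= 0, which is exactly 2.
print(sum_ulp_stop(0.5 ** n for n in count()))  # ~2.0
```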

For MULTIPLY, will the terms become exactly 1.0? If so, then that is an easy stopping point.

In both cases, you could stop when the next term or factor no longer changes the accumulated value.

Your comment about "target sum were about 1" is overkill; what I say above relaxes that requirement. But beware of under/overflow. Hopefully you 'know' that the result and the intermediate sums will come nowhere near infinity or the other extreme.

My use of "magnitude" allows for oscillating series (e.g., sine). But there are series that do converge in spite of the individual terms oscillating wildly. They can create bad round-off errors, because the intermediate sums are bigger than the result.

In particular, for evaluating sine, first do "range reduction" to map the problem into a range of [-pi/4, +pi/4] (for many trig functions). This makes x - x^3/3! + ... very stable. Without the range reduction, the terms will oscillate wildly for a large value of x. (Note: range reduction may also turn the sine into cosine and/or change the sign of the result.)
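Here is a rough Python sketch of that recipe. The reduction shown is the naive one and loses accuracy for very large x, where real libraries work much harder:

```python
import math

def sin_series(r):
    """sin(r) = r - r^3/3! + r^5/5! - ...; stop when adding the next
    term no longer changes the sum."""
    term, total, n = r, r, 1
    while True:
        term *= -r * r / ((2 * n) * (2 * n + 1))
        if total + term == total:
            return total
        total += term
        n += 1

def cos_series(r):
    """cos(r) = 1 - r^2/2! + r^4/4! - ...; same stopping test."""
    term, total, n = 1.0, 1.0, 1
    while True:
        term *= -r * r / ((2 * n - 1) * (2 * n))
        if total + term == total:
            return total
        total += term
        n += 1

def my_sin(x):
    # Naive range reduction: subtract the nearest multiple of pi/2,
    # then the quadrant k decides whether the answer is +/- sin(r)
    # or +/- cos(r).
    k = round(x / (math.pi / 2))
    r = x - k * (math.pi / 2)      # r lies in [-pi/4, pi/4]
    q = k % 4
    if q == 0:
        return sin_series(r)
    if q == 1:
        return cos_series(r)
    if q == 2:
        return -sin_series(r)
    return -cos_series(r)

print(my_sin(10.0), math.sin(10.0))  # both ~ -0.544021
```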

Beware of this 'simple' series: 1 + 1/2 + 1/3 + 1/4 + 1/5 + ...
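That is the harmonic series: its terms do march down toward 0, so any "the terms are tiny now" test will eventually fire, yet the true sum diverges. A quick experiment (assuming NumPy is available, just to get float32 so the plateau arrives after a couple million terms instead of an astronomically large count; it runs in a few seconds):

```python
import numpy as np

s = np.float32(0.0)
n = 0
while True:
    n += 1
    t = np.float32(1.0) / np.float32(n)
    if s + t == s:  # the next term no longer changes the sum
        break
    s += t
print(n, s)  # stops after roughly 2 million terms at about 15.4,
             # even though the true sum is infinite
```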

Would you like to discuss a particular series?

Rick James
  • I'm studying Lentz's algorithms for continued fractions. It seems that, in practice, people use small ad-hoc epsilons and don't have problems, but I'm trying to understand why. – Corbin Oct 10 '20 at 17:41
  • @Corbin - IIRC, `(sqrt(5) + 1)/2` ~= 1.618 is a worst-case for continued fractions. Experiment with different eps and check the precision of the result. – Rick James Oct 10 '20 at 23:07