
The definition of convergence for root-finding algorithms is given in a few sources as: A sequence $\{x^{(k)}\}$ generated by a numerical method is said to converge to the root $\alpha$ with order $p \geq 1$ if:

$$\exists\, C > 0 \;:\; \frac{|x^{(k+1)} - \alpha|}{|x^{(k)} - \alpha|^p} \leq C \quad \forall\, k \geq k_0$$

Here the quantifier order is $\exists C$ s.t. $\forall k$, instead of the $\forall \varepsilon$ $\exists k_0$ you would have in the ordinary definition of a convergent sequence. Why the change, and how does it even make sense? Surely all you're establishing is that the ratio is bounded?

A sequence that just oscillates within bounds, such as $x_k = (-1)^k$, would be considered convergent by the above definition despite clearly not converging, so what gives?
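To make the objection concrete, here is a quick numerical check (a minimal sketch; taking $\alpha = 0$, $p = 1$, and $C = 1$ is my choice of parameters for the oscillating example, not something from the definition) that every ratio in the bound is satisfied even though the sequence never converges:

```python
# x_k = (-1)^k oscillates between -1 and 1 and does not converge,
# yet with alpha = 0, p = 1, C = 1 the ratio bound holds for every k.
alpha, p, C = 0.0, 1, 1.0
xs = [(-1) ** k for k in range(20)]
ratios = [abs(xs[k + 1] - alpha) / abs(xs[k] - alpha) ** p
          for k in range(len(xs) - 1)]
print(all(r <= C for r in ratios))  # → True: every ratio is exactly 1.0
```

So the inequality on its own really is just a boundedness statement, which is the heart of the question.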

I don't know if I've misunderstood something, or if it's just taken as read that an algorithm would be constructed sensibly so as not to exhibit behaviour like this. I just don't understand why the statement is written this way when the normal sequence definition would surely work perfectly well here too.

Hope that makes sense, thanks in advance for any help.

1 Answer


I'd like to see one of your sources, because I feel like something is missing from the definition you provided, such as requiring $C < 1$ (at least when $p = 1$).

But IF the sequence converges, THEN the condition you've provided says a lot about the SPEED of convergence, and it does so in terms of the behavior of a single iteration of the algorithm. Faster-converging algorithms are good because you need fewer steps to achieve a fixed precision, and so you need a way to talk about that speed of convergence, which the standard definition of convergence does not give you.

btilly