The definition of convergence for root-finding algorithms is given in a few sources as: a sequence $\{x^{(k)}\}$ generated by a numerical method is said to converge to the root $\alpha$ with order $p \geq 1$ if:
$$\exists\, C > 0 : \quad \frac{|x^{(k+1)} - \alpha|}{|x^{(k)} - \alpha|^p} \leq C \quad \forall k \geq k_0.$$
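To check my reading of the inequality, here's a quick numerical sketch (my own toy example, not from those sources), using Newton's method on $f(x) = x^2 - 2$, which is supposed to have order $p = 2$; the error ratio does indeed stay bounded:

```python
import math

# Newton's method on f(x) = x^2 - 2, converging to alpha = sqrt(2) with p = 2.
alpha = math.sqrt(2)
x = 3.0
for k in range(6):
    x_next = x - (x * x - 2.0) / (2.0 * x)  # Newton step: x - f(x)/f'(x)
    e, e_next = abs(x - alpha), abs(x_next - alpha)
    if e > 1e-12:  # stop printing once the error is floating-point noise
        print(k, e_next / e**2)  # ratio |x^(k+1) - alpha| / |x^(k) - alpha|^2
    x = x_next
```

The printed ratios settle near $0.35$, so a single $C$ bounds them all, as the definition requires.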
My issue is the quantifier order: it's $\exists C$ such that the inequality holds $\forall k \geq k_0$, instead of the $\forall \epsilon \; \exists N$ pattern you would have in the ordinary definition of convergence for a sequence. Why the change, and how does it even make sense? Surely all you're establishing is that the ratio of successive errors is bounded?
A sequence that just oscillates within bounds, such as $x^{(k)} = (-1)^k$, would be considered convergent to $\alpha = 0$ by the above definition (with $p = 1$ the ratio is identically $1$, so $C = 1$ works for every $k$), despite clearly not converging at all. What gives?
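To make the worry concrete, here's that same ratio computed for the oscillating sequence:

```python
# The oscillating sequence x^(k) = (-1)^k with candidate "root" alpha = 0
# and order p = 1: the error ratio is identically 1, so C = 1 bounds it
# for every k, yet the sequence obviously does not converge to 0.
alpha, p = 0.0, 1
for k in range(8):
    x_k, x_next = (-1) ** k, (-1) ** (k + 1)
    ratio = abs(x_next - alpha) / abs(x_k - alpha) ** p
    print(k, ratio)  # prints 1.0 every time
```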
I don't know if I've misunderstood something, or if it's just taken as read that an algorithm would be constructed sensibly so as not to exhibit behaviour like this, but I don't understand why the definition is stated this way when the ordinary sequence definition would surely work perfectly well here too.
Hope that makes sense; thanks in advance for any help.