
I have written a Java program that calculates the square root of a user-supplied number using Newton's method. The core of the algorithm looks like this:

answer = guess - ((guess * guess - inputNumber) / (2 * guess)); 
while (Math.abs(answer * answer - inputNumber) > leniency) {
    guess = answer;
    answer = guess - ((guess * guess - inputNumber) / (2 * guess));
}
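For context, here is the loop as a self-contained program. The initial guess (`inputNumber / 2.0`), the hard-coded input, and the iteration counter are my additions for illustration; any positive starting guess works:

```java
public class NewtonSqrt {
    public static void main(String[] args) {
        double inputNumber = 612.0; // example input
        double leniency = 1e-9;     // acceptable error in answer^2

        // Arbitrary positive starting guess, then one Newton step
        double guess = inputNumber / 2.0;
        double answer = guess - ((guess * guess - inputNumber) / (2 * guess));

        int iterations = 1;
        while (Math.abs(answer * answer - inputNumber) > leniency) {
            guess = answer;
            answer = guess - ((guess * guess - inputNumber) / (2 * guess));
            iterations++;
        }

        System.out.println("sqrt ~= " + answer + " after " + iterations + " iterations");
    }
}
```

Printing the counter makes it easy to see that the loop runs far fewer times than the size of the input would suggest.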

I'm now trying to work out the complexity of the algorithm (yup, it's homework), and have read here that the time complexity of Newton's method is O(log(n) * F(x)).

However, from the above code snippet, I have interpreted the time complexity to be:

O(1 + ∑(i=1 to n) 1) = O(1 + n) = O(n)

I'm not sure what I'm getting wrong here, but I can't see where the disparity between the two big-O expressions comes from, even after reading Wikipedia's explanation.

Also, I am assuming that "complexity of the algorithm" is synonymous with "time complexity". Is it right to do so?

Would really appreciate help in explaining this apparent paradox, as I'm a newbie student with only a few 'touch and go' programming modules' worth of background.

Thanks in advance :)

levicorpus
  • For larger and larger numbers, the number of iterations required to approach the answer within `leniency` increases, but nonlinearly. – Wug Nov 04 '12 at 18:06
  • @Wug Thank you for your insights. I am assuming that the complexity of iterations with respect to leniency is log(leniency), as in jpalecek's answer too? Please correct me if I'm wrong. I am still not quite getting the concept behind O(log(n) * F(x)), though. Could anyone enlighten me about the 'big picture' and F(x)? Sorry, I'm kinda slow, but I really want to understand. Thanks in advance – levicorpus Nov 05 '12 at 15:54

1 Answer


The problem is that you never actually say what n is in your calculation. If you work out the actual error after each iteration of the algorithm (do it!), you'll see that, e.g., when a is at least 1 and the error is already below 1, each iteration roughly doubles the number of correct digits. So to get p correct decimal places, you only have to perform about log(p) iterations.
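You can watch the doubling happen by printing the error at each step. This is a small experiment of my own (computing √2 from the arbitrary starting guess 1.5), not part of the original program:

```java
public class QuadraticConvergence {
    public static void main(String[] args) {
        double a = 2.0; // we are computing sqrt(2)
        double x = 1.5; // arbitrary starting guess near the root

        for (int i = 0; i < 5; i++) {
            double error = Math.abs(x - Math.sqrt(a));
            System.out.printf("iteration %d: error = %.2e%n", i, error);
            x = x - (x * x - a) / (2 * x); // one Newton step
        }
    }
}
```

The exponent of the error roughly doubles in magnitude each iteration (the error is roughly squared), which is exactly the "number of correct digits doubles" behavior: reaching p digits takes on the order of log(p) steps, not p steps.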

jpalecek
  • Hi jpalecek, thank you for your answer :) I've tried calculating the actual errors, and it does seem that when the error is less than 1, the number of significant figures doubles every time. About your point that log(p) iterations give p decimal places: would it be right to say that, because the search space is halved every iteration, the complexity w.r.t. p is therefore log(p)? – levicorpus Nov 05 '12 at 15:46
  • @levicorpus: I wouldn't say that. I'm not sure what the "search space" would be in this case, and if we took it to mean that there are `2^p` intervals between, e.g., 1 and 2 that can approximate a number with `p` bits of precision, and we search for the answer among those, it would actually mean the algorithm is doing much better than just halving the search space each step. – jpalecek Nov 09 '12 at 17:18