
In MIT OpenCourseWare's algorithms course, the professor talks about raising a number to a power and the time complexity of doing so.

x^n is simply computed as x*x*x... (n times); imagine a simple for loop with a multiplication being performed inside it.

He states that the time complexity of this approach is theta(n).

Here is my analysis:

Let N(x) be a function that gives the number of digits in x. Then, the complexity of:

x*1 = N(x)

x*x = N(x)*N(x)

x*x*x = N(x^2) * N(x)

x*x*x*x = N(x^3) * N(x)

and so on......

To sum up, T(x^n) = N(x) + N(x)*N(x) + N(x^2)*N(x) + N(x^3)*N(x) + ... + N(x^(n-1))*N(x)

T(x^n) = N(x)[1 + N(x) + N(x^2) + N(x^3) + ... + N(x^(n-1))]

However, I can't simplify it any further. How does this ultimately yield theta(n)?

user720694
  • I'm assuming that n is not a floating point integer. – user720694 Dec 29 '13 at 18:18
  • Is "floating point integer" an integer or a floating point number? X^n for integers should just be x*x*x... If you are using a floating point number (e.g. pow(...)), I think you have to use logarithms for your analysis. This is assuming your doing all this in integer/float math and not finite fields (and other stuff I generally know not of). – FuzzyBunnySlippers Dec 29 '13 at 19:04

3 Answers


Think of it like this.

If you consider multiplication between two numbers to be an operation that takes unit time, then a two-number multiplication is done in theta(1) time. Now, in a for loop which runs n-1 times for n numbers, you apply this operation n-1 times. So the theta(1)-cost operation happens n-1 times, which makes the overall cost theta(n-1), which in asymptotic terms is theta(n).

The multiplication happens like this:

  • x=x
  • x^2 = x*x
  • x^3 = (x^2)*x
  • x^4 = (x^3)*x
  • ...
  • x^(n-1) = (x^(n-2))*x
  • x^n = (x^(n-1))*x

It's theta(1) for each step, as you can use the result of the previous step to calculate the overall product. For example, when you calculate x^2, you can store the value of x^2 and use it while calculating x^3. Similarly, when you calculate x^4, you can use the stored value of x^3.

Now all the individual operations take theta(1) time, and if you do them n times, the total time is theta(n). Now, to calculate the complexity of x^n by induction:

  • for x^2, T(2) = theta(1)
    This is the base case for our induction.
  • Let us assume that for x^k, T(k) = theta(k) holds.
  • Then x^(k+1) = (x^k)*x, so T(k+1) = theta(k) + theta(1) = theta(k+1)

Hence, for x^n, the time complexity is T(n) = theta(n).

And if you do want to sum up the complexity, you are summing it up wrong.

We know that T(2) = theta(1), the time complexity of multiplying two numbers.

  • T(n) = T(n-1) + T(2) (the time complexity of multiplying (n-1) numbers plus the time complexity of multiplying two numbers)
  • T(n) = T(n-2)+T(2)+T(2)
  • T(n) = T(n-3)+T(2)+T(2)+T(2)
  • ...
  • T(n) = T(3) + (n-3)*T(2)
  • T(n) = T(2) + (n-2)*T(2)
  • T(n) = (n-1)*T(2)
  • T(n) = (n-1)*theta(1)
  • T(n) = theta(n)

Since you know C, here is an example of how you would write a naive power function:

 int power(int x, int n)
 {
     int powerVal = 1;
     for (int i = 1; i <= n; ++i)
     {
         /* one theta(1) multiplication per iteration, n iterations in total */
         powerVal = powerVal * x;
     }
     return powerVal;
 }

Now, as you can see, each iteration performs one multiplication of two integers, and that takes only theta(1) time. You run this loop n times, so the total complexity is theta(n).
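
For example, you could sanity-check it like this (a minimal driver, assuming the power function above is in the same file):

 #include <stdio.h>

 int main(void)
 {
     /* the loop performs exactly 10 multiplications here */
     printf("%d\n", power(2, 10)); /* prints 1024 */
     return 0;
 }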

erosenin
  • I understand that doing individual operations n times will amount to theta(n). My question is rather simple, as stated in the comment in reply to Niet the Dark Absol. As we compute x^n starting from 1, then 1*x then 1*x*x, and the digits in x, x^2, x^3.... keep increasing, won't the time complexity be affected? – user720694 Dec 29 '13 at 18:40
  • Just reread your comment. Yes, they will be affected. But for simplicity's sake we consider the multiplication to be a single operation costing us theta(1) time. No matter how large the number. – erosenin Dec 29 '13 at 19:03
  • Which programming language do you know? – erosenin Dec 29 '13 at 19:04
  • I mostly use C/C++/Java. – user720694 Dec 29 '13 at 19:05
  • In real-world computations, the time will increase if the numbers become big. In C/C++ you will never see the increase, as you are multiplying fixed-length integers/long integers, which in the hardware implementation take the same time for any number that fits in them. However, you can see the time increase by multiplying two big integers in Python, compared to machine integers. Though while calculating the time complexity of this algorithm we ignore that. – erosenin Dec 29 '13 at 19:26

You're waaaaaay off-track.

Multiplication is a single operation.

You are applying this operation n times.

Therefore, O(1*n), which is O(n).

Done.

Niet the Dark Absol
  • Pardon my stupidity, but how is multiplication a single operation? I mean, as we compute x^n starting from 1, then 1*x, then 1*x*x, and the digits in x, x^2, x^3... keep increasing, won't the time complexity be affected? – user720694 Dec 29 '13 at 18:26
  • They must be doing that in some finite field (modulo a prime number) – anatolyg Dec 29 '13 at 18:35

If you're looking for the best algorithm to compute the power of a given number, this is not the best solution. Indeed, the power of a number is not computed the way you describe; that method has complexity O(n), because you apply the same operation n times: x*x*...*x. The algorithm below, known as exponentiation by squaring, has complexity O(log n):

 int fast_pow(int x, int n)
 {
     /* exponentiation by squaring; called fast_pow since pow is taken by <math.h> */
     int R = 1, X = x, N = n;
     while (N > 0)
     {
         if (N % 2 == 1)   /* the lowest remaining bit of n is set */
             R = R * X;
         N = N / 2;        /* move on to the next bit */
         X = X * X;        /* X now holds the next square of x */
     }
     return R;
 }
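
For n > 0, the loop runs floor(log2(n)) + 1 times, since N is halved on every iteration; that is where the O(log n) comes from. For example (a minimal driver, assuming the fast_pow function above is in the same file):

 #include <stdio.h>

 int main(void)
 {
     /* n = 10 is 1010 in binary, so the loop runs only 4 times */
     printf("%d\n", fast_pow(2, 10)); /* prints 1024 */
     return 0;
 }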
blackbishop