
What is the time complexity for the following function?

    for(int i = 0; i < a.size(); i++) {
        for(int j = i; j < a.size(); j++) {
            // constant-time work
        }
    }

I think it is less than O(n^2), because we aren't iterating over all of the elements in the second for loop. I believe the time complexity comes out to be something like this:

n[ (n) + (n-1) + (n-2) + ... + (n-n) ]

But when I solve this formula it comes out to be

n^2 - n + n^2 - 2n + n^2 - 3n + ... + n^2 - n^2

Which doesn't seem correct at all. Can somebody tell me exactly how to solve this problem, and where I am wrong?

user2158382
  • The outer n is wrong, as you have already enumerated all items. Furthermore, the inner loop goes from i to n-1 and not from i+1 to n-1, so for the first outer iteration it's n and not n-1. – Gumbo May 13 '14 at 05:23
  • Why would I not need the outer loop? Wouldn't the inner loop `j` run for each outer loop `i`? – user2158382 May 13 '14 at 08:18
  • @user2158382, He is not suggesting that you don't need the outer loop. He is saying that your computation of the time complexity should not have the outer `n`. – merlin2011 May 13 '14 at 08:19

3 Answers


That is O(n^2). If you consider the iteration where i = a.size() - 1 and work your way backwards (i = a.size() - 2, i = a.size() - 3, etc.), you are looking at the following sum of iteration counts, where n = a.size().

1 + 2 + 3 + 4 + ... + n

The sum of this series is n(n+1)/2, which is O(n^2). Note that big-O notation ignores constant factors and, when applied to a polynomial, keeps only the highest power.
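If you want to verify the count numerically, here is a minimal, self-contained sketch (my own construction; the vector `a` and its size of 10 are arbitrary assumptions) that tallies the inner-loop executions and compares the total against n(n+1)/2:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> a(10);          // arbitrary size; only a.size() matters
        const std::size_t n = a.size();

        std::size_t count = 0;
        for (std::size_t i = 0; i < n; i++) {
            for (std::size_t j = i; j < n; j++) {
                count++;                 // one unit of constant-time work
            }
        }

        std::cout << "inner iterations: " << count << "\n";          // prints 55
        std::cout << "n(n+1)/2:         " << n * (n + 1) / 2 << "\n"; // prints 55
    }

For n = 10 both lines print 55.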

merlin2011
  • I don't understand how the loops run for `1 + 2 + 3 + 4 + ... + n`. I believe it would run for `1 + 2 + 3 + 4 + ... + n` on every iteration of `i`, because for every iteration of `i`, `j` would run `1 + 2 + 3 + 4 + ... + n` times. – user2158382 May 13 '14 at 08:15
  • @user2158382, Since you did not tell us what is inside the inner loop, all the answers written here assume that the operation inside the inner loop takes constant time, and so we simply count the **number of times the inner loop runs**. As I mentioned in the answer, consider what happens at the iteration `i = a.size() - 1`: the inner loop will run exactly once. In the iteration `i = a.size() - 2`, the inner loop will run exactly twice. – merlin2011 May 13 '14 at 08:18

It will run for:

1 + 2 + 3 + .. + n

This sum is 1/2 n(n+1), which gives us O(n^2).

Big-O notation keeps only the dominant term, and it neglects constant factors too.

Big-O is only useful for comparing algorithms for the same problem, analyzed under the same cost model, and only when their dominant terms differ.

If the dominant terms are the same, you need to compare the exact time functions, which expose the smaller differences.


Example

A

    for i = 1 .. n
      for j = i .. n
        ..

B

    for i = 1 .. n
      for j = 1 .. n
        ..

We have

Time(A) = 1/2 n(n+1) ~ O(n^2)

Time(B) = n^2 ~ O(n^2)

O(A) = O(B)

T(A) < T(B)
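The gap between T(A) and T(B) is easy to observe directly. Here is a small counting sketch (my addition, not part of the original answer; the loop bounds are just illustrative) showing that A does roughly half the work of B at every size, even though both counts grow quadratically:

    #include <iostream>

    int main() {
        for (long n = 10; n <= 1000; n *= 10) {
            long countA = 0, countB = 0;

            // A: triangular loops, j starts at i
            for (long i = 1; i <= n; i++)
                for (long j = i; j <= n; j++)
                    countA++;

            // B: square loops, j always starts at 1
            for (long i = 1; i <= n; i++)
                for (long j = 1; j <= n; j++)
                    countB++;

            std::cout << "n=" << n << "  A=" << countA
                      << "  B=" << countB << "\n";
        }
    }

For n = 10, 100, 1000 it prints A = 55, 5050, 500500 against B = 100, 10000, 1000000.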

Analysis

To visualize how we got 1 + 2 + 3 + .. + n, instrument the loops so that each outer iteration prints its cost: a 1 for the outer iteration itself, plus the number of inner iterations:

    for i = 1 .. n:
      print "(1 + "
      sum = 0
      for j = i .. n:
        sum++
      print sum ") + "

will print the following:

(1+n) + (1+(n-1)) + .. + (1+3) + (1+2) + (1+1)

There are n terms, so collecting the leading 1s separately from the decreasing counts:

n + (n + (n-1) + .. + 3 + 2 + 1)

n + 1/2 n(n+1)

1/2 n^2 + 3/2 n
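As a quick check, n = 3 gives (1+3) + (1+2) + (1+1) = 9, and the closed form gives 1/2 (3^2) + 3/2 (3) = 4.5 + 4.5 = 9.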
Khaled.K

Yes, the number of iterations is strictly less than n^2, but it's still Θ(n^2). It will eventually be greater than n^k for any k<2, and it will eventually be less than n^k for any k>2.
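To make that concrete, write the exact iteration count as T(n) = n(n+1)/2 and bound it by quadratics on both sides:

    1/2 n^2  <=  n(n+1)/2  <=  n^2    for all n >= 1

so the constants c1 = 1/2 and c2 = 1 witness T(n) = Θ(n^2).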

(As a side note, computer scientists often say big-O when they really mean big-theta (Θ). It's technically correct to say that almost every algorithm you've seen has O(n!) running time; all reasonable algorithms have running times that grow no more quickly than n!. But it's not really useful to say that the complexity is O(n!) if it's also O(n log n), so by some kind of Gricean maxim we assume that when someone says an algorithm's complexity is O(f(x)), the f(x) is as small as possible.)

hobbs