
Consider the following recursive function.

int f(int n) {
    if (n <= 0) return 1;
    return f(n - 1) + f(n - 1);
}

It is said that although there are 2^N (2 to the power of N) calls in total, only N calls exist at any given time. Can someone explain how?

Diogo Rocha
Milin
  • But just for the sake of completeness: Space complexity of the described algorithm is still O(N^2), not O(N) – Ctx Feb 06 '16 at 13:57
  • Is it because memory from left recursive calls may not have been cleaned when right recursive calls occupy space or is there any other explanation? – Milin Feb 06 '16 at 22:13
  • It is because your return value is growing with 2^N. To store this value, you need at least O(N) of space. And since you need to store O(N) values of it, the total complexity is O(N^2). – Ctx Feb 07 '16 at 03:34
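As a quick illustration of that comment (this sketch is mine, not part of the original thread, and it uses an ordinary fixed-width int rather than arbitrary precision): the function returns 2^n, so in an arbitrary-precision model each of the O(N) live stack frames would hold an O(N)-bit value, which is where the O(N^2) figure comes from; with machine-width ints the stack space is just the O(N) frames.

#include <stdio.h>

/* Same function as in the question. */
int f(int n) {
    if (n <= 0) return 1;
    return f(n - 1) + f(n - 1);
}

int main(void) {
    /* The return value doubles with every step: f(n) == 2^n. */
    for (int n = 0; n <= 10; n++)
        printf("f(%d) = %d\n", n, f(n));   /* 1, 2, 4, ..., 1024 */
    return 0;
}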

2 Answers


Sure. The complexity comes from the recursive case:

return f(n - 1) + f(n - 1);

This spawns two calls at n-1 for each call at n. However, note the order of evaluation:

call f(n-1) ... left side
save the value as temp1
call f(n-1) ... right side
add the value to temp1
return that value
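In C terms, that evaluation order corresponds roughly to the following (a sketch; the explicit temporary is mine and is not in the original code):

int f(int n) {
    if (n <= 0) return 1;
    int temp1 = f(n - 1);   /* left side: the whole left subtree finishes here */
    temp1 += f(n - 1);      /* right side: starts only after the left call returned */
    return temp1;           /* return the sum */
}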

The entire left-side call tree has to complete before the right-side call tree can begin, and that happens at every depth level. For any given value of n, only one call is active at any point in time. For instance, f(2) gives us a sequence like this:

call f(2)
n isn't <= 0 ...
    call f(1) ... left
    n isn't <= 0 ...
        call f(0) ... left | left
        n <= 0; return 1
        save the 1
        call f(0) ... left | right
        n <= 0; return 1
        add the 1 to the temp value
        return 2
    save the 2
    call f(1) ... right
    n isn't <= 0 ...
        call f(0) ... right | left
        n <= 0; return 1
        save the 1
        call f(0) ... right | right
        n <= 0; return 1
        add the 1 to the temp value
        return 2
    add this 2 to the temp value, making 4
    return the 4
done ...

At each point, there are no more than n+1 calls active (see indentation level), although we eventually execute 2^(n+1)-1 calls. We're off by one (n+1 instead of n) because we terminate at 0 instead of 1.
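If you want to see both numbers empirically, here is a small instrumented version (a sketch of mine, not from the answer; the counters are global for brevity):

#include <stdio.h>

static long total_calls = 0;   /* every call ever made */
static int depth = 0;          /* calls currently on the stack */
static int max_depth = 0;      /* largest value depth ever reached */

int f(int n) {
    total_calls++;
    depth++;
    if (depth > max_depth) max_depth = depth;
    int result = (n <= 0) ? 1 : f(n - 1) + f(n - 1);
    depth--;
    return result;
}

int main(void) {
    int n = 10;
    int value = f(n);
    /* Expected: value = 1024, total_calls = 2^(n+1)-1 = 2047, max_depth = n+1 = 11 */
    printf("f(%d) = %d, total calls = %ld, max simultaneous calls = %d\n",
           n, value, total_calls, max_depth);
    return 0;
}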

Does that clear things up?

Prune
  • Perfect Explanation. Thanks. Appreciate the effort. – Milin Feb 05 '16 at 01:04
  • Glad to be of help. @javad did a nice job, too. Remember to "accept" an answer (eventually) so StackOverflow can retire it gracefully. – Prune Feb 05 '16 at 01:06

For each call to f(n), f(n-1) is called twice. So if you draw a tree with n at the root and add a branch for each call to f(n-1), and so on, you will see that you have a full binary tree of height n, which has 2^(n+1)-1 nodes (on the order of 2^n calls in total). But it is important to note that the second call to f(n-1) cannot be initiated without first completing the first call and returning its result. The same holds recursively for the f(n-2) calls inside f(n-1), and so on.

The worst case (I assume that is what you are looking for) happens when the chain of calls reaches f(0) and you are at one of the leaves of the call tree. What you have on your call stack then is the chain of calls f(n), f(n-1), ..., f(1), f(0). So at any point in time you have at most O(n) calls on your stack, which is equal to the height of the call tree.
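To make that stack picture concrete, here is a sketch (the chain array and the printing are mine, not part of the answer) that prints the arguments of all active calls the first time a leaf f(0) is reached:

#include <stdio.h>

static int chain[64];   /* arguments of the calls currently on the stack */
static int depth = 0;
static int printed = 0;

int f(int n) {
    chain[depth++] = n;            /* record this frame */
    int result;
    if (n <= 0) {
        if (!printed) {            /* report the stack at the first leaf */
            printf("active calls:");
            for (int i = 0; i < depth; i++)
                printf(" f(%d)", chain[i]);
            printf("   (%d frames)\n", depth);
            printed = 1;
        }
        result = 1;
    } else {
        result = f(n - 1) + f(n - 1);
    }
    depth--;                       /* this frame is done */
    return result;
}

int main(void) {
    f(5);   /* prints: active calls: f(5) f(4) f(3) f(2) f(1) f(0)   (6 frames) */
    return 0;
}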

javad