The problem with that function, as described, is the non-tail-recursive calls. This means the recursion involved needs a stack to work (in your sample, it's the call stack). In other words, that function is roughly equivalent to:
import scala.collection.mutable.Stack

def fibonacci(n: Int): BigInt = {
  var result = BigInt(0)
  val stack = Stack.empty[Int]
  stack.push(n)

  while (stack.nonEmpty) {
    val x = stack.pop()
    if (x == 1) {
      result += 1
    }
    else if (x > 1) {
      stack.push(x - 2)
      stack.push(x - 1)
    }
  }
  result
}
As you can see, that's not very efficient, is it? You can view the calls being made as a binary tree whose depth depends on N, and the number of leaves on it is roughly 2^N (actually closer to φ^N ≈ 1.6^N, but still exponential). So we are talking about O(2^N) time complexity and O(N) memory complexity (i.e. the needed stack size is N). That's exponential growth for time and linear growth for the memory used. What that means is that it takes a loooong time to process and it uses more memory than it should. Btw, it's a good idea as a software developer to reason in terms of Big O notation, because that's the first thing you need to look at when talking about performance or memory consumption.
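For reference, this is presumably the shape of the function under discussion (the original code isn't reproduced here, so this is an assumption): the classic non-tail-recursive definition, whose two recursive calls are exactly what produce that binary call tree:

```scala
// Assumed shape of the original function (not shown above): the naive,
// non-tail-recursive definition. The two recursive calls form the binary
// call tree described earlier, hence the exponential running time.
def fibonacci(n: Int): BigInt =
  if (n <= 1) BigInt(n)
  else fibonacci(n - 1) + fibonacci(n - 2)
```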
Thankfully, for Fibonacci we don't need that recursion. Here's a more efficient implementation:
def fibonacci(n: Int): BigInt = {
  var a = BigInt(0)
  var b = BigInt(1)
  var idx = 0

  while (idx < n) {
    val tmp = a
    a = b
    b = tmp + a
    idx += 1
  }
  a
}
This is just a plain loop; it doesn't need a stack to work. The memory complexity is O(1), meaning it needs a constant amount of memory, independent of input. In terms of time this algorithm is O(N): computing the result involves a loop of N iterations, so the growth in time does depend on the input N, but it is linear, not exponential.
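One way to convince yourself the loop is correct: at the start of every iteration, the pair (a, b) holds (fib(idx), fib(idx + 1)). Here's a quick sanity check of that, comparing against the first ten well-known Fibonacci numbers:

```scala
def fibonacci(n: Int): BigInt = {
  var a = BigInt(0)
  var b = BigInt(1)
  var idx = 0
  while (idx < n) {
    val tmp = a
    a = b
    b = tmp + a
    idx += 1
  }
  a // invariant: (a, b) == (fib(idx), fib(idx + 1)) before and after each iteration
}

// First ten Fibonacci numbers, for comparison:
val expected = List(0, 1, 1, 2, 3, 5, 8, 13, 21, 34).map(BigInt(_))
assert((0 until 10).map(fibonacci).toList == expected)
```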
In Scala, you can also describe this as a tail recursion:
import scala.annotation.tailrec

def fibonacci(n: Int): BigInt = {
  @tailrec
  def loop(a: BigInt, b: BigInt, idx: Int = 0): BigInt =
    if (idx < n)
      loop(b, a + b, idx + 1)
    else
      a

  loop(0, 1)
}
This loop is described as a recursive function, but because the recursive call is in tail position (a "tail call"), the compiler rewrites this function into a simple loop. Note the @tailrec annotation. It's not strictly necessary — the compiler will optimize that into a loop without it — but if you use this annotation, the compiler will emit an error if the annotated function is not tail-recursive. Which is nice, because it's easy to make mistakes when relying on tail recursion to work (i.e. you make a change and, without noticing, bam, the function is no longer tail-recursive). Use this annotation, because the compiler can protect you if you do.
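To make that concrete, here's a minimal sketch (sumTo is a hypothetical example, not from the question): an accumulator-style function compiles fine under @tailrec, while annotating the naive version would be rejected at compile time:

```scala
import scala.annotation.tailrec

object TailrecDemo {
  // Tail-recursive: the recursive call is the very last action, so @tailrec compiles.
  @tailrec
  def sumTo(n: Int, acc: BigInt = 0): BigInt =
    if (n <= 0) acc
    else sumTo(n - 1, acc + n)

  // NOT tail-recursive: `n + ...` still has work to do after the recursive call
  // returns. Uncommenting the @tailrec below would be a compile-time error.
  // @tailrec
  def sumToNaive(n: Int): BigInt =
    if (n <= 0) BigInt(0)
    else n + sumToNaive(n - 1)
}
```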
So in this case you work with immutable values (no more vars), yet it has the same performance characteristics as the while loop. Which version you prefer is up to you. I prefer the latter, because it's easier for me to spot invariants and exit conditions, plus I like immutability, but other people prefer the former. As for the idiomatic way of doing this, you can also get fancy with a lazy Stream or Iterable, but nobody in the FP department will complain about tail recursions :-)
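For completeness, here's what that "fancy" lazy variant might look like — a sketch using LazyList, which in Scala 2.13+ replaces the now-deprecated Stream (on older versions, Stream with the same #:: operator works the same way):

```scala
// Infinite lazy sequence of Fibonacci numbers: each element past the first
// two is the sum of the two preceding ones, defined in terms of the sequence
// itself. Only the elements you actually ask for get computed.
val fibs: LazyList[BigInt] =
  BigInt(0) #:: BigInt(1) #:: fibs.zip(fibs.tail).map { case (a, b) => a + b }

def fibonacci(n: Int): BigInt = fibs(n)
```

Indexing with fibs(n) is O(n) per lookup, but the already-computed prefix is memoized, so repeated lookups are cheap.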