
Based on my personal experience, I would suppose the answer is no. Take mutable arrays, for instance. It seems that there are certain problems which are most efficiently solved using mutable data structures with random access, i.e. arrays. You could use a different data structure such as a dictionary (Map in F#) that is compatible with a functional approach, but then you lose constant-time random access and your algorithm becomes slower by a factor of log n. In some rather rare cases it may be not only convenient but also more efficient to use aliasing, so that the same data structure can be accessed in different ways; but aliasing, too, is incompatible with the immutability of the functional approach.
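To make the difference concrete, here is a small F# sketch of the two access patterns I have in mind (the values are arbitrary, purely for illustration):

```fsharp
// A mutable array gives O(1) random access and in-place updates.
let arr = Array.init 1000 (fun i -> i * i)
let x = arr.[500]        // constant-time read
arr.[500] <- 42          // constant-time write

// The functional-friendly alternative, Map, is a balanced tree,
// so each access pays O(log n).
let m = Map.ofList [ for i in 0 .. 999 -> i, i * i ]
let y = m.[500]          // O(log n) read
let m' = m.Add(500, 42)  // O(log n) "write" that returns a new map
```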

Examples like this suggest that the pure functional approach can't always be as efficient as the imperative one. Of course I know that functional languages are Turing-complete, but I don't remember the proof well enough to judge whether it also tells us anything about time and space complexity. So what do we know about this from a theoretical perspective? Is my assumption right?

2 Answers


The short answer is yes: functional data structures are generally less efficient for these operations:

  • Map and Set are balanced trees and rely on comparison, not hash codes. Hence lookup is O(log n).
  • List offers an indexing API similar to an array's, but it's a singly linked list, so random access is O(n), not O(1) (see the sketch after this list).
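A minimal sketch of that indexing difference (the sizes are arbitrary, just for illustration):

```fsharp
let a  = Array.init 100_000 id   // mutable array
let xs = List.init 100_000 id    // immutable singly linked list

let fast = a.[99_999]            // O(1): jump directly to the slot
let slow = List.item 99_999 xs   // O(n): walks 99,999 cons cells first
```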

However, this is a known trade-off of immutability: it's cheap to "replace" the head of an immutable linked list, because you allocate a single new cell and reuse the entire tail, whereas to "update" an immutable array you have to copy it in full.
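A small sketch of that structural sharing (nothing beyond core F# is assumed):

```fsharp
let rest = [ 2; 3; 4 ]

// One new cons cell each; both lists share the tail [2; 3; 4] in memory.
let original = 1  :: rest
let updated  = 99 :: rest

// No sharing is possible for an immutable array "update":
// every element has to be copied into a fresh array.
let arr     = [| 1; 2; 3; 4 |]
let arrCopy = Array.mapi (fun i x -> if i = 0 then 99 else x) arr
```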

So FP is more about stable code, predictable behavior and high development speed. Most parts of a system aren't performance-critical, so you usually won't notice the difference in resource consumption between the two, while the critical parts can certainly be optimized to use mutable structures.
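For instance, a hot path can use a mutable array internally while staying pure from the caller's point of view. A minimal sketch (the function and its name are mine, purely for illustration):

```fsharp
// Pure from the outside: same input, same output, no visible mutation.
// Internally it uses a mutable counts array for speed.
let countingSort (maxValue: int) (xs: int list) : int list =
    let counts = Array.zeroCreate (maxValue + 1)   // mutable scratch space
    for x in xs do
        counts.[x] <- counts.[x] + 1
    [ for v in 0 .. maxValue do
        for _ in 1 .. counts.[v] -> v ]
```

Since callers can't observe the mutation, such a function composes like any other pure function.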

Since you tagged the question with it: F# is a good language for exactly this, because it has solid support for both FP and OOP.

kagetoki

Pippenger showed that

  1. Any impure program that runs in n steps can be simulated by a pure one with at most a log(n) multiplicative overhead (in a Big-O sense), i.e. in O(n log n) steps.
  2. There do exist certain problems for which that log(n) factor is in fact necessary.

However, there are some problems of interest where the log(n) factor is not necessary, and keep in mind that constant factors may matter in real-world use cases, too.
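For intuition about where the overhead in point 1 comes from, here is a small F# sketch (not Pippenger's actual construction) of a pure program simulating mutable memory with a tree-based map:

```fsharp
// A pure program can model a mutable memory as a balanced-tree map.
// Every read and write then costs O(log n) instead of O(1),
// which is where the multiplicative log(n) overhead comes from.
type Memory = Map<int, int>

let read (addr: int) (mem: Memory) : int =
    Map.tryFind addr mem |> Option.defaultValue 0

let write (addr: int) (value: int) (mem: Memory) : Memory =
    Map.add addr value mem

// 1000 "imperative" store instructions, simulated purely:
// O(n log n) total instead of O(n).
let final =
    Seq.fold (fun mem i -> write i (i * i) mem) Map.empty (seq { 0 .. 999 })
```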

kvb