
I've got a general question about functional programming vs. imperative programming. I am a hobby programmer implementing some engineering problems in C#. You always hear a lot about the benefits of functional programming. Maybe I'm getting it totally wrong, but I can't understand why functional programming does not result in wasted computation time compared to a carefully designed imperative program:

Suppose I have the following scenario, where two class properties are based on a heavy computation and a lightweight computation, respectively. Furthermore, suppose the lightweight result depends on the result of the heavy calculation, but is only needed from time to time. A pseudo-C# implementation might look like this:

public class Class
{
    public double? HeavyComputationVariable { get; set; }
    public double LightComputationVariable { get; set; }

    void CalcHeavyComputation(double input)
    {
        // Some heavy, time-consuming computation here
        HeavyComputationVariable = resultOfHeavyComputation;
    }

    void CalcLightComputation(double input)
    {
        // Run the heavy computation only if it has not been done before
        if (HeavyComputationVariable == null) CalcHeavyComputation(input);

        // Some light computation
        LightComputationVariable = HeavyComputationVariable.Value * resultOfLightComputation;
    }
}

So in this example, when the lightweight computation is called, the heavy computation is only performed if it has not been done before. The lightweight computation therefore does not trigger a recalculation of the expensive variable per se, but only if necessary.

As I understand functional programming, I would implement one function for the complicated calculation and one for the simple one:

fHeavy(someInput)            -> returns complicated
fSimple(fHeavy(someInput))   -> returns simple * fHeavy(someInput)
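
In C# terms, the naive purely functional decomposition I have in mind might look something like the sketch below (FHeavy, FSimple and the Math.Pow body are just placeholders for the real computations):

using System;

public static class Naive
{
    // Placeholder for the expensive computation
    public static double FHeavy(double input) => Math.Pow(input, 10);

    // Calls FHeavy again on every invocation -- exactly the recomputation I am worried about
    public static double FSimple(double input) => FHeavy(input) * 2.0;
}

Here every call to FSimple re-runs FHeavy from scratch, even if the heavy result was already computed elsewhere.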

Maybe the example is not too well defined, but I hope one can understand the general question: how do I avoid intensive and unnecessary recalculation without providing some imperative control to check whether the recalculation is really necessary?

Johannes

2 Answers


The nice thing with immutable values is that you don't even have to do that much manual work, since caching (more often called memoization in this context) is "transparent" -- you can cache everything without changing the behaviour of the program. In some languages, this is done by the compiler.

E.g. in Scala, you can use "lazy vals"; these are internally transformed into functions of type () => A, which are evaluated on first access and then memoized:

lazy val heavyComputation = calcHeavyComputation()
lazy val lightComputation = heavyComputation * somethingElse

And in Haskell, this is even the default behaviour (no special keyword -- every binding is evaluated lazily and its value is shared once computed, unless you use some magic functions):

heavyComputation = runHeavyComputation ()
lightComputation = heavyComputation * somethingElse

In both cases, the values are actually implemented as thunks, not simple "objects". But since there is no mutability, this does not matter denotationally.

Of course, this only works if you stay within the realm of pure functions. With side effects, it gets difficult (although in Scala you can still get quite far without too many problems, if you know what you're doing).
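
Translated back to C#, the same idea is available through the Lazy<T> class: the heavy result is described once as a thunk and evaluated at most once, on first access. A minimal sketch, where ComputeHeavy and ComputeLight are just placeholders for the real computations:

using System;

public class Computations
{
    private readonly Lazy<double> _heavy;
    private readonly Lazy<double> _light;

    public Computations(double input)
    {
        // Nothing is computed here; each Lazy<T> just stores a thunk.
        _heavy = new Lazy<double>(() => ComputeHeavy(input));
        _light = new Lazy<double>(() => _heavy.Value * ComputeLight());
    }

    // Each value is computed at most once, on first access.
    public double Heavy => _heavy.Value;
    public double Light => _light.Value;

    private static double ComputeHeavy(double input) => Math.Pow(input, 10); // placeholder
    private static double ComputeLight() => 2.0;                             // placeholder
}

Reading Light forces the heavy computation exactly once; any later access to Heavy or Light reuses the cached value.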

phipsgabler
  • Thx. Do you know how F# handles this? I read about memoization, but as I understand it, I have to implement it myself via a dictionary. I do not quite get how this is handled when F# is exposed as a static class to C#. Shouldn't the dictionaries collecting the already calculated states be deleted because of the static behaviour? Furthermore, is this not a little bit away from a purely immutable function, since a mutable dictionary is used to store already calculated function calls? Also, does lazy evaluation, in contrast to memoization, really help if a third function depends on the heavy computation? – Johannes Feb 28 '17 at 22:00
  • @Johannes I don't know much about F#, but IIRC it is strict. However, I think that the [`Lazy` class](https://msdn.microsoft.com/en-us/library/dd642331(v=vs.110).aspx) should be able to do all this. As to the question about purity: sure, it is "cheating" in some sense, but it is invisible to the outside: there is no (pure) way to distinguish a lazy value from a strict one, given that the mutability is hidden from the outside (see the dictionary sketch after these comments). – phipsgabler Mar 02 '17 at 13:58
  • Also, one should distinguish just "non-strict evaluation" (evaluated only on demand) from "call by need" (evaluated at most once, with the result shared). – phipsgabler Mar 02 '17 at 13:58
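
To make the dictionary-based memoization discussed in the comments concrete, here is a minimal C# sketch of a generic memoizing wrapper (the Memoizer class and the ComputeHeavy name are just illustrative); the dictionary is mutable internally, but that mutability is invisible to callers of the returned function:

using System;
using System.Collections.Generic;

public static class Memoizer
{
    // Wraps a function so that each distinct argument is computed at most once.
    public static Func<TArg, TResult> Memoize<TArg, TResult>(Func<TArg, TResult> f)
    {
        var cache = new Dictionary<TArg, TResult>();
        return arg =>
        {
            if (!cache.TryGetValue(arg, out var result))
            {
                result = f(arg);
                cache[arg] = result;
            }
            return result;
        };
    }
}

Used as var heavy = Memoizer.Memoize<double, double>(ComputeHeavy);, the first call heavy(3.0) runs ComputeHeavy once, and later calls with the same argument return the cached value without the caller being able to observe the mutation.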

Red herring

Saving 100 ms in your program is not how functional programming saves you time.

Highly readable, reusable code that is easy to reason about and easy to debug saves you countless hours and probably thousands of dollars.


Steel-toed boots

So the lightweight computation does not result in a recalculation of the complicated variable per se, but only if necessary.

You can write bad imperative code and you can write bad functional code. A language won't save you from your own ignorance and stupidity.

To provide a concrete example, consider these two differing functional implementations of Fibonacci – make sure to run them to visualize the amount of work each one does:

// U combinator: applies a function to itself
const U = f => f(f)

// Y combinator built from U: provides recursion to an anonymous function
const Y = U (h => f => f(x => h (h) (f) (x)))

const fib = Y (f => x => 
  (console.log('hard work', x),
    x < 2 ? x : f(x - 1) + f(x - 2)))

// tons of wasted work: each subproblem is recomputed many times
console.log(fib (7)) // 13

// Memoizing variant of Y: results are cached in a Map, so each argument is computed at most once
const Ymem = (f, memo = new Map) => x =>
  memo.has(x) ? memo.get(x) : memo.set(x, f(y => Ymem(f, memo)(y))(x)).get(x)

const fibmem = Ymem (f => x => 
  (console.log('hard work', x),
    x < 2 ? x : f(x - 1) + f(x - 2)))

// no work is duplicated
console.log(fibmem (7)) // 13

With very few exceptions, I'm sure, every language is capable of expressing good code – but it's a double-edged sword: every language, without exception, is capable of expressing bad code.

Mulan