
In the Coursera course *Functional Programming in Scala*, Martin Odersky talks about how imperative programming is constrained by the von Neumann bottleneck because it deals largely with mutable state and, therefore, with assignment and dereferencing.

The von Neumann bottleneck is the latency involved in reading/writing data between the processor and main memory.

I am struggling to understand two things and am hoping someone can help shed some light on them:

  1. If we only use immutable objects when writing a Scala program, we still have assignment when we construct an immutable object and initialize it with data, just no further reassignment. When we want to dereference an immutable object, there is still the chance that it no longer exists in the cache and has to be fetched again from main memory, which means latency.

    I'm struggling to understand how using immutable data structures helps with the von Neumann bottleneck. Can anyone help me appreciate the cases where it does? (I've added a small Scala sketch of what I mean just after this list.)

  2. In his course lecture, Martin Odersky states the following while talking about the von Neumann bottleneck:

    Imperative programming conceptualises programs word for word, which becomes a problem for scalability since we are dealing with data structures at too low a level. Functional programming languages (in general) tend to focus on defining theories and techniques for working with higher-level abstractions such as collections, polynomials, documents, etc.

    I understand that using higher-level abstractions can really help a developer scale the efficiency of their development work, but how do abstractions help address the von Neumann bottleneck?
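
For concreteness, here is a minimal Scala sketch of what I mean in point 1 (the `Point` class is just a made-up example):

```scala
// Construction still "assigns" the initial field values once...
case class Point(x: Int, y: Int)

val p = Point(1, 2)   // immutable: fields are set here and never change
// p = Point(3, 4)    // ...but reassignment like this won't compile (p is a val)

// Reading p.x later still dereferences memory, so if p has been evicted
// from the cache it must be fetched from main memory again.
val sum = p.x + p.y
```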

  • Perhaps Odersky is talking about the *intellectual bottleneck* created by the von Neumann architecture, [as described by John Backus](https://en.wikipedia.org/wiki/Von_Neumann_architecture#Von_Neumann_bottleneck): "Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand." – juanpa.arrivillaga May 11 '17 at 18:03
  • @juanpa.arrivillaga Thanks for the comment regarding point 2; Martin Odersky does mention John Backus, so that is much clearer now. Cheers – harry callahan May 11 '17 at 18:58

1 Answer


You should read the original paper by John Backus, "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs". It talks about two kinds of bottleneck: one is the physical hardware limitation, and the other is a conceptual bottleneck created by the way programmers think about languages. On your second question: because earlier languages sat close to their respective hardware implementations, programmers' thinking tended to mimic that sequential, word-at-a-time flow of events. Functional languages give us a new way to look at programs, in which we express operations over whole sets of data, and those operations can potentially be executed in parallel.
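
As a rough illustration of that word-at-a-time versus whole-collection contrast (my own sketch, not code from the paper):

```scala
// Word-at-a-time (imperative) style: step an index and mutate an accumulator,
// mirroring the one-word-at-a-time traffic between CPU and memory.
def sumOfSquaresImperative(xs: Array[Int]): Int = {
  var total = 0
  var i = 0
  while (i < xs.length) {
    total += xs(i) * xs(i)
    i += 1
  }
  total
}

// Collection-at-a-time (functional) style: describe the whole transformation,
// with no explicit indices and no mutable accumulator.
def sumOfSquaresFunctional(xs: Vector[Int]): Int =
  xs.map(x => x * x).sum
```

Because the functional version says *what* to compute over the whole collection rather than *how* to step through it, the compiler or runtime has more freedom in how the work is actually carried out.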

On the first question, I would like to quote a comment from wiki.c2.com:

"What effect does the choice of programming language have on the hardware? A functional language which is compiled to run on a von Neumann machine will still suffer the bottleneck." The answer is ReferentialTransparency, which makes parallel computation much more tractable (and capable of being automated). Effectively parallelizing imperative languages is still an active research topic.

http://wiki.c2.com/?CanProgrammingBeLiberatedFromTheVonNeumannStyle
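
To make the referential transparency point concrete, here is a small sketch (assuming Scala with parallel collections; on Scala 2.13+ the `.par` conversion lives in the separate scala-parallel-collections module, hence the import, while on 2.12 and earlier it is built in):

```scala
import scala.collection.parallel.CollectionConverters._

// square is referentially transparent: same input, same output, no side effects,
// so the calls can be evaluated in any order, or in parallel, without changing the result.
def square(x: Int): Int = x * x

val xs = (1 to 1000).toVector

val sequential = xs.map(square).sum
val parallel   = xs.par.map(square).sum  // safe to parallelise precisely because
                                         // square touches no shared mutable state

assert(sequential == parallel)
```

An imperative loop updating a shared counter would not be safe to split across threads like this without extra synchronisation, which is the sense in which referential transparency makes parallel computation more tractable.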

mayur