In the Coursera course Functional Programming in Scala, Martin Odersky talks about how imperative programming is constrained by the von Neumann bottleneck because it deals largely with mutable state and, therefore, with assignment and dereferencing.
By the von Neumann bottleneck I mean the latency involved in reading and writing data between the processor and memory.
I am struggling to understand two things and am hoping someone can help shed some light on them:
Even if we used only immutable objects when writing a Scala program, we would still have assignment when we construct an immutable object and initialise it with data; there would just be no further reassignment. And when we dereference an immutable object, there is still the chance that it is no longer in cache and has to be fetched again from main memory, which means latency.
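To make this concrete, here is a minimal sketch of what I mean (my own example, not from the course; the Point class and the values are made up):

```scala
// An immutable point: both fields are vals and there are no setters.
case class Point(x: Int, y: Int)

object ImmutableExample {
  def main(args: Array[String]): Unit = {
    // Construction still involves assignment: x and y are written to memory once.
    val p = Point(1, 2)

    // Dereferencing p.x later can still miss the cache and have to be fetched
    // from main memory, exactly as it would be for a mutable object.
    val shifted = Point(p.x + 10, p.y)

    // But there is no re-assignment: `p.x = 5` would not compile.
    println(shifted)
  }
}
```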
I'm struggling to understand how using immutable data structures helps with the von Neumann bottleneck. Can anyone help me appreciate the cases where it does?
In his course lecture, Martin Odersky states the following while talking about the von Neumann bottleneck:
Imperative programming conceptualises programs word for word, which becomes a problem for scalability since we are dealing with data structures at too low a level. Functional programming languages (in general) tend to focus on defining theories and techniques for working with higher-level abstractions such as collections, polynomials, documents, etc.

I understand that using higher-level abstractions can really help a developer scale the efficiency of their development work, but how do abstractions help address the von Neumann bottleneck?
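For example, this is roughly how I picture the "word for word" versus "higher-level abstraction" difference (again, my own sketch, not code from the lecture):

```scala
object AbstractionLevels {
  // "Word for word": every memory access and index update is spelled out by hand.
  def sumImperative(xs: Array[Int]): Int = {
    var total = 0
    var i = 0
    while (i < xs.length) {
      total += xs(i) // explicit load, add and store on each iteration
      i += 1
    }
    total
  }

  // Higher-level abstraction: state *what* to compute over the whole collection
  // and leave the element-by-element traversal to the library.
  def sumFunctional(xs: Vector[Int]): Int =
    xs.foldLeft(0)(_ + _)

  def main(args: Array[String]): Unit = {
    println(sumImperative(Array(1, 2, 3, 4)))  // 10
    println(sumFunctional(Vector(1, 2, 3, 4))) // 10
  }
}
```

Both versions still read the same elements from memory, so I don't see where the second one escapes the bottleneck.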