In the Streams, Part 2 video lecture of SICP, Abelson gives an example of an analog computer solving differential equations. He then programs the same thing in Scheme, using lazy evaluation to get around a circular definition dependency.
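In Haskell, where laziness is the default, that circular definition needs no explicit `delay` at all. A minimal forward-Euler sketch (the function name `solve` and step size `h` are my own, not from the lecture):

```haskell
-- Approximate dy/dt = f(y) over an infinite lazy stream of samples.
-- ys and dys are defined in terms of each other; lazy evaluation
-- resolves the circular dependency, just as delay/force does in
-- Abelson's Scheme version.
solve :: (Double -> Double) -> Double -> Double -> [Double]
solve f y0 h = ys
  where
    ys  = y0 : zipWith (\y dy -> y + h * dy) ys dys  -- y(t+h) = y(t) + h*dy
    dys = map f ys                                   -- dy/dt at each sample
```

For instance, `take 5 (solve id 1 0.1)` approximates e^t at t = 0, 0.1, 0.2, ... since dy/dt = y with y(0) = 1 has solution e^t.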
The problem with this technique, he says, is that as you design more complicated programs, delayed expressions end up sprinkled everywhere, which makes the code difficult to understand. The elegant fix, he says, is to make the entire language lazy, but at a price: you give up some expressive power, and you take on the dragging tail problem.
This is the approach taken by Miranda and Haskell. In Haskell I have found it difficult to reason about big-O complexity, and easy to write programs that consume far more memory and time than they should.
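The kind of trap I mean is easy to reproduce. A plain `foldl` builds the entire chain of unevaluated thunks before doing a single addition, so a fold that reads like a constant-space loop actually uses linear heap; the strict variant `foldl'` is the standard fix:

```haskell
import Data.List (foldl')

-- foldl builds the whole thunk chain (((0+1)+2)+...)+1000000 before
-- evaluating anything: O(n) heap for what looks like an O(1)-space loop.
lazySum :: Int
lazySum = foldl (+) 0 [1 .. 1000000]

-- foldl' forces the accumulator at every step and runs in constant space.
strictSum :: Int
strictSum = foldl' (+) 0 [1 .. 1000000]

-- Both return 500000500000; only their memory behavior differs.
```

Nothing in the types or the source distinguishes the two complexities; you have to know the evaluation strategy.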
I once spoke to Robert Harper about this problem. He disagreed that a language has to be lazy throughout to be elegant, and argued that pervasive laziness is a design flaw in Haskell. How, specifically, would one go about making a language partially lazy to solve this problem? Are there examples of such languages? I'd like to learn more about functional languages, but banning side effects and eager evaluation everywhere, including in I/O, makes things a bit... counterintuitive.
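One common design, and possibly the one Harper has in mind, inverts the default: a strict language in which laziness is an explicit, local construct rather than a global evaluation strategy. OCaml's `lazy` / `Lazy.force` and Scala's `lazy val` both work this way. A sketch of the mechanism, written here in Haskell with an `IORef` standing in for the memoization that a strict runtime would perform (the names `Susp`, `delay`, and `force` are my own, chosen to echo Scheme):

```haskell
import Data.IORef

-- A suspension is either a pending computation or its cached result.
data Cell a = Unevaluated (IO a) | Evaluated a

newtype Susp a = Susp (IORef (Cell a))

-- delay wraps a computation without running it.
delay :: IO a -> IO (Susp a)
delay act = Susp <$> newIORef (Unevaluated act)

-- force runs the computation at most once and memoizes the result,
-- so repeated forcing is cheap -- the essence of call-by-need.
force :: Susp a -> IO a
force (Susp ref) = do
  cell <- readIORef ref
  case cell of
    Evaluated v     -> return v
    Unevaluated act -> do
      v <- act
      writeIORef ref (Evaluated v)
      return v
```

With opt-in laziness like this, the eager parts of a program keep their predictable cost model, and suspensions appear only where the programmer asks for them.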