
Suppose you're writing a program that searches an exponentially large or infinite space: game playing, theorem proving, optimization, etc.; anything where you can't search the entire space, and the quality of the results depends heavily on choosing which parts of it to search within the available resources.

In an eager language, this is conceptually straightforward: the language lets you specify order of evaluation, and you use that to control what parts of the search space to evaluate first. (In practice, it tends to get messy and complicated because your code layout for inference control gets mixed in with the problem definition, which is one of the reasons I'm interested in ways to do this in a lazy language instead. But it is conceptually straightforward.)

In a lazy language like Haskell, you can't do it that way. I can think of two ways of doing it instead:

  1. Write code that depends on the exact order of evaluation that happens to be chosen by the current version of the compiler you are using, with the optimization flags you are using, so that stuff ends up happening in just the right order. This seems likely to lead to maintainability issues.

  2. Write code that writes code: specifically, write code that transforms the problem definition, together with a set of heuristics, into a sequence of instructions in an eager language that specifies the exact order in which things should be done. This seems to have merit, if you're willing to pay the upfront costs.

Are there other recommended ways to do this sort of thing?

rwallace
    You would probably get more insightful answers if you provided a concrete example. – Dan Burton Jul 09 '11 at 17:30
    You might be interested in looking at [this Haskell implementation of A*](http://www.haskell.org/haskellwiki/Haskell_Quiz/Astar/Solution_Dolio). Also, [Data.Graph.AStar](http://hackage.haskell.org/packages/archive/astar/0.2.1/doc/html/Data-Graph-AStar.html) – Dan Burton Jul 09 '11 at 17:33

2 Answers


The typical way of doing this in a lazy language is to define the search space as a (possibly infinite) data structure and then write whatever strategy you wish to use to traverse this structure separately. This way, you're in control of the strategy used, but it's kept separate from the problem definition.
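For example, a minimal sketch of this separation (the tree type, the example space, and the strategy names here are illustrative, not from any particular library):

```haskell
-- An infinite search space as a lazy rose tree.
data Tree a = Node a [Tree a]

-- Problem definition: the space of all binary strings, written
-- with no reference to any search strategy.
binaryStrings :: Tree String
binaryStrings = go ""
  where go s = Node s [go (s ++ "0"), go (s ++ "1")]

-- One strategy: depth-limited depth-first traversal.
dfsTo :: Int -> Tree a -> [a]
dfsTo 0 (Node x _)  = [x]
dfsTo n (Node x ts) = x : concatMap (dfsTo (n - 1)) ts

-- Another strategy over the same space: breadth-first traversal.
bfs :: Tree a -> [a]
bfs t = go [t]
  where go []               = []
        go (Node x ts : qs) = x : go (qs ++ ts)
```

Because the tree is lazy, only the nodes a strategy actually visits are ever constructed: `take 5 (bfs binaryStrings)` touches a finite prefix of the infinite tree, and you can swap strategies without touching the problem definition.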

hammar
  • Okay, the first part I think I can see how to do, but how do you write a strategy to traverse the structure in a particular order? – rwallace Jul 09 '11 at 16:35
  • For example, let's say the structure is a game tree. You'd then traverse it recursively and use some heuristic at each step to determine whether to keep going down this subtree or to abandon it. – hammar Jul 09 '11 at 16:39
  • Right, but at that stage, doesn't _keep going down_ mean _evaluate_ (in the sense the language means by evaluate)? Does that bring you back to writing code that depends on the exact decisions made by the language about what gets evaluated when? – rwallace Jul 09 '11 at 16:52
  • In Haskell, you control the order of evaluation through dependencies. If the result of your function depends on the result of a subexpression, then the subexpression will be evaluated whenever the result of the function needs to be evaluated. So it's not like your evaluation order will depend on the compiler version; it's specified as a part of the language. – hammar Jul 09 '11 at 17:00
  • Certainly you can and must depend on evaluation no later than necessary. Is it customary to depend on evaluation no sooner than necessary, i.e. to depend on the exact order of lazy evaluation in the sense that a program in another language would depend on the exact order of strict evaluation? – rwallace Jul 09 '11 at 17:17
  • My remarks about compiler versions were because I was under the impression that an optimizing Haskell compiler often uses eager evaluation to save overhead, in cases where it can prove the computation will terminate. Is this not the case? – rwallace Jul 09 '11 at 17:19
  • I think we might be talking past each other about evaluation order. It's not defined which of `a` and `b` will be evaluated first in `a + b`, but it's an important part of the language semantics that neither is evaluated unless there is a demand for the result. The compiler will have to prove that the semantics are unchanged before introducing any eager evaluation. – hammar Jul 09 '11 at 17:41
    @rwallace: Say you have a game tree, each node being a game state with branches for each possible move from that state. At a given node, you can apply a heuristic to decide whether this branch is worth examining further. If it's not, return an empty result. If it is, do the same thing for its child nodes, then take the best result from those. Doing this will evaluate only the portions of the game tree the heuristic considers worthwhile. – C. A. McCann Jul 09 '11 at 19:17
  • @rwallace: Things are slightly more complicated if you want to limit the search in others ways--searching breadth-first or using a hard search depth limit, say--but the basic concept is the same. – C. A. McCann Jul 09 '11 at 19:19
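The heuristic pruning described in these comments can be sketched as follows (the game, its scoring, and the heuristic are all made up for illustration):

```haskell
-- A game tree: each node holds a position score and the positions
-- reachable by one move. The tree is lazily constructed.
data GameTree = GameTree { score :: Int, moves :: [GameTree] }

-- A toy game whose tree is infinite: from score n, one move loses
-- a point and another gains two.
toyTree :: Int -> GameTree
toyTree n = GameTree n [toyTree (n - 1), toyTree (n + 2)]

-- Prune-and-search: a heuristic decides whether a subtree is worth
-- expanding; subtrees it abandons are never evaluated at all.
bestWithin :: Int -> (Int -> Bool) -> GameTree -> Int
bestWithin 0 _ t = score t
bestWithin depth worthwhile t
  | not (worthwhile (score t)) = score t  -- abandon this branch
  | otherwise =
      maximum (score t : map (bestWithin (depth - 1) worthwhile) (moves t))
```

Only the branches the heuristic approves are demanded, so laziness does the "don't evaluate the rest" part for free; the strategy stays an ordinary function over the tree.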

You can make parts of your Haskell code use strict evaluation:

http://www.haskell.org/haskellwiki/Performance/Strictness
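For instance, a strict accumulator can be forced either with bang patterns or with `seq` (a minimal sketch of the two idioms, not specific to search problems):

```haskell
{-# LANGUAGE BangPatterns #-}

-- A strict left fold: the bang pattern forces the accumulator at
-- each step, so no chain of unevaluated thunks builds up.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where go !acc []       = acc
        go !acc (x : xs) = go (acc + x) xs

-- The same effect with seq, which evaluates its first argument to
-- weak head normal form before returning its second.
sumSeq :: [Int] -> Int
sumSeq = go 0
  where go acc []       = acc
        go acc (x : xs) = let acc' = acc + x
                          in acc' `seq` go acc' xs
```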

Ankur