14

I have seen multiple references to the Church-Rosser theorem, and in particular the diamond property diagram, while learning functional programming, but I have not come across a great code example.

If a language like Haskell can be viewed as a kind of lambda calculus, then it must be possible to drum up some examples using the language itself.

Bonus points if the example clearly shows how the reduction steps lead to easily parallelizable execution.

user1411349
  • I'm not familiar with that theorem, but at first look it seems more useful from a theoretical point of view than for actually writing code. It's among the "good" properties a rewrite system may have, to ensure confluence. A simple example (I'm not sure) may be the expression `(5-1)*(1+1)`: you may start by simplifying either `5-1` or `1+1`, but you end up with the same result (see the sketch after these comments). – Riccardo T. May 23 '12 at 23:12
  • This explains the notion of parallel reduction: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.5081 Section 4, page 12 – Riccardo T. May 23 '12 at 23:28
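To make that arithmetic example concrete, here is a minimal Haskell sketch of my own (not from the thread) spelling out both reduction paths and checking that they meet:

-- The two reduction paths for (5-1)*(1+1), written out step by step.
pathA :: [String]
pathA = ["(5-1)*(1+1)", "4*(1+1)", "4*2", "8"]  -- simplify 5-1 first

pathB :: [String]
pathB = ["(5-1)*(1+1)", "(5-1)*2", "4*2", "8"]  -- simplify 1+1 first

main :: IO ()
main = print (last pathA == last pathB)  -- True: both paths end at "8"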

1 Answer

17

All this theorem states is that if an expression can be reduced along multiple paths, the results of those paths can always be reduced further to a common form.

For example, take this piece of Haskell code:

vecLenSq :: Float -> Float -> Float
vecLenSq x y =
  xsq + ysq
  where
    xsq = x * x
    ysq = y * y

In Lambda Calculus, this function is roughly equivalent to (parens added for clarity, operators assumed primitive):

λ x . (λ y . (λ xsq . (λ ysq . (xsq + ysq)) (y * y)) (x * x))
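The same desugaring can be written directly in Haskell; the primed name is my own, not part of the original answer:

-- vecLenSq with the where-bindings rewritten as applications of
-- explicit lambdas, mirroring the lambda-calculus term above.
vecLenSq' :: Float -> Float -> Float
vecLenSq' x y = (\xsq -> (\ysq -> xsq + ysq) (y * y)) (x * x)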

The expression can be reduced either by first β-reducing the redex that binds xsq, or by first β-reducing the redex that binds ysq; i.e., the "order of evaluation" is arbitrary. One can reduce the expression in the following order (ysq first):

λ x . (λ y . (λ xsq . (xsq + (y * y))) (x * x))
λ x . (λ y . ((x * x) + (y * y)))

... or in the following order (xsq first):

λ x . (λ y . (λ ysq . ((x * x) + ysq)) (y * y))
λ x . (λ y . ((x * x) + (y * y)))

The result is evidently the same.
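One way to check this mechanically: the following is a minimal sketch of my own (not from the original answer) encoding just enough of a term language to replay both reduction orders and confirm they meet. The substitution is deliberately naive, which is safe here only because the bound names xsq and ysq never occur in the arguments being substituted:

-- A tiny term language, just big enough for the body of vecLenSq.
data Term
  = Var String        -- variable reference, e.g. x or xsq
  | Lam String Term   -- λ name . body
  | App Term Term     -- application
  | Mul Term Term     -- primitive (*)
  | Add Term Term     -- primitive (+)
  deriving (Eq, Show)

-- Naive substitution of s for Var n (no capture avoidance needed here).
subst :: String -> Term -> Term -> Term
subst n s (Var m)   | n == m    = s
                    | otherwise = Var m
subst n s (Lam m b) | n == m    = Lam m b              -- n is shadowed
                    | otherwise = Lam m (subst n s b)
subst n s (App f a) = App (subst n s f) (subst n s a)
subst n s (Mul a b) = Mul (subst n s a) (subst n s b)
subst n s (Add a b) = Add (subst n s a) (subst n s b)

-- β-reduce the redex at the root, if there is one.
beta :: Term -> Term
beta (App (Lam n b) a) = subst n a b
beta t                 = t

-- β-reduce the redex inside the outer lambda's body instead.
inner :: Term -> Term
inner (App (Lam n b) a) = App (Lam n (beta b)) a
inner t                 = t

-- (λ xsq . (λ ysq . xsq + ysq) (y * y)) (x * x)
body :: Term
body =
  App (Lam "xsq"
         (App (Lam "ysq" (Add (Var "xsq") (Var "ysq")))
              (Mul (Var "y") (Var "y"))))
      (Mul (Var "x") (Var "x"))

main :: IO ()
main = do
  let xsqFirst = beta (beta body)   -- contract the xsq redex, then ysq
      ysqFirst = beta (inner body)  -- contract the ysq redex, then xsq
  print xsqFirst
  print ysqFirst
  print (xsqFirst == ysqFirst)      -- True: the diamond closes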

This means that the terms xsq and ysq are independently reducible, so their reductions may be parallelized. Indeed, one could parallelize the reductions like so in Haskell (`par` and `pseq` come from Control.Parallel in the parallel package):

import Control.Parallel (par, pseq)

vecLenSq :: Float -> Float -> Float
vecLenSq x y =
  (xsq `par` ysq) `pseq` xsq + ysq  -- spark xsq in parallel, evaluate ysq, then sum
  where
    xsq = x * x
    ysq = y * y

This parallelization would in reality offer no advantage in this particular situation, since two simple float multiplications executed in sequence are cheaper than two parallelized multiplications burdened with scheduling overhead, but it might be worthwhile for more complex operations.
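For a sense of when it does pay off, here is a hedged sketch; the `work` function and its cost are stand-ins of my own, not from the answer. Compile with `ghc -threaded` and run with `+RTS -N2` so GHC can actually use two cores:

import Control.Parallel (par, pseq)

-- A deliberately expensive stand-in for real work (hypothetical).
work :: Integer -> Integer
work n = sum [1 .. n * 1000000]

-- Spark x while evaluating y, then combine the two results.
parWork :: Integer -> Integer -> Integer
parWork a b = (x `par` y) `pseq` (x + y)
  where
    x = work a
    y = work b

main :: IO ()
main = print (parWork 3 4)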

dflemstr
  • Actually, two float multiplications can be parallelised very well on modern (e.g. x86) processors, using a single SSE instruction. But that's not the kind of parallelism you get with `par`. – leftaroundabout May 24 '12 at 12:20