All this theorem states is that if an expression can be reduced along several different paths, the results of those paths can always be reduced further to a common term: if an expression M β-reduces (possibly in several steps) to both N₁ and N₂, then there is some expression P to which both N₁ and N₂ can in turn be reduced.
For example, take this piece of Haskell code:
vecLenSq :: Float -> Float -> Float
vecLenSq x y =
  xsq + ysq
  where
    xsq = x * x
    ysq = y * y
In Lambda Calculus, this function is roughly equivalent to (parens added for clarity, operators assumed primitive):
λ x . (λ y . (λ xsq . (λ ysq . (xsq + ysq)) (y * y)) (x * x))
The expression can be reduced either by first applying a β reduction to xsq, or by first applying a β reduction to ysq; i.e. the "order of evaluation" is arbitrary. One can, for instance, reduce the ysq redex first:
λ x . (λ y . (λ xsq . (xsq + (y * y))) (x * x))
λ x . (λ y . ((x * x) + (y * y)))
... or reduce the xsq redex first:
λ x . (λ y . (λ ysq . ((x * x) + ysq)) (y * y))
λ x . (λ y . ((x * x) + (y * y)))
The result is evidently the same.
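The same check can be mechanized. Below is a minimal sketch in Haskell; the term type Term and the helpers subst and reduceAt are hypothetical names introduced only for this illustration (they are not from any library). It replays both reduction orders on the body of vecLenSq and compares the results:

-- Hypothetical miniature term language: variables, lambdas, application,
-- plus addition and multiplication as primitive operators.
data Term
  = Var String
  | Lam String Term
  | App Term Term
  | Add Term Term
  | Mul Term Term
  deriving (Eq, Show)

-- Naive substitution; safe here because all bound names in the example
-- are distinct, so no variable capture can occur.
subst :: String -> Term -> Term -> Term
subst x s t = case t of
  Var y | y == x    -> s
        | otherwise -> Var y
  Lam y b | y == x    -> Lam y b
          | otherwise -> Lam y (subst x s b)
  App f a -> App (subst x s f) (subst x s a)
  Add a b -> Add (subst x s a) (subst x s b)
  Mul a b -> Mul (subst x s a) (subst x s b)

-- β-reduce the redex whose bound variable has the given name.
reduceAt :: String -> Term -> Term
reduceAt v t = case t of
  App (Lam y b) a | y == v -> subst y a b
  App f a -> App (reduceAt v f) (reduceAt v a)
  Lam y b -> Lam y (reduceAt v b)
  Add a b -> Add (reduceAt v a) (reduceAt v b)
  Mul a b -> Mul (reduceAt v a) (reduceAt v b)
  other   -> other

-- The body of vecLenSq: (λxsq. (λysq. xsq + ysq) (y * y)) (x * x)
body :: Term
body =
  App (Lam "xsq" (App (Lam "ysq" (Add (Var "xsq") (Var "ysq")))
                      (Mul (Var "y") (Var "y"))))
      (Mul (Var "x") (Var "x"))

-- Both reduction orders should end at (x * x) + (y * y).
confluent :: Bool
confluent =
  reduceAt "ysq" (reduceAt "xsq" body) == reduceAt "xsq" (reduceAt "ysq" body)

Evaluating confluent should yield True, mirroring the hand reduction above.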
This means that the terms xsq and ysq are independently reducible, and that their reductions may be parallelized. And indeed, one could parallelize the reductions like so in Haskell:
import Control.Parallel (par, pseq)

vecLenSq :: Float -> Float -> Float
vecLenSq x y =
  (xsq `par` ysq) `pseq` (xsq + ysq)  -- spark xsq, evaluate ysq, then combine
  where
    xsq = x * x
    ysq = y * y
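(par and pseq come from Control.Parallel in the parallel package; for the spark created by par to actually run on another core, the program must be compiled with GHC's -threaded flag and started with a runtime option such as +RTS -N.)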
In this particular situation the parallelization would not offer a real advantage: two simple float multiplications are faster to execute in sequence than in parallel, because the scheduling overhead outweighs the work being done. For more expensive computations, however, the same pattern can be worthwhile.
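As a rough illustration, the sketch below applies the same par/pseq pattern to heavier work; the helper sumSquares and the input sizes are arbitrary placeholders chosen only for this example:

import Control.Parallel (par, pseq)

-- A deliberately expensive pure computation standing in for "real" work.
sumSquares :: Int -> Double
sumSquares n = sum [fromIntegral k * fromIntegral k | k <- [1 .. n]]

-- Same pattern as vecLenSq, but with enough work per spark that the
-- scheduling overhead becomes negligible.
parallelWork :: Double
parallelWork = (a `par` b) `pseq` (a + b)
  where
    a = sumSquares 30000000  -- sparked, may run on another core
    b = sumSquares 40000000  -- forced by pseq on the current thread

main :: IO ()
main = print parallelWork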