
Haskell pattern matching is often head-strict: for example, `f (x:xs) = ...` requires the input list to be evaluated to `(thunk : thunk)`. But sometimes such evaluation is not needed, and the function can afford to be non-strict in some arguments, for example `f (x:xs) = 3`.

Ideally, in such situations we could avoid evaluating the argument and get the behaviour of `const 3`, which can be done with an irrefutable pattern: `f ~(x:xs) = 3`. This gives us performance benefits and greater error tolerance.
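To make the difference concrete, here is a small sketch of the two definitions side by side (the names `fStrict`/`fLazy` are mine):

```haskell
fStrict :: [a] -> Int
fStrict (_:_) = 3     -- head-strict: forces the list to WHNF, fails on []

fLazy :: [a] -> Int
fLazy ~(_:_) = 3      -- irrefutable: never inspects the list at all

-- Denotationally these differ:
--   fStrict undefined  -->  *** Exception: Prelude.undefined
--   fStrict []         -->  *** Exception: Non-exhaustive patterns
--   fLazy undefined    -->  3
--   fLazy []           -->  3
main :: IO ()
main = print (fLazy (undefined :: [Int]))   -- prints 3
```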

My question is: does GHC already implement such transformations via some kind of strictness analysis? I'd appreciate it if you could also point me to some reading on it.

Jun Xu
  • I don't know for certain, but I'm pretty sure it does not try to do that. Irrefutable patterns exist for a reason, that reason being the intended difference in semantics between the two definitions you showed, just as you've described them. AFAIK strictness analysis seeks to *reduce* laziness and perform the unavoidable computations earlier, if it can be proved it won't change the semantics. But these are just hunches and impressions that I have. Perhaps a close reading of [the Report](https://www.haskell.org/onlinereport/haskell2010/) will provide more definitive answers. – Will Ness Sep 15 '18 at 12:48
  • If we bring this reasoning to its extreme, we should also "optimize" `seq x y` to `y`, which would defeat the purpose of `seq`. A strict pattern, in principle, could be used as a special case of `seq`. – chi Sep 15 '18 at 17:58
  • The `~` changes the semantics of the function, so GHC doesn't do that. – augustss Sep 16 '18 at 00:38

1 Answer


As far as I know, GHC will never make something more lazy than specified by the programmer, even if it can prove that doesn't change the semantics of the term. I don't think there is any fundamental reason to avoid changing the laziness of a term when we can prove the semantics don't change; I suspect it's more of an empirical observation that we don't know of any situations where that would be a really great idea. (And if a transformation would change the semantics, I would consider it a bug for GHC to make that change.)
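For a sense of what such a bug would look like, consider `seq` (as chi points out in the comments on the question): making the term lazier by rewriting `seq x y` to `y` would change which programs diverge. A minimal sketch:

```haskell
-- `seq x y` evaluates x to WHNF before returning y, so the two
-- expressions below have different semantics, even though a "lazier"
-- compiler might be tempted to equate them:
diverges, fine :: Int
diverges = undefined `seq` 42   -- forcing undefined raises an exception
fine     = 42                   -- the hypothetical "optimized" version

main :: IO ()
main = print fine   -- prints 42; `print diverges` would crash instead
```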

There is only one possible exception that comes to mind, the so-called "full laziness" transformation, described well on the wiki. In short, GHC will translate

```haskell
\a b -> let c = {- something that doesn't mention b -} in d
```

to

```haskell
\a -> let c = {- same thing as before -} in \b -> d
```

to avoid recomputing c each time the partially applied function is given a new b. But it seems to me that this transformation is more about memoization than about laziness: the two terms above appear to me to have the same (denotational) semantics wrt laziness/strictness, and are only operationally different.
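To make that concrete, here is a sketch of the sharing that full laziness buys (the function and names are illustrative, not actual GHC output):

```haskell
-- Before floating: `table` is rebuilt every time (f n) is applied
-- to a new b.
f :: Int -> Int -> Int
f n b = let table = sum [1..n]   -- does not mention b
        in table + b

-- After full laziness (roughly): `table` is allocated once per partial
-- application and shared across all later arguments.
f' :: Int -> Int -> Int
f' n = let table = sum [1..n]
       in \b -> table + b

main :: IO ()
main = let g = f' 1000000
       in print (g 1 + g 2)   -- with f', the sum is computed only once
```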

Daniel Wagner
  • It's worth noting that the suggested change in the original question *does* change semantics wrt laziness - it removes some possible bottoms. Sometimes you want very precise control over what bottoms can result from code, and the compiler switching that up on you is awkward. – Carl Sep 15 '18 at 14:25