
So, I wanted to manually prove the composition law for the Maybe applicative, which is:

u <*> (v <*> w) = pure (.) <*> u <*> v <*> w

I used these steps to prove it:

u <*> (v <*> w)          [Left hand side of the law]
  = (Just f) <*> (v <*> w)  [Assume u ~ Just f]
  = fmap f (v <*> w)
  = fmap f (Just g <*> w)   [Assume v ~ Just g]
  = fmap f (fmap g w)
  = fmap (f . g) w

pure (.) <*> u <*> v <*> w  [Right hand side of the law]
  = Just (.) <*> u <*> v <*> w
  = fmap (.) u <*> v <*> w
  = fmap (.) (Just f) <*> v <*> w  [Replacing u with Just f]
  = Just (f .) <*> v <*> w
  = Just (f .) <*> Just g <*> w    [Replacing v with Just g]
  = fmap (f .) (Just g) <*> w
  = Just (f . g) <*> w
  = fmap (f . g) w
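
These steps assume roughly the following definitions for Maybe, written here as standalone functions since the real instance lives in base (a sketch from memory; the exact source may differ slightly, and `pureMaybe`/`apMaybe` are just names for this question):

pureMaybe :: a -> Maybe a
pureMaybe = Just

apMaybe :: Maybe (a -> b) -> Maybe a -> Maybe b
apMaybe (Just f) m = fmap f m   -- the equation used in the steps above
apMaybe Nothing  _ = Nothing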

Is proving it like this correct? What really concerns me is that, in order to proceed with my proof, I assume that u and v are functions wrapped in the Just data constructor. Is that acceptable? Is there a better way to prove this?

  • Sure it's correct to argue by cases. But you need to also check the `Nothing` case. And maybe the bottom case if you care about that. – Ørjan Johansen Jun 09 '14 at 21:21

4 Answers


Applicative functor expressions are just function applications in the context of some functor. Hence:

pure f <*> pure a <*> pure b <*> pure c

-- is the same as:

pure (f a b c)

We want to prove that:

pure (.) <*> u <*> v <*> w == u <*> (v <*> w)

Consider:

u = pure f
v = pure g
w = pure x

Therefore, the left hand side is:

pure (.) <*> u <*> v <*> w

pure (.) <*> pure f <*> pure g <*> pure x

pure ((.) f g x)

pure ((f . g) x)

pure (f (g x))

pure f <*> pure (g x)

pure f <*> (pure g <*> pure x)

u <*> (v <*> w)

For Maybe we know that pure = Just. Hence if u, v and w are Just values then we know that the composition law holds.
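
For a concrete sanity check of the all-Just case (not part of the proof, just one particular choice of f, g and x; `allJustExample` is a throwaway name):

allJustExample :: Bool
allJustExample =
  (Just (+ 1) <*> (Just (* 2) <*> Just 3))
    == (pure (.) <*> Just (+ 1) <*> Just (* 2) <*> Just (3 :: Int))
-- both sides evaluate to Just 7, so this is True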

However, what if any one of them is Nothing? We know that:

Nothing <*> _ = Nothing
_ <*> Nothing = Nothing

Hence if any one of them is Nothing then the entire expression becomes Nothing (except, for the second equation, when the first argument is undefined), and since Nothing == Nothing the law still holds.

Finally, what about undefined (a.k.a. bottom) values? We know that:

(Just f) <*> (Just x) = Just (f x)

Hence the following expressions are themselves bottom (evaluating them will make the program halt with an error):

(Just f) <*> undefined
undefined <*> (Just x)
undefined <*> Nothing

However the following expression will result in Nothing:

Nothing <*> undefined

In each of these cases both sides of the law agree (they are either both Nothing or both bottom), so the composition law still holds.
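
If you want to convince yourself of the `Nothing <*> undefined` claim above, here is a tiny check (assuming the standard Maybe instance from base; `lazyCheck` and the import are just for this example):

import Data.Maybe (isNothing)

-- Nothing <*> x never forces x, so this returns True instead of crashing:
lazyCheck :: Bool
lazyCheck = isNothing ((Nothing :: Maybe (Int -> Int)) <*> undefined)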

Aadit M Shah
  • According to the instance definition, `Nothing <*> _ = Nothing`. But how do you conclude that `_ <*> Nothing = Nothing` ? – Sibi Jun 10 '14 at 05:27
  • We know that `(<*>) :: Applicative f => f (a -> b) -> f a -> f b`. Hence if the second parameter was `Nothing` then we have 3 cases: 1) `Nothing <*> Nothing` which is `Nothing`. 2) `Just f <*> Nothing` 3) `undefined <*> Nothing`. The second case evaluates to `Nothing` because there's no way to get a value of type `a` from `Nothing` and apply it to `f (a -> b)` to get a value of type `f b`. The only logical solution is to return `Nothing`. In the third case if we try to evaluate the first argument (i.e. `undefined`) then the program would halt which is wrong. Correct: `_ <*> Nothing = Nothing`. – Aadit M Shah Jun 10 '14 at 06:00
  • Thanks, what does the bottom case have to do here? You have shown that bottom cases in some situations will make the program halt, so how does that help in proving the laws? – Sibi Jun 10 '14 at 07:49
  • Also, at the end of the answer you have written that `undefined <*> Nothing` will result in `Nothing` which is not the case (which you have also mentioned in the previous comment in this thread.) – Sibi Jun 10 '14 at 07:50
  • Also bottom is not about halting. In fact an infinite loop is a prominent form of bottom. – Ørjan Johansen Jun 10 '14 at 07:57
  • I edited my answer explaining why the implementation differs from what is expected. Granted that `undefined <*> Nothing` makes the program halt instead of returning `Nothing`. However that should not be the case. This is a problem caused because `Applicative` is not yet a superclass of `Monad`. – Aadit M Shah Jun 10 '14 at 07:58
  • @ØrjanJohansen I know that bottom is not about halting. However in the case of a pattern match, it will make the program halt. Trying to evaluate a bottom value (even a little) will make the program halt. In the case of an infinite loop the bottom type represents an absence of data. However the program doesn't halt because you never evaluate a bottom value. – Aadit M Shah Jun 10 '14 at 08:00
  • @AaditMShah There is in ordinary Haskell (without unsafe functions that don't preserve the common semantics of bottom anyhow) no way to define a non-constant function such that both `Nothing <*> undefined` and `undefined <*> Nothing` give `Nothing`. Whether it is defined through `Monad`, `Applicative` or directly makes no difference. – Ørjan Johansen Jun 10 '14 at 08:01
  • @Sibi If the program halts then you cannot disprove the law. Hence by negative hypotheses you conclude that the law is true. It's an edge case. – Aadit M Shah Jun 10 '14 at 08:03
  • @AaditMShah The idea that an infinite loop is not a value in Haskell is a non-standard one, which I've seen argued before, and which you may hold, but which I and I think most Haskellers will not agree with. – Ørjan Johansen Jun 10 '14 at 08:04
  • @ØrjanJohansen Makes sense. You would need to evaluate either the first or the second argument. If either of them is `undefined` then you would get an error. – Aadit M Shah Jun 10 '14 at 08:09
  • I never said that an infinite loop is not a value. It is a value. It has a type. You can restrict it to any type beside bottom. However the expression will never bottom out. – Aadit M Shah Jun 10 '14 at 08:15
  • @AaditMShah Ah, sorry if I misunderstood you. – Ørjan Johansen Jun 10 '14 at 08:16
  • @AaditMShah How is that related to the Applicative-Monad superclass proposal? – David Young Jun 10 '14 at 19:52
  • @DavidYoung I stand corrected. It isn't. Hence I edited my answer. – Aadit M Shah Jun 11 '14 at 03:12

The rules that are generated by the definition of Maybe are

x :: a
---------------
Just x :: Maybe a

and

a type
-----------------
Nothing :: Maybe a

Along with

a type
------------------
bottom :: a

If these are the only rules which result in a value of type Maybe a, then we can always invert them (run them from bottom to top) in proofs, so long as we're exhaustive. This is argument by case analysis on a value of type Maybe a.

You did two case analyses, but they weren't exhaustive. It might be that u or v is actually Nothing or bottom.
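
To see concretely what exhaustiveness means here, a small sanity check (not a proof: it fixes one particular f, g and x, and it cannot cover the bottom case; `checkAll` is just a throwaway name) that enumerates all constructor combinations:

checkAll :: Bool
checkAll = and
  [ (u <*> (v <*> w)) == (pure (.) <*> u <*> v <*> w)
  | u <- [Just (+ 1), Nothing] :: [Maybe (Int -> Int)]
  , v <- [Just (* 2), Nothing] :: [Maybe (Int -> Int)]
  , w <- [Just 3, Nothing]     :: [Maybe Int]
  ]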

J. Abrahamson

A useful tool to learn when proving things about Haskell code is Agda. Here is a short proof of the statement you want to prove:

data Maybe (A : Set) : Set where
  Just : (a : A) -> Maybe A
  Nothing : Maybe A

_<*>_ : {A B : Set} -> Maybe (A -> B) -> Maybe A -> Maybe B
Just f <*> Just a = Just (f a)
Just f <*> Nothing = Nothing
Nothing <*> a = Nothing

pure : {A : Set} -> (a : A) -> Maybe A
pure a = Just a

data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x

_∘_ : {A B C : Set} ->
      (B -> C) -> (A -> B) -> A -> C
_∘_ f g = λ z → f (g z)

maybeAppComp : {A B C : Set} -> (u : Maybe (B -> A)) -> (v : Maybe (C -> B)) -> (w : Maybe C)
            -> (u <*> (v <*> w)) ≡ (((pure _∘_ <*> u) <*> v) <*> w)
maybeAppComp (Just f) (Just g) (Just w) = refl
maybeAppComp (Just f) (Just g) Nothing = refl
maybeAppComp (Just f) Nothing (Just w) = refl
maybeAppComp (Just f) Nothing Nothing = refl
maybeAppComp Nothing (Just g) (Just w) = refl
maybeAppComp Nothing (Just a) Nothing = refl
maybeAppComp Nothing Nothing (Just w) = refl
maybeAppComp Nothing Nothing Nothing = refl

This illustrates a couple of points others have pointed out:

  • Which definitions you use matters for the proof, and they should be made explicit. In my case I did not want to use Agda's libraries.
  • Case analysis is key to making these kinds of proofs.
  • In fact, the proof becomes trivial once the case analysis is done. The Agda compiler/proof system is able to fill in each case for you by unification (every case is `refl`).
nulvinge
  • Thanks, +1 for the Agda approach, although I cannot make any sense of it. :) Just curious to know: is the function `maybeAppComp` auto-generated, or did you explicitly write the 8 cases? – Sibi Jun 10 '14 at 20:25
  • Agda has an interactive mode that works in tandem with you. At first the code looks like this: `maybeAppComp u v w = ?`. Then you put the cursor on the question mark, enter the variable `u`, and press `C-c C-c`, which does the case analysis for you. If this is done for all variables you end up with `maybeAppComp (Just f) (Just g) (Just w) = ?` and so on. Then you put the cursor on the question mark and press `C-c C-a`, which instructs Agda to find a proof. It finds the trivial proof of `refl`. The only thing you explicitly have to write is the type of the proof. – nulvinge Jun 11 '14 at 06:14
  • The type of the proof is quite interesting. It says it can take any values of `u`, `v` and `w` and construct the desired proof. – nulvinge Jun 11 '14 at 06:20
  • Thanks, looks very interesting. Will try to learn this tool. :) – Sibi Jun 11 '14 at 06:24

You translated the use of (<*>) into fmap. The other answers also do some pattern matching.

Usually you need to unfold the definitions of the functions to reason about them, not just assume what they do. (You assume (pure f) <*> x is the same as fmap f x.)

For example, for Maybe (<*>) is defined as ap in Control.Applicative (or can be proven to be equivalent to it for any Monad, even if you redefine it). ap in turn comes from Monad and is defined as liftM2 id, and liftM2 is defined like so:

liftM2 f m1 m2 = do
    x <- m1
    y <- m2
    return $ f x y
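
The do-blocks below desugar into >>=; for Maybe, that is equivalent to the following (a reference sketch, the real instance lives in base and `bindMaybe` is just a name for illustration):

-- A sketch of Maybe's bind, which the do-blocks below ultimately use:
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just x) k = k x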

So, reduce both left- and right-hand sides to see they are equivalent:

u <*> (v <*> w) = liftM2 id u (liftM2 id v w)
 = do
     u1 <- u
     v1 <- do
             v1 <- v
             w1 <- w
             return $ id v1 w1
     return $ id u1 v1
 = do
     u1 <- u
     v1 <- do
             v1 <- v
             w1 <- w
             return $ v1 w1
     return $ u1 v1
 -- associativity law: (see [1])
 = do
     u1 <- u
     v1 <- v
     w1 <- w
     x <- return $ v1 w1
     return $ u1 x
 -- left identity: x' <- return x; f x'  ==  f x
 = do
     u1 <- u
     v1 <- v
     w1 <- w
     return $ u1 $ v1 w1

Now, the right-hand side:

pure (.) <*> u <*> v <*> w
 = liftM2 id (liftM2 id (liftM2 id (pure (.)) u) v) w
 = do
     g <- do
            f <- do
                   p <- pure (.)
                   u1 <- u
                   return $ id p u1
            v1 <- v
            return $ id f v1
     w1 <- w
     return $ id g w1
 = do
     g <- do
            f <- do
                   p <- return (.)
                   u1 <- u
                   return $ p u1
            v1 <- v
            return $ f v1
     w1 <- w
     return $ g w1
 -- associativity law:
 = do
    p <- return (.)
    u1 <- u
    f <- return $ p u1
    v1 <- v
    g <- return $ f v1
    w1 <- w
    return $ g w1
 -- left identity: x' <- return x; f x'  ==  f x
 = do
    u1 <- u
    v1 <- v
    w1 <- w
    return $ ((.) u1 v1) w1
 -- (f . g) x  == f (g x)
 = do
    u1 <- u
    v1 <- v
    w1 <- w
    return $ u1 $ v1 w1

That's it.
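
For reference, the common form that both sides reduce to can be written as a standalone function (a sketch; `commonForm` and the local names are just for illustration). Since the derivation above only used the monad laws, this reduction is not specific to Maybe:

commonForm :: Monad m => m (b -> c) -> m (a -> b) -> m a -> m c
commonForm u v w = do
  u1 <- u
  v1 <- v
  w1 <- w
  return (u1 (v1 w1))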

[1] http://www.haskell.org/haskellwiki/Monad_laws

Sassa NF
  • +1 This is nice, yet using the Monad laws instead of the Applicative ones makes the exercise a bit different, I think. – chi Jun 10 '14 at 15:19
  • @chi well, what do you do if `(<*>)` is defined through `ap`? I don't see it defined via `fmap` for `Maybe`; am I missing something? That's the reason I find the solutions proposed here not strictly correct. – Sassa NF Jun 10 '14 at 17:15
  • `<*>` is not defined as `ap`: rather, they must be equal when the applicative functor happens to be a monad as well. In the general case, the applicative functor is not a monad, yet `pure` and `<*>` exist and satisfy the [applicative laws](http://en.wikibooks.org/wiki/Haskell/Applicative_Functors). – chi Jun 10 '14 at 17:53
  • In the specific case of the Maybe functor, though, you are right in that it IS defined as `ap`. Yet using the monad laws to prove the applicative ones feels "wrong" in the sense that one is using a more powerful general law to prove a more specific one. It would feel nicer if `<*>` was defined more explicitly without resorting to `ap`. Alternatively, one can take the definition of `ap`, avoid assuming it satisfies the monad laws, and prove the applicative laws for Maybe using such definition. – chi Jun 10 '14 at 17:59
  • @chi sure. My main point I still would like to emphasize that one needs to look at the definition. It is possible to define `(<*>)` differently for `Maybe` (for example, using pattern-matching), but then the topic starter should have specified it - and it would become obvious, for example, that the `Nothing` case should be covered, and what to do with the `undefined` case. – Sassa NF Jun 10 '14 at 18:51