Normally in Haskell we define `Monad`s in terms of `return` and `>>=`. Sometimes it's convenient to decompose `>>=` into `fmap` and `join`. The `Monad` laws for these two formulations are well known and fairly intuitive, once you get used to them.
There's another way to define monads, in terms of an `Applicative` functor:

```haskell
class Applicative f => MyMonad f where
  myJoin :: f (f a) -> f a
```
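For concreteness, here is a small runnable sketch of the class with two illustrative instances; the `Maybe` and list instances are my own additions, not part of the question:

```haskell
class Applicative f => MyMonad f where
  myJoin :: f (f a) -> f a

-- Illustrative instance (my own): collapse one layer of Maybe.
instance MyMonad Maybe where
  myJoin (Just m) = m
  myJoin Nothing  = Nothing

-- Illustrative instance (my own): flattening nested lists is list join.
instance MyMonad [] where
  myJoin = concat
```

For example, `myJoin (Just (Just 3))` evaluates to `Just 3`, and `myJoin [[1,2],[3]]` to `[1,2,3]`, just like `join` from `Control.Monad`.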
I'm wondering about the laws for this kind of formulation. Obviously, we could just adapt the `fmap` + `join` laws, as follows (I am not sure the names are particularly apt, but oh well):

```haskell
myJoin . myJoin = myJoin . (pure myJoin <*>)    -- 'Associativity'
myJoin . pure = myJoin . (pure pure <*>) = id   -- 'Identity'
```
Clearly these conditions are sufficient for `pure`, `(<*>)`, and `myJoin` to form a monad (in the sense that they guarantee that ``m `myBind` f = myJoin (pure f <*> m)`` will be a well-behaved `>>=`). But are they necessary as well? It seems at least possible that the additional structure that `Applicative` supports above and beyond `Functor` might allow us to simplify these laws -- in other words, that some feature of the above laws might be otiose given that it is known that `pure` and `(<*>)` already satisfy the `Applicative` laws.
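To make the derived bind concrete, here is ``myBind`` as runnable code, paired with a `Maybe` instance of the class as an illustrative assumption (the instance is mine, not from the question):

```haskell
class Applicative f => MyMonad f where
  myJoin :: f (f a) -> f a

-- Assumed illustrative instance for Maybe.
instance MyMonad Maybe where
  myJoin (Just m) = m
  myJoin Nothing  = Nothing

-- The derived bind: map the Kleisli arrow over the structure
-- (using pure/<*> in place of fmap), then collapse the double layer.
myBind :: MyMonad f => f a -> (a -> f b) -> f b
myBind m f = myJoin (pure f <*> m)
```

For `Maybe`, ``Just 4 `myBind` (\x -> Just (x + 1))`` gives `Just 5`, and ``Nothing `myBind` f`` stays `Nothing`, agreeing with the standard `>>=`.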
(In case you're wondering why we'd even go to the trouble of bothering with this formulation given either of the two standard possibilities: I'm not sure it's all that useful or perspicuous in programming contexts, but it turns out to be so when you use `Monad`s to do natural language semantics.)