
Consider the following code segment:

import Control.Monad.State

type Stack = [Int]

pop :: State Stack Int 
pop = state $ \(x:xs) -> (x, xs)

push :: Int -> State Stack () 
push a = state $ \xs -> ((), a:xs)

stackManip :: State Stack Int
stackManip = do 
    push 3 
    a <- pop 
    pop

As we know, do notation is just syntactic sugar for the >>= operator, so we can rewrite this segment as:

push 3 >>= (\_ ->
pop >>= (\a ->
pop))

The final value of this expression seems unrelated to `push 3` and the first `pop`: no matter what the input is, it just returns the result of the last `pop`. So it seems it never actually puts the value 3 onto the stack and pops it off. Why does this happen?


Thanks for your replies. I have added the missing code (the implementations of Stack, push, and pop) above, and I think I have figured out how it works. The key to understanding this code is understanding the implementation of the State s monad:

instance Monad (State s) where 
    return x = State $ \s -> (x, s) 
    (State h) >>= f = State $ \s -> let (a, newState) = h s 
                                        (State g) = f a
                                    in g newState

What >>= does is pass a state to the function h, compute the new state, and then pass that new state to the function g hidden inside f, obtaining an even newer state. So the state actually does change, just implicitly.
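This instance is the classic pedagogical definition; in the modern mtl library, `State s` is a type synonym for `StateT s Identity`, so the snippet above won't compile against the library type as written. A self-contained sketch with its own newtype (renamed `State'` here to avoid a clash) shows the implicit state threading at work:

```haskell
-- A self-contained sketch of the hand-rolled State monad. The real
-- mtl library defines State differently, so this newtype is renamed
-- State' to avoid any clash with Control.Monad.State.
newtype State' s a = State' { runState' :: s -> (a, s) }

instance Functor (State' s) where
    fmap f (State' h) = State' $ \s -> let (a, s') = h s in (f a, s')

instance Applicative (State' s) where
    pure x = State' $ \s -> (x, s)
    State' hf <*> State' hx = State' $ \s ->
        let (f, s')  = hf s
            (x, s'') = hx s'
        in (f x, s'')

instance Monad (State' s) where
    State' h >>= f = State' $ \s ->
        let (a, newState) = h s    -- run the first computation
            State' g = f a         -- pick the next one based on its result
        in g newState              -- thread the updated state into it

pop' :: State' [Int] Int
pop' = State' $ \(x:xs) -> (x, xs)

push' :: Int -> State' [Int] ()
push' a = State' $ \xs -> ((), a:xs)

main :: IO ()
main =
    -- the stack is threaded through >>= behind the scenes
    print (runState' (push' 3 >> pop' >> pop') [5,8,2,1])  -- (5,[8,2,1])
```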

Tri
  • No time for a detailed answer, but it's essentially because of how the State monad's `>>=` is defined. It ensures that when the state is altered by one expression, the updated value is passed to the next one. – Robin Zigmond Mar 02 '20 at 07:18
  • How are `Stack`, `push`, and `pop` defined? – Mark Seemann Mar 02 '20 at 08:30
  • What do you mean, "it won't"? Of course it will; you wrote as much in your code. You first push 3, then pop it (negating the push, yes, but why should the compiler care? In any case it's unobservable, except by examining the compiler-produced code), then do another pop, the value of which is returned (presumably, that is how `pop` is defined). So the equivalent snippet is `push 3 >>= (\() -> pop >>= (\3 -> pop))` (probably, if that's how `push` is defined). What exactly is your question? Haskell is YAPL; you the programmer are in charge. – Will Ness Mar 02 '20 at 10:55
  • `>>=` does not just return the second thing! – user253751 Mar 02 '20 at 13:30
  • The whole point of the `State` monad is to hide the explicit passing of the state (in this case, the stack being manipulated). – chepner Mar 02 '20 at 14:03
  • The final value of the expression is a state transformer which takes an initial stack, pushes 3 on to it, pops a first value, pops a second value, then returns the first value and the resulting stack. It is *absolutely* related to `push 3`. – chepner Mar 02 '20 at 14:06
  • @chepner I think you're mistaken there. It does return the value from the *second* pop, AFAICS; it ignores the value from the first pop (which is the same 3 it had just pushed prior to the first pop). – Will Ness Mar 02 '20 at 15:47
  • Yeah, I was thinking the `a` was bound for a reason, but it never gets used. – chepner Mar 02 '20 at 15:53
  • What observations do you make that support the claim "it won't firstly put the value 3 into the stack and pop it"? What does "this" refer to in "why would this happen"? – Daniel Wagner Mar 02 '20 at 19:27

2 Answers


Because of the Monad Laws, your code is equivalent to

stackManip :: State Stack Int
stackManip = do 
    push 3 
    a <- pop    -- a == 3
    r <- pop
    return r

So you push 3, pop it, ignore the popped 3, pop another value, and return that.
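To check the equivalence concretely (reusing the `pop` and `push` definitions from the question), both spellings compute the same state transformer:

```haskell
import Control.Monad.State

type Stack = [Int]

pop :: State Stack Int
pop = state $ \(x:xs) -> (x, xs)

push :: Int -> State Stack ()
push a = state $ \xs -> ((), a:xs)

-- The do-notation version from this answer:
viaDo :: State Stack Int
viaDo = do
    push 3
    a <- pop    -- a == 3, never used
    r <- pop
    return r

-- The desugared version from the question:
viaBind :: State Stack Int
viaBind = push 3 >>= \_ -> pop >>= \a -> pop

main :: IO ()
main = do
    print (runState viaDo   [5,8,2,1])  -- (5,[8,2,1])
    print (runState viaBind [5,8,2,1])  -- (5,[8,2,1])
```

Both runs push 3 onto [5,8,2,1], pop it back off, then pop (and return) the 5: the `push 3` very much happens, it is just undone by the first `pop`.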

Haskell is just another programming language. You, the programmer, are in control. Whether the compiler skips the inconsequential instructions is up to it, and it is unobservable anyway (except by examining the compiler-produced code, or by measuring the heat of the CPU as it runs your code, which might be a bit hard to do in a server farm beyond the Arctic Circle).

Will Ness

Monads in Haskell are sometimes referred to as "programmable semicolons". It's not a phrase I find particularly helpful in general, but it does capture the way that expressions written with Haskell's do notation have something of the flavour of imperative programs. And in particular that the way the "statements" in a do block get combined is dependent on the particular monad being used. Hence "programmable semicolons" - the way successive "statements" (which in many imperative languages are separated by semicolons) combine together can be changed ("programmed") by using a different monad.

And since do notation is really just syntactic sugar for building up an expression from others using the >>= operator, it's the implementation of >>= for each monad that determines what its "special behaviour" is.

For example, the Monad instance for Maybe allows one, as a rough description, to work with Maybe values as if they are actually values of the underlying type, while ensuring that if a non-value (that is, Nothing) occurs at any point, the computation short-circuits and Nothing will be the overall result.
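For instance (`safeDiv` here is a made-up helper, not a library function):

```haskell
-- A sketch of Maybe's short-circuiting: safeDiv is a hypothetical
-- helper that fails (returns Nothing) on division by zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Maybe Int
calc = do
    a <- safeDiv 100 5   -- Just 20
    b <- safeDiv a 0     -- Nothing: everything after this is skipped
    safeDiv b 2

main :: IO ()
main = print calc  -- Nothing
```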

For the list monad, every line actually gets "executed" multiple times (or none) - once for each element in the list.
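A small illustration of that behaviour:

```haskell
-- Each line "runs" once per element of its list, so the do block
-- below produces every (x, y) combination, like nested loops.
pairs :: [(Int, Char)]
pairs = do
    x <- [1, 2]
    y <- ['a', 'b']
    return (x, y)

main :: IO ()
main = print pairs  -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
```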

And for values of the State s monad, these are essentially "state manipulation functions" of type s -> (a, s) - they take an initial state, and from that compute a new state as well as an output value of some type a. What the >>= implementation - the "semicolon" - does here* is simply ensure that, when one function f :: s -> (a, s) is followed by another g :: s -> (b, s), the resulting function applies f to the initial state and then applies g to the state computed from f. It's basically just function composition, slightly modified so as to also allow us to access an "output value" whose type is not necessarily related to that of the state.

And this allows one to list various state manipulation functions one after another in a do block and know that the state at each stage is exactly that computed by the previous lines put together. This in turn allows a very natural programming style where you give successive "commands" for manipulating the state, yet without actually doing destructive updates, or otherwise departing from the world of pure functions and immutable data.
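To make the "function composition" view concrete, here is the question's push-then-pop-twice pipeline with the state threaded entirely by hand, no monad involved (`popFn` and `pushFn` are just the raw s -> (a, s) functions):

```haskell
-- Threading the state explicitly, by hand: this is exactly the
-- plumbing that State's >>= hides from us.
type Stack = [Int]

popFn :: Stack -> (Int, Stack)
popFn (x:xs) = (x, xs)

pushFn :: Int -> Stack -> ((), Stack)
pushFn a xs = ((), a:xs)

-- "push 3; pop; pop", composed explicitly:
manual :: Stack -> (Int, Stack)
manual s0 =
    let (_, s1) = pushFn 3 s0   -- state s0 -> s1
        (_, s2) = popFn s1      -- state s1 -> s2
    in popFn s2                 -- state s2 -> final state
</test>

main :: IO ()
main = print (manual [5,8,2,1])  -- (5,[8,2,1])
```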

*strictly speaking, this isn't >>= but >>, an operation which is derived from >>= but ignores the output value. You may have noticed that in the example I gave the a value output by f is totally ignored - but >>= allows that value to be inspected and to determine which computation to do next. In do notation, this means writing a <- f and then using a later. This is actually the key thing which distinguishes Monads from their less powerful, but still vital, cousins (notably Applicative functors).
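As a small illustration of that extra power (reusing `pop` and `push` from the question), here the result of one computation decides what the next computation does - exactly the dependence that >>= offers and Applicative's <*> cannot express:

```haskell
import Control.Monad.State

type Stack = [Int]

pop :: State Stack Int
pop = state $ \(x:xs) -> (x, xs)

push :: Int -> State Stack ()
push a = state $ \xs -> ((), a:xs)

-- The popped value is inspected and used to build the next
-- computation: this is the essence of >>= over <*>.
doubleTop :: State Stack ()
doubleTop = pop >>= \a -> push (a * 2)

main :: IO ()
main = print (runState doubleTop [5,8,2,1])  -- ((),[10,8,2,1])
```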

Robin Zigmond