State monad recap
We've seen previously that the state monad can be used to create computations in which each step produces a result and also updates a running state. For example, we can create a computation that maintains state on a stack, like this:
```fsharp
let comp =
    state {
        let! a = popC
        do! pushC 5
        do! pushC 8
        let! b = popC
        return a - b
    }
```
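For reference, here is one way the supporting pieces might be defined. This is a minimal sketch based on how they're used above (the post doesn't show these definitions), with a plain F# list standing in for the `Stack` type:

```fsharp
// Assumed definition: a stateful computation is a function from an
// initial state to a result paired with the updated state.
type Stateful<'state, 'result> =
    Stateful of ('state -> 'result * 'state)

module Stateful =

    // Runs a stateful computation on an initial state, returning
    // the result and the final state.
    let run state (Stateful f) = f state

// Pops the top value off the stack and returns it as the result.
let popC =
    Stateful (fun stack ->
        match stack with
        | top :: rest -> top, rest
        | [] -> failwith "empty stack")

// Pushes a value onto the stack; the result is unit.
let pushC value =
    Stateful (fun stack -> (), value :: stack)
```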
In English, here's what this computation does:
- Pops the top value off the stack and saves the result in a variable called `a`.
- Pushes 5 onto the stack.
- Pushes 8 onto the stack.
- Pops the top value off the stack (which we know is 8) and saves the result in a variable called `b`.
- Returns the value of `a - b` as the final result of the computation.
We can run the computation on a particular stack like this:
```fsharp
let stack = [9; 0; 2; 1; 0] |> Stack.ofList
Stateful.run stack comp
```
The result is `9 - 8 = 1` and the final state of the stack is `[5; 0; 2; 1; 0]`.
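Tracing the stack through each step of `comp` (top of the stack on the left) shows where the result and the final state come from:

```fsharp
// start:            stack = [9; 0; 2; 1; 0]
// let! a = popC  -> a = 9,  stack = [0; 2; 1; 0]
// do! pushC 5    ->         stack = [5; 0; 2; 1; 0]
// do! pushC 8    ->         stack = [8; 5; 0; 2; 1; 0]
// let! b = popC  -> b = 8,  stack = [5; 0; 2; 1; 0]
// return a - b   -> result = 9 - 8 = 1
```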
Reverse state monad
It turns out that we can create a similar monad in which the results (i.e. named values within the computation) continue to flow as expected, but changes to the state occur in reverse order, almost as if they are traveling backwards in time! I don't think there are many practical uses for such a computation - it's just interesting to think about.
Let's start out by reversing the same computation we walked through above:
```fsharp
let rcomp =
    rstate {
        let! a = popC
        do! pushC 5
        do! pushC 8
        let! b = popC
        return lazy (a.Value - b.Value)
    }
```
This looks almost identical, except we're using a computation builder for the reverse state monad (`rstate`) instead of the `state` builder. However, one very important difference is that all of the results are lazy values. This is crucial for avoiding infinite regress. For example, the results `a` and `b` are both `int`s in the original example, but here they both have type `Lazy<int>` instead. We're free to use these results lazily within the computation, but we can't do anything that would trigger their evaluation prematurely. This is why the return value is carefully written to compute `a - b` lazily. The complete computation has type `RStateful<Stack<int>, Lazy<int>>`, which means that it's a reverse state computation that works on a stack of integers and returns a lazy integer as its final result.
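As a point of reference, the `RStateful` type could plausibly have the same shape as the forward version, with the laziness living in the result type rather than in the state transition itself. This is a sketch inferred from how `RStateful.run` is used below, not the post's actual definition:

```fsharp
// Assumed definition: structurally identical to Stateful; what makes it
// "reverse" is how bind threads the state, not the type itself.
type RStateful<'state, 'result> =
    RStateful of ('state -> 'result * 'state)

module RStateful =

    // Runs a reverse state computation on an initial state.
    let run state (RStateful f) = f state
```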
Here's what happens when we execute this computation:
- Since state flows in reverse order, first the top value is popped off the stack and assigned to `b`.
- Moving backwards, `8` is pushed onto the stack.
- Then `5` is pushed onto the stack.
- The top value is popped off the stack (which must be 5) and assigned to `a`.
- Lastly, we return `a - b` lazily.
Remember: Results flow down, but state flows up.
We can run this computation on the same input stack as before:
```fsharp
let stack = [9; 0; 2; 1; 0] |> Stack.ofList
RStateful.run stack rcomp // reverse state monad
```
The result is `5 - 9 = -4` and the final state of the stack is `[8; 0; 2; 1; 0]`.
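Tracing the state transitions in reverse order (last statement's transition applied first, top of the stack on the left) confirms both the result and the final state:

```fsharp
// start:            stack = [9; 0; 2; 1; 0]
// let! b = popC  -> b = 9,  stack = [0; 2; 1; 0]
// do! pushC 8    ->         stack = [8; 0; 2; 1; 0]
// do! pushC 5    ->         stack = [5; 8; 0; 2; 1; 0]
// let! a = popC  -> a = 5,  stack = [8; 0; 2; 1; 0]
// return ...     -> result = a - b = 5 - 9 = -4
```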
How!?
This seems ridiculous, but it works fine. The key, as usual, is in the `bind` method:
```fsharp
let bind binder rstateful =
    RStateful (fun state ->
        let rec resultAfinal = lazy (run intermediate.Value rstateful)
        and resultA = lazy (fst resultAfinal.Value)
        and final = lazy (snd resultAfinal.Value)
        and resultBintermediate = lazy (run state (binder resultA)) // lazy result passed to binder
        and resultB = lazy (fst resultBintermediate.Value)
        and intermediate : Lazy<_> = lazy (snd resultBintermediate.Value)
        resultB.Value, final.Value)
```
This is considerably more complex than the corresponding `bind` for the normal state monad, so I'm not going to walk through it in detail. The important thing to notice is that the initial `state` is passed to the second step (`binder`), and then the `intermediate` state is passed to the first step (`rstateful`). This is the reverse of how the normal state monad works. However, the result of the first step (`resultA`) is still passed to the second step, albeit lazily, and then the result of that step (`resultB`) is returned.
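For completeness, the remaining pieces needed to support the `rstate` builder might look something like this. Only `bind` appears in the post; the names and shapes below are my assumption:

```fsharp
// Assumed definition: wraps a plain result without touching the state.
let returnM result =
    RStateful (fun state -> result, state)

// Assumed builder: note that Bind hands the binder a lazy result,
// matching the Lazy<_> values seen in the rcomp example.
type RStateBuilder() =
    member _.Bind(rstateful, binder) = bind binder rstateful
    member _.Return(result) = returnM result

let rstate = RStateBuilder()
```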
Again, I want to emphasize that there's not much practical reason to do this. It exists because it can be done, and that's enough for me. :)
References
Crazy functional programming ideas usually come out of Haskell, and this one is no exception. There's not a lot of information about it, but here are some links that might be useful: