Or: The List Monad
Last year, I introduced monads under a more “intuitive” light: focus was placed on the semantics of monads rather than formal definitions, turning to the Maybe monad as a first contact. What follows assumes you have read that article.
When discussing functions and monads, we leaned on the analogy of a box that contained our values—an object of type Maybe is a box that may or may not contain a value in it. The box provides context for the value.
As in so many aspects of life, context is key. A monad represents a computation described by a combination of context and value. This pompous term should become less opaque as we go over a few instances:
Maybe: the context is that there may or may not be a value, therefore one might consider that the monad represents a computation that, by nature, can fail.
Promise: it is safe to assume you’ve run into this one. It represents a computation that is expected not to be instantaneous, and that may fail in its attempt at fulfilment. The context is the state of that computation: pending, fulfilled (resolved), or rejected.
List: we’ve seen how lists constitute a functor. We can now look further and see how they constitute a monad (reminder: a monad is a specialised type of functor). Lists are generally interchangeably thought of as sequences, sets, or any other categorisation of a multitude of objects. However, in the broader world of programming, the things that a list holds tend to be limited to pieces of data prime for consumption—e.g., a collection of posts, where each item is a tangible post. The List monad proposes something more interesting: what if we used lists to represent the different possible outcomes of one particular computation?
As I was taught in school, a function maps any given α to at most one value f(α): perhaps none, but never more than one. This example brings me to the idea that, to many, Mathematics involves precision, directness, immediacy. These touted qualities are real, but so is versatility: Maths can accommodate more complex scenarios, including modelling the uncertain.
Consider Math.random. It is quite the opposite of referentially transparent: every call to Math.random() returns a presumably different real number between 0 and 1. Of course, in the case of Math.random, there is a mathematical infinity of possible results. So let that sink in for a while as we look at a simplification of this problem.
Consider coin flipping. It has two possible outcomes: heads and tails. An impure implementation of a flipper would look like:
const flip = () => Math.random() < 0.5 ? 'heads' : 'tails'
Nice and easy, but how could flip be implemented in a pure fashion? Two alternative solutions follow.
Algebra—unlike the number crunching and other calculations we expect computers to perform for us—is concerned with symbols and the rules for manipulating them. Algebra is a powerful layer of abstraction between numbers (and functions, relations, etc.) and their value in our physical world. It frees us from worldly dimensions, allowing us to discuss and study new classes of numbers.
1/0           -> Infinity
Infinity * -1 -> -Infinity
Infinity * 0  -> NaN
Infinity is a concept that a computer, constrained by bounded space and time resources, could never process numerically. But here the virtual machine (VM) is aware of a special entity, Infinity, and of a set of rules to support operations with it as an operand.
Similarly, we could imagine a structure OneOf such that:
const flip = () => OneOf('heads', 'tails')
Simulating multiple coin flips might look like the following:
// conjunction operation
OneOf('heads', 'tails')
  .and(OneOf('heads', 'tails'))

// evals to
OneOf(
  ['heads', 'heads'],
  ['heads', 'tails'],
  ['tails', 'heads'],
  ['tails', 'tails'])
That is, flipping a coin twice has four possible outcomes: all the possible combinations of (heads OR tails) AND (heads OR tails).
OneOf is implementable, and here’s a similar sandbox. I also have a related implementation of list shuffling based on the same concept. However, the point of this article is to show off the ease and conciseness of the list monad for modelling such problems, which leads us to the next section.
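Since the sandbox isn’t reproduced here, a minimal sketch of what OneOf could look like follows. The names and the API are assumptions inferred from the usage above, not the sandbox’s actual code:

```javascript
// Hypothetical OneOf: holds the possible outcomes of one computation.
const OneOf = (...outcomes) => ({
  outcomes,
  // Conjunction: pair every outcome of this OneOf with every outcome of
  // the other, yielding the cartesian product of possibilities.
  and: (other) => OneOf(...outcomes.flatMap(
    (a) => other.outcomes.map((b) => [].concat(a, b))
  ))
})

const twoFlips = OneOf('heads', 'tails').and(OneOf('heads', 'tails'))
// twoFlips.outcomes holds the four combinations shown above
```

The essential move is the cartesian product inside `and`: combining two sets of possibilities means pairing every element of the first with every element of the second.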
Previously, we sought to implement a specific structure to represent a coin flip. How tedious! Behold, a pure coin-flipping function:
const flip = () => [ 'heads', 'tails' ]
Done. Yes, really.
Now, because we want to combine multiple coin flips, we’ll nest this flip in another list, thus representing a sequence of coin flips so far comprising a single flip:
const flip = () => [ [ 'heads' ], [ 'tails' ] ]
A slight non-essential clarification of the above: the outer list contains all the possible outcomes of our coin tosses. Since we have only thrown the coin once, this list expectedly contains two items. These items, i.e. the inner lists, represent each possible sequence of coin tosses. Again, since we have only thrown the coin once, each individual sequence only contains one item. Thus, the outer and inner lists have quite different semantics; ideally the host programming language would offer some typing to accommodate this, but for this article we are intentionally stretching the affordances of the simple list type.
The list monad encodes a superposition of possible states. Now, you’ll hopefully remember that monads excel at composition, particularly at what is known as chaining or binding. Well, throwing a coin twice now amounts to chaining two throws:
const flip = (previous = []) => [
  [ ...previous, 'heads' ],
  [ ...previous, 'tails' ]
]

const flipTwice = () => flip().chain(flip)

flipTwice()
-> [ [ 'heads', 'heads' ],
     [ 'heads', 'tails' ],
     [ 'tails', 'heads' ],
     [ 'tails', 'tails' ] ]
The conciseness should feel almost magical. Here’s a sandbox you can play with.
Remember: each monad has its own way of implementing chaining, but typically chaining amounts to joining the result of fmapping a function over a monad.
fmap is a generalization of map. In the case of the list monad, here’s how we can define chaining:
join = flatten
fmap = map
chain = (fn, monad) => join(fmap(fn, monad))

// equivalent to
chain = (fn, listMonad) => flatten(map(fn, listMonad))
It’s a good exercise to process the above and mentally go over what happened when chaining coin flips.
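To make the exercise concrete, the definitions above can be run with plain arrays. This is a sketch, with Array.prototype.flatMap playing the role of join ∘ fmap in one step:

```javascript
// On plain arrays, flatMap is exactly join . fmap: map the function over
// the list, then flatten the result by one level.
const chain = (fn, list) => list.flatMap(fn)

// Pure coin flip: extends each previous sequence with both outcomes.
const flip = (previous = []) => [
  [...previous, 'heads'],
  [...previous, 'tails']
]

const once   = flip()               // 2 possible sequences
const twice  = chain(flip, once)    // 2 · 2 = 4 possible sequences
const thrice = chain(flip, twice)   // 2 · 2 · 2 = 8 possible sequences
```

Tracing `chain(flip, once)` by hand: `flip` is mapped over `[['heads'], ['tails']]`, yielding two lists of two sequences each, and flattening merges them into the four possible two-flip sequences.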
It’s even more interesting to consider what chaining means. If a list represents an indeterminate state by listing all the possible ones, then chaining that with an operation that itself generates more indeterminate states implies combining all the possible outcomes. Thus we multiply the number of outcomes: each coin flip yields 2 outcomes, so three coin flips yield 2 · 2 · 2 = 8 outcomes. As a bonus, here’s an equivalent implementation using lift, which simplifies our flip down to () => [ [ 'h' ], [ 't' ] ].
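The lift-based sandbox isn’t reproduced here, but one plausible reading of it (an assumption, not necessarily the sandbox’s exact code) is a lift2 helper that lifts a plain binary function into the list monad:

```javascript
// Hypothetical lift2: lift a plain binary function into the list monad by
// applying it to every combination of outcomes from the two lists.
const lift2 = (fn) => (xs, ys) =>
  xs.flatMap((x) => ys.map((y) => fn(x, y)))

// flip no longer needs to know about previous sequences.
const flip = () => [['h'], ['t']]

// Combining two flips is just lifting sequence concatenation.
const combine = lift2((a, b) => [...a, ...b])
const twoFlips = combine(flip(), flip())
// -> [ ['h', 'h'], ['h', 't'], ['t', 'h'], ['t', 't'] ]
```

The design win is separation of concerns: `flip` only describes one flip’s outcomes, and the lifted concatenation takes care of combining possibilities.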
Using lists to represent superposed states can perhaps be compared to probability trees:
The notable difference from trees is that our list representation doesn’t express the weights or probabilities of each outcome. This is adequate as long as all outcomes are equiprobable—as was our coin toss assumed to be—or if otherwise the point is to represent all outcomes equally.
Our monad is ill-equipped for probability calculation, but a more sophisticated monad needn’t be. Remember that a monad consists of a value with a context; this context could bear a factor for weight or probability. The following is achievable:
flip()
-> [ [ 0.5, 'Heads' ], [ 0.5, 'Tails' ] ]

flip().chain(flip)
-> [ [ 0.25, 'Heads', 'Heads' ],
     [ 0.25, 'Heads', 'Tails' ],
     [ 0.25, 'Tails', 'Heads' ],
     [ 0.25, 'Tails', 'Tails' ] ]
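One way to sketch this, as a toy weighted representation rather than a full probability monad, is to keep the probability as the first element of each outcome and multiply weights when chaining:

```javascript
// Toy weighted outcomes: each outcome is [probability, ...values].
// A sketch only; a real probability monad would hide this bookkeeping.
const flip = (previous = [1]) => {
  const [p, ...values] = previous
  return [
    [p * 0.5, ...values, 'Heads'],
    [p * 0.5, ...values, 'Tails']
  ]
}

// Same chain as before: map then flatten one level.
const chain = (fn, list) => list.flatMap(fn)

flip()
// -> [ [0.5, 'Heads'], [0.5, 'Tails'] ]
chain(flip, flip())
// -> [ [0.25, 'Heads', 'Heads'], [0.25, 'Heads', 'Tails'],
//      [0.25, 'Tails', 'Heads'], [0.25, 'Tails', 'Tails'] ]
```

Because each flip multiplies the incoming weight by 0.5, chaining automatically computes the probability of each full sequence.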
Returning to Earth
The point of this article isn’t to argue that pure functions for randomness are better or that Math.random is bad. Quite orthogonally, it aims to bring a new perspective with a tool that is simultaneously powerful and simple—almost crude, like a blunt instrument.
The previous article, focusing on the Maybe monad, revealed that having a first-class representation of a particular kind of computation is a step towards added composability, less manual stitching and room for human error, and better expressiveness—which in the right programming languages can translate into intelligent type safety. Such languages often also natively implement some form of lazy evaluation, making large indeterminate states actually scalable.
Where the Maybe monad absorbed some explicit control structures (i.e. chaining Maybes eliminated the need for a series of return statements), the List can go further in this trend that I would like to call “delaying the effects”. A program can remain predictable for longer—and, subsequently, testable, analysable, etc.—if it can keep operating on pure values, both plain and monadic, and if the effects can be pushed to the outer layers of the program. In the case of randomness, we postpone the effect, or the decision of picking one of the outcomes. Once again, Promises are a good example which should be familiar to those who work with modern Web technologies; think of how much explicit control they absorb, especially when compared to callbacks.
Lists are a stepping stone in understanding other monads such as I/O and State. Beneath their quasi-mystical—and definitely elitist—aura lie relatively simple solutions to the problem of encapsulating effects.
In closing, I’ll note that the observations from the previous article still ring very true to me in justifying why one should ever care about these patterns:
[…] the exposure to FP as an expression paradigm can be impactful enough that it shakes the foundations of what one thinks programming is.
Written with WordPress’s Gutenberg editor.