Functors & Monads: An Introduction

Following some in-person chats on a number of concepts of functional programming, my team pushed me to try to share and present some of these to a wider audience. Admittedly, finding online resources on FP that are both palatable and reasonably sized is not always easy. This article was written in December 2015 and was my best attempt—in my own perspective and with my own analogies—to talk about what lies beyond the obscure term monad by starting with functors.


First things first: functors. Several things can be called functors, but we’re sticking with Haskell’s take, in which they are things that can be mapped over. Grossly speaking, they require:

  • a type (and constructor)
  • [not required, but common] an “empty form” (you’ll see what I mean)
  • their own implementation of map, where:
    • map( f, Functor( x ) ) === Functor( f( x ) )
    • map( f, EmptyForm ) === EmptyForm

A very familiar example of this is list:

  • it is constructed via syntax: [ x ] or via a function like [].concat( x ), or List.of( x ) in Immutable.js
  • its empty form is []
  • it adequately implements map:
    • map( f, [ x ] ) === [ f( x ) ]
    • map( f, [] ) === [], which is essentially a no-op
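In plain JavaScript, Array already satisfies both of these requirements out of the box:

```javascript
const double = ( x ) => x * 2;

// map( f, Functor( x ) ) === Functor( f( x ) )
const mapped = [ 1, 2, 3 ].map( double ); // [ 2, 4, 6 ]

// map( f, EmptyForm ) === EmptyForm
const empty = [].map( double ); // []
```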

Many other types are also functors, and we recognize them because they implement map. Outside the realm of purely functional languages, we all know promises; the subtle bit is that their map is usually aliased to then and has other abilities—more on that later.

Effectively, functors are like boxes that hold values. When we map over a list, we basically provide the box with a transformation (i.e., the function f in map( f, list )), and the box takes care of applying it to the contained value(s) for us. And we can do this even if there are no values, as in the case of the empty list! map( f, [] ) will never fail; it will gracefully result in a no-op. How convenient and safe!

This holds for any functor. In the case of promises, the box knows how to apply the transformation better than we do: if the value in the box has been rejected, the box will know not to apply it; if the value hasn’t been resolved yet, it will wait patiently. Plus, as map returns a new box (i.e., no mutations) with something different inside, functors can be successively mapped over. Take that, Fortran.
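A quick sketch of that behavior with promises, where then plays the role of map and each call returns a new box rather than mutating the original:

```javascript
const p = Promise.resolve( 41 );

// Each then() returns a *new* promise; p itself is untouched.
const q = p
  .then( ( x ) => x + 1 )
  .then( ( x ) => 'answer: ' + x );

q.then( console.log ); // logs 'answer: 42'
```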

A parenthesis on abstractions

“Functor”? Meh, it’s just a dumb list and you’re overthinking this.

As we transition into the idea of monads, please keep this box analogy in mind, even though any analogy is ultimately limited. Boxes can provide isolation. Like so many other constructs, they both are and enable potentially useful abstractions. And useful abstractions, particularly in the functional world, are those that help us solve our problems more effectively and more efficiently.

Even if we don’t immediately see a use case for an abstraction (or even if we dismiss that case by saying “well, I don’t need that abstraction, I can achieve the same with this other thing I’ve always done”), the thing is that: before it is ever materialized as code, an abstraction’s first role takes place in our minds. Problems are solved through thought, and thought needs to be expressed, both to others and to oneself. So, for the rest of this article, I ask that you try to look at these abstractions for what they are and for that inherent value they hold, before considering their day-to-day applications.

Monads: functors++

Now, we think of promises as more “powerful” or “interesting” than lists. There’s a reason for that: promises are part of a more specific family: monads. All monads are functors, but they need to satisfy more laws; indeed, they are stricter kinds of boxes.

These stricter constraints make monads robust enough that we can use them to better represent computations. Don’t let the jargon discourage you, you know these well: you usually describe them using functions. A function call is how we all usually “order” a computation, so to speak. But, then again, we’re bossy and are used to micromanaging—we’ve been taught from the start that programming is about giving orders. By contrast, functional programming is more about expressing thought. And thought is malleable and composable beyond what opaque functions would allow.

How, then, are promises more capable than functors? Consider p.then( f ), where p is a promise for the answer to life, the Universe, and everything: after 7.5 million years the promise will resolve to 42. Now consider the return type of f: it can be a number (p.then( x => x * 10 )); it can be a string (p.then( x => 'The answer is ' + x )); in fact, it can be anything you’d expect from a regular function. But what if f returns a promise?

const getQuestion = ( answer ) => new Promise( … );
p.then( getQuestion ); // may take a while to resolve…

This is where the magic happens: p.then( getQuestion ) doesn’t return a promise of a promise. Instead, it will only return the latter promise, the result of waiting for the answer 42 and then waiting for the question for that answer. Functors can’t do that, only monads and their chaining abilities. We’ll come back to them.
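A minimal sketch of that flattening behavior; getQuestion here is a toy stand-in for the article's hypothetical 7.5-million-year version:

```javascript
const getAnswer = () => Promise.resolve( 42 );

// A function that itself returns a promise.
const getQuestion = ( answer ) =>
  Promise.resolve( 'Some question whose answer is ' + answer );

// then() flattens automatically: this yields a promise of a string,
// not a promise of a promise of a string.
getAnswer()
  .then( getQuestion )
  .then( ( question ) => console.log( question ) );
```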

Enough abstraction, show me examples

A common pain

Say you have this function body, which is pretty common in WordPress PHP (we’ll stick to JS syntax for the examples):

definition = getDefinition( word );
if ( is_error( definition ) ) {
  return definition;
}

if ( ! definition ) {
  return new Error( 'oh noes!' );
}

definition = translateTo( 'esperanto', definition );
if ( is_error( definition ) ) {
  return definition;
}

if ( ! definition ) {
  return new Error( 'oh noes!' );
}

if ( isTooLong( definition ) ) {
  return new Error( 'oh noes!' );
}

definition = toUppercase( definition );
return definition;


Behind this mess is the notion that many steps in our function may fail. Because of that, we need to check the pipeline at each stage to see if it’s broken, and act accordingly. This is error-prone, and also harms readability—you’re no longer expressing the idea that you’re sending an initial value that might be there (definition) and piping it through validating transformations. It’s also often associated with inconsistent or unpredictable function typing—when we fail, do we return an error object? a null? a neutral value for a certain type (e.g., 0 as the neutral for Number)?

Maybe we can solve this?

Quantum. Introducing the Maybe monad. Maybe is basically Schrödinger’s box: it represents a value that might or might not be inside; it represents a computation that might or might not have failed somewhere along the way.

Types. One word before going further: in JS or PHP, we often refer to a list as just “a list”, but, in reality, that doesn’t tell us the whole story about the type of data we have. In typed languages, you’d find the type List String to say that you have a list containing only strings. That is because List forms a type class, just like other monads—e.g., a promise expected to return a number will have the type signature Promise Number.

Maybe a definition. Back to Maybe. Looking at the example from the previous section, we see that the first statement is a dictionary lookup: getDefinition( word ). This is a lookup for something that may or may not exist in our dictionary. In typical JS or PHP, the function might return an error or a null if it fails. But why have our code diverge from the way we commonly think of this problem? If getDefinition may or may not return a definition, then we say that it returns “maybe a definition”. Assuming the definition is a string, we plainly state that the return value of getDefinition is Maybe String.

Where does that get us? Well, a Maybe String can take one of two shapes: Nothing, or Just 'some string'. The implementation of getDefinition itself can be something like:

return dictionary.has( word )
  ? Just( dictionary[ word ] )
  : Nothing();

N.B.: Some libraries may refer to Just as Some.
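For concreteness, here is one minimal way Just and Nothing could be implemented in plain JavaScript. The article doesn't assume any particular library, so treat this as an illustrative sketch (the dictionary here is a plain object rather than the Immutable.js-style dictionary.has( word ) above):

```javascript
// A value that is present: map transforms it and re-wraps it.
const Just = ( value ) => ( {
  isNothing: false,
  map: ( f ) => Just( f( value ) ),
} );

// An absent value: map is a safe no-op.
const Nothing = () => ( {
  isNothing: true,
  map: ( f ) => Nothing(),
} );

// getDefinition returns Maybe String.
const dictionary = { hello: 'a greeting' };
const getDefinition = ( word ) =>
  word in dictionary ? Just( dictionary[ word ] ) : Nothing();
```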

Previously, I said that all monads were functors. So a value of type Maybe String is something we can map over! Thus:

return getDefinition( word )
  .map( partial( translateTo, 'esperanto' ) )
  .map( ifElse( isTooLong, Nothing, Just ) )
  .map( toUpper );

Note that, if we stick to regular JS anonymous functions, the above will look like:

return getDefinition( word )
  .map( text => translateTo( 'esperanto', text ) )
  .map( text => isTooLong( text ) ? Nothing() : Just( text ) )
  .map( text => text.toUpperCase() );

Take a moment to read both of these before any explanation.

Let’s go step by step:

  • getDefinition( word ) returns a box possibly containing some text.
  • With the first map, we take that text (if it exists) and translate it. We get a new box, this time containing either translated text, or nothing.
  • With the second map, we take the text in the box (if it exists), check if it’s too long, and either explicitly return nothing, or explicitly return that text wrapped in a box (Just( text )).
  • But wait! map is already in charge of creating a new box at each step. If the function inside map also returns a box, then… we will have a box within a box. The innermost box will possibly contain text: Just( Just( 'text' ) ), or Just( Nothing() ). Bear with me.
  • You might know flatten as a function operating on lists: flatten( [ [ 1 ], [ 2 ] ] ) === [ 1, 2 ]. Notably, flatten( [ [ 1 ] ] ) === [ 1 ]. It actually works with monads in general, and you use it to merge together two consecutive layers of a monad. So flatten( Just( Just( 'hello' ) ) ) === Just( 'hello' ). So now we’re back to just dealing with one simple Maybe String.
  • The third map behaves like the first.
  • Finally, the return value of all of this is either Nothing or Just( 'SOME TRANSLATED TEXT' ).
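The list version of flatten mentioned in the steps above, sketched in plain JavaScript (modern runtimes also ship this as Array.prototype.flat):

```javascript
// Merge one level of nesting: [ [ a ], [ b ] ] -> [ a, b ]
const flatten = ( list ) =>
  list.reduce( ( acc, inner ) => acc.concat( inner ), [] );

flatten( [ [ 1 ], [ 2 ] ] ); // [ 1, 2 ]
flatten( [ [ 1 ] ] );        // [ 1 ]
```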

Nothing goes wrong. If we end up with nothing, then we know that, somewhere along the process, something failed. And that’s okay—thanks to the monad, if something fails, the chain is safely short-circuited.

Chaining is composing. Because the map then flatten pattern is so common, it is available condensed as flatMap, where myMaybe.flatMap( f ) is basically equivalent to myMaybe.map( f ).flatten(). However, flatMap is more importantly known as chain or bind, and is one of the central, defining aspects of all monads. We’re not going to cover that here (feel free to look up the monadic laws), but do prefer the term chain, because chaining is really what monads are about. So our example is really even shorter:

return getDefinition( word )
  .map( partial( translateTo, 'esperanto' ) )
  .chain( ifElse( isTooLong, Nothing, Just ) )
  .map( toUpper );
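To see the whole pipeline run end to end, here is a self-contained sketch with a hand-rolled Maybe; translateTo, isTooLong, and getOrElse are toy stand-ins for the article's hypothetical helpers, not a real library's API:

```javascript
const Just = ( value ) => ( {
  isNothing: false,
  map: ( f ) => Just( f( value ) ),
  chain: ( f ) => f( value ),
  getOrElse: ( fallback ) => value,
} );

const Nothing = () => ( {
  isNothing: true,
  map: ( f ) => Nothing(),
  chain: ( f ) => Nothing(),
  getOrElse: ( fallback ) => fallback,
} );

// Toy stand-ins for the article's helpers:
const dictionary = { life: 'the answer is 42' };
const getDefinition = ( word ) =>
  word in dictionary ? Just( dictionary[ word ] ) : Nothing();
const translateTo = ( lang, text ) => text; // identity stand-in
const isTooLong = ( text ) => text.length > 100;

const result = getDefinition( 'life' )
  .map( ( text ) => translateTo( 'esperanto', text ) )
  .chain( ( text ) => ( isTooLong( text ) ? Nothing() : Just( text ) ) )
  .map( ( text ) => text.toUpperCase() );

result.getOrElse( 'no definition' ); // 'THE ANSWER IS 42'
```

A missing word short-circuits the same pipeline to Nothing, with no branching in sight.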

Either this or that

You may argue that the solution above isn’t truly equivalent to the initial imperative snippet because we have no way to tell what went wrong if we end up with a Nothing. That is true, but Maybe felt like the simplest monad to start with. Other monadic solutions can address that, like Either—roughly speaking, a synchronous version of Promise—which I’ll explain at the request of readers.

The point of all this is not to achieve things that would be impossible otherwise; ultimately, any problem can be solved in Assembly or Brainfuck. But look at the last snippet again. Here’s some of what we can easily gain:

  • Expressiveness. We basically say what things are rather than how to build them down to the atom.
  • Less manual coding, less manual checking of the details that may go wrong.
  • Thus, fewer coding errors.
  • Less code complexity. In our example, explicit branching disappeared entirely.
  • Composability and reusability. There is so little that we need to rewrite!
    • We didn’t need to rewrite functions to make them work with monads—we simply mapped.
    • Everything is still function-based, thus we can leverage compose and other higher-order friends.
    • Though it feels like a lot to take in at the moment, these concepts are actually not that many, and they will always be around for reuse. Better abstractions mean less time cooking up data structures for specific purposes. We’re trading in the object-oriented, the dedicated helper modules, the handcrafted traversal algorithms, etc. for the maybes, the promises, the simple data models made out of lists and maps and simple type algebra, the battle-tested generic data-traversal functions from FP libraries, etc.


λ ♥︎. In the end, I do believe “functional programming” allows one to achieve more. However, specific tools and mechanisms only go so far: a lot of the benefits are to be reaped from its concepts, its ontology, its systematic approach to understanding and describing the world and programming itself.

Growing up bilingual, my ability to express an idea in different languages was only a product of the underlying seamless switching between mental profiles. Later in life, as a polyglot, I found that the real asset I’d gained was an intuitive grasp of metalinguistics—why and how languages work, what brings them together and what sets them apart, how to infer and extrapolate their mechanisms. Programming and theory then came to complement this.

Taking an interest in functional programming can be more than a utilitarian project. Like the European individual learning Japanese, the exposure to FP as an expression paradigm can be impactful enough that it shakes the foundations of what one thinks programming is.

Meanwhile, I hope I managed to get some of you excited about the subject. Any feedback will make me very happy.

Author: Miguel Fonseca

Code Wrangler at Automattic. A passionate cyclist, Swing dancer, tree climber and headbanger.
