Functors, Applicatives, and Monads: The Scary Words You Already Understand
21 points by nemin
Having learned Haskell before using Rust, Gleam, or Elm, I can relate to the steep learning curve of the mathematical jargon used everywhere in Haskell. As I was learning I constantly had to mentally rename the category theory concepts into more relatable programming terms. I stumbled far more than necessary over what were effectively just names for abstractions I already understood but didn't know had specialized terms.
Terms like monad, monoid, functor, bifunctor, applicative, and arrow are all incredibly alienating if you are not already familiar with them.
However, having now learned and understood the bounds of these abstractions and their associated functions and infix operators in Haskell, I GREATLY prefer their ability to be used with entire classes of types (typeclasses) over the gleam/rust/elm way of having per-type functions (Map.map, Array.map, Option.map, etc.). In non-Haskell FP languages I frequently want to use a generalized pattern, say Applicative, to chain transformations through a Maybe/Option here and a Result/Either there, but keep getting caught up trying to remember exactly which type I'm working with so I can call the associated function, or triggering the LSP to find it. This is, in essence, the power of typeclass abstraction that languages like Haskell provide: lots of differently "shaped" structures start to look and behave the same conceptually when you apply things to the data passing through them.
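To make the typeclass point concrete, here's a minimal Haskell sketch (the `double` helper is my own illustration, not from the comment): one `fmap` serves every Functor, where Gleam/Rust/Elm would reach for Option.map, Result.map, and List.map separately.

```haskell
-- One generic function works for every Functor instance,
-- instead of one map function per concrete type.
double :: Functor f => f Int -> f Int
double = fmap (* 2)

main :: IO ()
main = do
  print (double (Just 3))                        -- Just 6
  print (double (Right 3 :: Either String Int))  -- Right 6
  print (double [1, 2, 3])                       -- [2,4,6]
```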
I'm convinced if I learned gleam/elm/rust first I would have a different impression, but having gone through the learning-through-abstraction experience I now appreciate its power.
Yup, it can be especially annoying when they rename them between the concrete classes (bind vs and_then for example).
There really ought to be a monad-tutorial checklist. The biggest issue is working in Elm, which deliberately cannot support the level of abstraction required. The next two issues are classic omissions of monad tutorials: eliding the algebraic laws and claiming that functors are always containers for values.
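For reference, the elided algebraic laws can be stated and spot-checked directly. This is just an illustrative sketch using Maybe and two arbitrary Kleisli functions of my choosing (a real check would quantify over values with QuickCheck):

```haskell
-- Two sample functions of type a -> Maybe b
k, h :: Int -> Maybe Int
k x = Just (x + 1)
h x = if x > 0 then Just (x * 2) else Nothing

m :: Maybe Int
m = Just 3

-- Left identity: return a >>= k  ==  k a
leftIdentity :: Bool
leftIdentity = (return 3 >>= k) == k 3

-- Right identity: m >>= return  ==  m
rightIdentity :: Bool
rightIdentity = (m >>= return) == m

-- Associativity: (m >>= k) >>= h  ==  m >>= (\x -> k x >>= h)
assoc :: Bool
assoc = ((m >>= k) >>= h) == (m >>= (\x -> k x >>= h))

main :: IO ()
main = print (leftIdentity && rightIdentity && assoc)  -- True
```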
When I was a math TA in college one of my favorite exercises was "find four functions that each break exactly one property of metric spaces." So one function that wasn't symmetric, one that violated the triangle inequality, etc.
It'd be cool to see the same for monads.
(Another good exercise is "what is the (real) derivative of log(log(sin(x)))?")
My experience is that no one understands monads until the third time they are explained. It doesn't matter what order the explanations come in, or how each explanation is presented. Then, after the third explanation, they don't understand why you thought it was a difficult concept.
My experience is that any number of abstract explanations isn’t good enough. You have to use this stuff in code complex enough to make it useful, and that code won’t fit into a tutorial.
The approach used here of introducing the concrete types’ operations individually and then generalizing into typeclasses works best as an explanatory approach, but if you don’t then motivate typeclasses themselves, the whole thing continues to seem like academic obscurantism.
If the only motivation for Functor is the function doubleInContext, nobody's buying it. But so many actual examples (e.g., using Applicative with parser combinators) are a leap too far. And that's the relatively easy-to-get Applicative…monads are another level.
I think the problem with monads is that the example is always the Maybe monad, which is a very trivial example. It's practically the simplest possible monad and barely motivates the concept. It's like explaining multiplication to someone and then showing a bunch of examples where the multiplicand is always 1.
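To illustrate the point, here's a sketch (my own example, not from the comment) contrasting Maybe's bind, which only threads a possible failure, with the list monad, where bind genuinely fans out:

```haskell
-- Maybe: bind threads a single possible failure through the chain.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- List: bind models nondeterminism -- every (a, b) pair is produced,
-- which no "value in a box" picture really captures.
rolls :: [(Int, Int)]
rolls = do
  a <- [1 .. 3]
  b <- [1 .. 3]
  return (a, b)

main :: IO ()
main = do
  print (Just 10 >>= safeDiv 100 >>= safeDiv 50)  -- Just 5
  print (length rolls)                            -- 9
```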
Please make it and I will upvote it.
Also add to the list: the belief that there are (serious) monad tutorials comparing monads to burritos. Although at this point there probably are.
I'll start it off.
Dear friend,
You have declared that Monads are not
[ ] Scary
[ ] An academic distraction
[ ] Useful
[ ] Useless
but are instead
[ ] Simple
[ ] Containers
[ ] Easily or already understood
[ ] Burritos (joking)
[ ] Burritos (not joking)
[ ] A waste of time
[ ] Some other metaphorical contrivance (see http://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/)
I regret to say that what you have declared is
[ ] Untrue
[ ] False
[ ] Wrong
[ ] Misleading
OK, this comes off a tad bit too hostile and I certainly don't want to gatekeep people from making tutorials. So let me amend my comment by saying I'll upvote it if it's more clearly in jest.
Monads for functional programming is my go-to recommendation for this topic.
On the other hand, is the box analogy so bad for people who know nothing about any of this? I think the audience is important, and starting with algebraic laws may scare people away.
is the box analogy so bad
I would say at best it's useful for a brief "lying to children" period, because when you have to explain Reader or IO or even just fmap (+1) (*2), the box analogy gets strained to breaking point. This article buries the reality in a footnote, but the footnote is exactly why the whole concept is worth bothering with!
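Concretely, fmap (+1) (*2) is where the box picture collapses: for the function Functor, fmap is just composition, so there is no container anywhere. A quick sketch:

```haskell
-- For the function Functor ((->) r), fmap is (.):
--   fmap f g = f . g
-- So fmap (+1) (*2) is \x -> x * 2 + 1 -- no box in sight.
boxless :: Int -> Int
boxless = fmap (+ 1) (* 2)

main :: IO ()
main = print (boxless 3)  -- 7
```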
I think so, because it doesn't translate cleanly to IO. putStrLn :: String -> IO () is not a unit wrapped in something, and getLine :: IO String is a recipe for reading a line. That's where I think it breaks down: when you use monads to track effects.
This is mostly a good overview. One significant missing piece is that in Haskell the typeclass (Applicative) behaviour depends on the datatype (Maybe, Validation).
Maybe is implemented as a monad instance whilst Validation is only applicative. In practice this means that the map2 function short-circuits on the first error with Maybe but runs everything and gathers all the errors with Validation.
Things like this are what give these types such expressive power. Whilst you don't need to study category theory to understand them, it's worth investing a bit of time to understand the basics.
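A sketch of the short-circuit vs. gather distinction. Validation is not in base, so this hand-rolls a minimal version (the `validation` package on Hackage provides a production one); the key is that its Applicative combines both Failures instead of stopping at the first:

```haskell
import Control.Applicative (liftA2)

-- Minimal hand-rolled Validation: its Applicative accumulates
-- errors via the Semigroup on e, rather than short-circuiting.
data Validation e a = Failure e | Success a
  deriving (Show, Eq)

instance Functor (Validation e) where
  fmap _ (Failure e) = Failure e
  fmap f (Success a) = Success (f a)

instance Semigroup e => Applicative (Validation e) where
  pure = Success
  Failure e1 <*> Failure e2 = Failure (e1 <> e2)
  Failure e  <*> Success _  = Failure e
  Success _  <*> Failure e  = Failure e
  Success f  <*> Success a  = Success (f a)

main :: IO ()
main = do
  -- Maybe: stops at the first Nothing, second error is never seen.
  print (liftA2 (+) Nothing Nothing :: Maybe Int)
  -- Validation: both errors are gathered.
  print (liftA2 (+) (Failure ["bad a"]) (Failure ["bad b"])
           :: Validation [String] Int)
```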
On the other hand, it is fine for languages and libraries to obscure the real names and all their nuance in order to make these useful concepts more broadly accessible.
(a -> b) -> T a -> T b
T (a -> b) -> T a -> T b
(a -> T b) -> T a -> T b
With appropriate rules. :-)
These are the type signatures of the primary functions of the typeclasses in Haskell, sure (Functor's fmap, Applicative's <*>, and Monad's =<<, i.e. flipped >>=, respectively), but they alone explain none of the power of the distinctions, the effects of context carrying, or the intuition behind the abstractions.
Knowing the signatures could help you stumble through type tetris using external libraries but it won't help you when deciding how to model your stream processor or coroutines. Context modeling is very important in FP languages.