This function lets you loop over any monadic value. It works for values that represent state, configuration, IO, and so on.

The example you copied uses print. I don’t know the imports, but I’m guessing it’s the one from Prelude, which has type Show a => a -> IO (). Running the type inference in my head, the example is using IO.

But this program is polymorphic. It can be instantiated at any monad, as long as that monad supports IO, Reader, and State. A common use of this is to have one instantiation for production and one for tests. Yay, code reuse!!!

So the answer to your question:

How do I tell whether that is a maybe monad, a list monad, a continuation monad, or a state monad?

Forgive me if my understanding is incorrect here, but wouldn’t that only be true if you wrote all three of those functions yourself and a priori defined them in terms of such a triple-constrained monad? What if the three functions were provided by three authors and had types defined only in terms of a subset of the constraints?

I use something like the lens version (see the “Merging transformer layers” section) for other reasons but the point about StateT s1 (MaybeT (StateT s2 Identity)) definitely demonstrates a problem with MTL.

By type signatures. Or, if you’ve got monad-specific functions mixed in, like State’s get and put, then it becomes fairly obvious what sort of monad you’re dealing with. The code shown in the article could even be polymorphic w.r.t. the monad - assuming getData and co. are too.

I tend to use monads for “secondary” concerns - the kind of thing that I would consider leaving completely invisible if I were working in a language without monads. E.g. if working with state is the whole point of a given function then I’d probably have it accept and return the state in question rather than passing it via a state monad. So the <- just indicates “a secondary effect is going on here”, and if I wanted to know the specifics I’d mouse over one of the functions and see what the type signature is.

But yeah in code that mixes multiple monads there are cases where it would be nicer to have a way to distinguish. I wonder about an IDE using e.g. colour or underlines to show what monads were in play.

Okay, sure. But does that really mean that every language would be better off with monads? From what I’ve seen, buying into monads can strew code with long, confusing type signatures and higher-order functions (e.g., flip, liftM) that leave the code more cluttered and removed from its writers’ intentions than the ad hoc solutions do.

Every approach to programming touts the pretty examples its creators thought up while it was being designed, but I believe the true test of such approaches is how ugly and unreadable they get at their worst. Learn Rust from scratch and write a program that does more than implement a basic calculator in the first week if you want to prove me wrong. You’ll run headlong into the type system before you know it.*

Now, given that many of the people who clicked through to this thread have probably bought into monads for long enough that they’ve gotten used to them, I’m probably going to get slaughtered. However, even if you are planning to downvote me and write a point-by-point rebuttal about how you had a dandy time with monads/Rust/etc., I hope you understand that I’m not saying that monads and the like don’t fix the problems they set out to fix. In fact, I’m not even saying that they don’t solve those problems well. I’m just saying that any attempt to create a system that abstracts over different types of code will inevitably restrain the programmer themselves, and this can often force them to write more of their code for the sake of the language than for the functionality of the program itself.

Replacing excessive boilerplate with excessive abstractions is jumping out of the frying pan and into the fire. It’s not the job of a good programming language to avoid boilerplate at all costs, but rather to keep the programmer focused on describing the actual behavior of their program; this precludes both boilerplate and abstractions when they are in excess. We must not let the problems we encounter most often while writing code induce us to implement solutions that create unavoidable barriers to entry as applications scale, especially not for what I think is otherwise some of the most innovative, pragmatic, and forward-thinking work in the field of programming languages today.

*This is something the Rust developers are aware of and actively working on, but that just gives you an idea of how far into development language designers can get before they realize they’ve created something that only the most patient, intelligent, and experienced developers can wrangle.

The issue I’ve been noticing is that weaker versions of monads are making it into languages like Kotlin and Swift, which have the ?. version of bind. So they have recognized that there is value in the abstraction, but only offer the programmer a specialization. This is a problem, because this style of controlling sequencing opens up a lot of doors. For example, while everyone loves to talk about the Maybe monad, that’s actually so 5 years ago. I use the Result or Either monad almost exclusively over Maybe. I want to be able to encode the error that happened so I can give the user a more valuable response than “something bad happened”. I use Ocaml and, while it’s not as powerful as Haskell in this regard, I can just switch over to the result monad and get the same pretty code that I had with the Maybe monad (option in Ocaml). You can’t do that in Swift or Kotlin. I’m sure you could hack something in there, but in Ocaml and Haskell it’s not a hack; it’s equally elegant no matter what monad you want to use.

For Ocaml, since we don’t have type classes, you explicitly pick the monad you want to use in a function, so the function type doesn’t change but the implementation does. I think the functions I use the result monad in would be significantly harder to understand without being able to redefine the sequencing operator, so it’s a net win.
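The Maybe-vs-Result point can be sketched in a few lines of Python (hypothetical parse/validate steps, not Ocaml, but the shape is the same): an Err carries the reason for failure through the whole chain, where a Nothing would drop it.

```python
# A minimal Result (Either) type, sketched in Python for illustration.
# Err carries *why* things failed; a Maybe-style Nothing could not.
# All function names here are hypothetical.

class Ok:
    def __init__(self, value):
        self.value = value

    def bind(self, f):
        return f(self.value)      # success: continue the chain

class Err:
    def __init__(self, error):
        self.error = error

    def bind(self, f):
        return self               # failure: short-circuit, keep the error

def parse_int(s):
    try:
        return Ok(int(s))
    except ValueError:
        return Err(f"not a number: {s!r}")

def check_positive(n):
    return Ok(n) if n > 0 else Err(f"not positive: {n}")

# The chain reads the same as a Maybe chain, but failures are informative:
result = parse_int("42").bind(check_positive)   # Ok(42)
failed = parse_int("-3").bind(check_positive)   # Err("not positive: -3")
```

Because only bind changes between Maybe and Result, the chained code itself stays untouched when you switch - which is exactly the “equally elegant no matter what monad” point.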

which might print the value of d based on a number of conditions that all need to hold at the same time (and the else branches aren’t even shown, which would make the code even more cluttered!). It’s also a lot easier to reason about the code based on the Maybe monad, at least in my opinion.

Otherwise, I do agree with the point that, at the end of the day, we ship features, not monads.

I don’t think that imperative code is a reasonable point of comparison (which is my biggest criticism of the OP). It’s seemingly trying to play on the fact that a bunch of nested conditionals is really bad and complicated—which I agree with—but I almost never write code like that in imperative languages. Instead, I’d use early returns:
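A sketch of that early-return style (hypothetical lookup code, not the article’s original example):

```python
# Sketch: early returns instead of nested conditionals.
# The data shapes and names here are made up for illustration.

def find_street(people, name):
    # Nested-conditional style (the strawman) would be:
    #     if name in people:
    #         if people[name].get("address") is not None:
    #             if people[name]["address"].get("street") is not None:
    #                 ...
    # Early returns keep the code flat instead:
    person = people.get(name)
    if person is None:
        return None
    address = person.get("address")
    if address is None:
        return None
    return address.get("street")

people = {"alice": {"address": {"street": "Main St"}}}
find_street(people, "alice")   # "Main St"
find_street(people, "bob")     # None
```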

I definitely agree with you more than I disagree with you. This is why I’m not a fan of the idea of adding monads to Rust, and why I like our hack for error handling. (Although it’s not clear that monads in the general case would work well in Rust anyway.)

For example, I rarely write (or need/want) supremely generic code. Adding a type parameter to a type is a big deal and needs to be managed with care. Most of the generics I find myself using in Rust are things like “give me any argument that can be converted to a file path” or “give me any argument that can be converted into an iterator that yields strings.”

With that said, I haven’t seen much Rust code that rises to the level of genericity you see in languages like Haskell, and I generally consider that to be a good thing.

What do you mean when you say this? Monads aren’t a language feature, per se; they are the combination of a few laws. You can express monads in a lot of ways even if the language has no concept of them. The futures library I saw for Rust had a bunch of and_then combinators which were basically monadic. Ocaml doesn’t “have monads”, but you can define an operator called bind or >>= and implement the laws. It happens that Ocaml has a fairly pleasant way to express that, which makes working with them not-terrible, but that has nothing to do with monads. So I can do monads in Ocaml, but they aren’t a language feature.
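As a sketch of that claim, here is a Maybe with a bind operator in plain Python - no language support required, just an agreed-upon operator and the monad laws followed by convention (all names are made up for illustration):

```python
# Sketch: a Maybe "monad" in a language with no concept of monads.
# We overload >> to stand in for Haskell's >>= / Ocaml's bind.

class Maybe:
    def __init__(self, value, has_value):
        self.value = value
        self.has_value = has_value

    def __rshift__(self, f):      # m >> f  ~  m >>= f
        if self.has_value:
            return f(self.value)  # Just x: feed x to the next step
        return self               # Nothing: propagate unchanged

def just(x):
    return Maybe(x, True)

nothing = Maybe(None, False)

def half(n):
    # a partial function: only even numbers can be halved "exactly"
    return just(n // 2) if n % 2 == 0 else nothing

# Chained binds short-circuit on the first Nothing:
(just(8) >> half >> half).value       # 2
(just(6) >> half >> half).has_value   # False (3 is odd, so half fails)
```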

You can implement them in almost anything, but I agree with this article that to have the usefulness they have in Haskell the language needs to: “1. Support ML-style function-fu (trivial currying, single namespace). 2. Have type classes. 3. Allow polymorphism on return types.” (The linked article is about why they haven’t caught on in Common Lisp, despite being implementable.)

Every approach to programming touts the pretty examples its creators thought up while it was being designed, but I believe the true test of such approaches is how ugly and unreadable they get at their worst.

Funny, this is exactly why I think monads are necessary. The ad-hoc solutions are more readable solutions to those specific cases - async/await allows a more natural/friendly way of doing async than using Futures and general-purpose monad operations, ?.-style operators are a more natural/friendly way of doing early return than using Options and general-purpose monad operations. Even in a language that does have monads, the ad-hoc solutions are valuable sugar. But when you want to do something the language creator didn’t think of, if you don’t have general-purpose monads you’re stuffed.

I’m just saying that any attempt to create a system that abstracts over different types of code will inevitably restrain the programmer themselves, and this can often force them to write more of their code for the sake of the language than for the functionality of the program itself.

I understand that’s what you’re saying. I still think you’re wrong. The only things the abstraction restrains you from doing are the things that were wrong, and the wrongness will always bite you sooner or later. I’ve written plenty of law-breaking typeclass instances out of a sense of pragmatism, and I’ve always come to regret it.

Replacing excessive boilerplate with excessive abstractions is jumping out of the frying pan and into the fire. It’s not the job of a good programming language to avoid boilerplate at all costs, but rather to keep the programmer focused on describing the actual behavior of their program; this precludes both boilerplate and abstractions when they are in excess.

I suspect you’re saying this because of experience with bad abstractions. Good abstractions are zero-overhead, or at least O(1) overhead: you learn them once and can benefit from them your whole career.

We must not let the problems we encounter most often while writing code induce us to implement solutions that create unavoidable barriers to entry as applications scale

That’s exactly what implementing only the ad-hoc solutions does. When applications scale is precisely when you need the ability to define your own custom monads and reuse generic functions with them.

Even in a language that does have monads, the ad-hoc solutions are valuable sugar. But when you want to do something the language creator didn’t think of, if you don’t have general-purpose monads you’re stuffed.

So clearly there’s a tradeoff between flexibility and simplicity. But given the complex systems to which monads lend themselves, that flexibility won’t benefit most people.

The only things the abstraction restrains you from doing are the things that were wrong, and the wrongness will always bite you sooner or later.

The abstraction may prevent you from doing things wrong, but you’ll never do things right if you can’t wrap your head around it.

I suspect you’re saying this because of experience with bad abstractions. Good abstractions are zero-overhead, or at least O(1) overhead: you learn them once and can benefit from them your whole career.

I was referring to cognitive, not computational, overhead. Low computational overhead; high cognitive overhead. (And by the way, even the best abstractions can leak.)

We must not let the problems we encounter most often while writing code induce us to implement solutions that create unavoidable barriers to entry as applications scale.

That’s exactly what implementing only the ad-hoc solutions does. When applications scale is precisely when you need the ability to define your own custom monads and reuse generic functions with them.

The key phrase here is “barriers to entry”. A scaling application necessitates a simple, not a flexible, interface. Writing more “low-level” code is less work than writing less “high-level” code. You shouldn’t have to work around the interface in any application; all the more so for one that is scaling.

No, not at all. ?. or await implemented as a native language operation is no simpler than ?. or await implemented as a monad. Indeed, using a monad makes it harder to overcomplicate things with special cases (e.g. Java’s Optional would never have made the mistake of erroring on null if they’d implemented it in a “monad-first” way).
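To make the equivalence concrete, here is a sketch (in Python, with hypothetical data) of how a ?. chain desugars into repeated binds over “a value or None”:

```python
# Sketch: the ?. operator as sugar over Maybe's bind, where an optional
# value is simply "a value or None" and bind is None-propagation.

def bind(x, f):
    return None if x is None else f(x)

# user?.address?.street in a ?.-language desugars to roughly:
def street_of(user):
    return bind(user, lambda u:
           bind(u.get("address"), lambda a: a.get("street")))

street_of({"address": {"street": "Main St"}})   # "Main St"
street_of({"address": None})                    # None
street_of(None)                                 # None
```

The native operator and the bind chain compute the same thing; the operator is just pleasant notation for one particular monad.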

given the complex systems to which monads lend themselves, that flexibility won’t benefit most people.

If a user is happy using ?. and await and understands how they work, it makes no difference to them whether there’s an underlying abstraction that can be generalised or not. The flexibility can wait for them until they need it, and if they never get to the point of solving a problem so complex that they need that power then good for them.

In practice, back when I worked in Java every nontrivial codebase ended up needing a bunch of annotations/reflection/agents that were always a disproportionate source of bugs - not because developers were stupid but because they needed to do something the language couldn’t, so they had to step outside the language. In a language with monads that wouldn’t have happened.

I was referring to cognitive, not computational, overhead.

So was I. Good abstractions save you cognition; instead of having to understand the particular sequencing rules for this new kind of context, you just know it’s a monad and can think about it as such.

And by the way, even the best abstractions can leak.

No, this is a myth, and in fact monads are a good example; they don’t leak.

A scaling application necessitates a simple, not a flexible, interface.

The two dovetail, and the monad interface is remarkably simple.

Writing more “low-level” code is less work than writing less “high-level” code.

Maybe on the initial write, but code is read more than it’s written, so the high-level approach pays dividends for maintainability. Sometimes I’ll write code out longhand to start with and then realise I can simplify it by using State or Writer or some such.

You shouldn’t have to work around the interface in any application; all the more so for one that is scaling.

100% agreed. Monad is the opposite of that though; it’s such an elegant and simple interface that it’s very easy to conform to. (Indeed one sometimes does so accidentally, and only notices that a given type forms a monad once it’s pointed out)

No, not at all. ?. or await implemented as a native language operation is no simpler than ?. or await implemented as a monad. Indeed, using a monad makes it harder to overcomplicate things with special cases (e.g. Java’s Optional would never have made the mistake of erroring on null if they’d implemented it in a “monad-first” way).

A scaling application necessitates a simple, not a flexible, interface.

The two dovetail, and the monad interface is remarkably simple.

We have different understandings of “simplicity”. I argue that having monads be deeply ingrained in a language’s design can actually make things more complicated. It does matter whether there’s an underlying abstraction, because that abstraction will have to be propagated to all dependent code.

Sidenote: I’m not aware of any monadic implementations of ?. and await and would love to see them! I need to do more research on this front.

I was referring to cognitive, not computational, overhead.

So was I. Good abstractions save you cognition; instead of having to understand the particular sequencing rules for this new kind of context, you just know it’s a monad and can think about it as such.

It is pattern recognition, not inbuilt abstraction, that saves cognitive waste. Abstractions are a good learning aid, but nothing need be forced into the language itself in order to make this learning happen. All the foundational precepts of functional programming can be implemented in straight-up C.

And by the way, even the best abstractions can leak.

No, this is a myth, and in fact monads are a good example; they don’t leak.

In general, one shouldn’t get too comfortable with liberal use of abstraction.

Writing more “low-level” code is less work than writing less “high-level” code.

Maybe on the initial write, but code is read more than it’s written, so the high-level approach pays dividends for maintainability. Sometimes I’ll write code out longhand to start with and then realise I can simplify it by using State or Writer or some such.

You shouldn’t have to work around the interface in any application; all the more so for one that is scaling.

100% agreed. Monad is the opposite of that though; it’s such an elegant and simple interface that it’s very easy to conform to. (Indeed one sometimes does so accidentally, and only notices that a given type forms a monad once it’s pointed out)

Again, committing to an abstraction is a decision that mustn’t be taken lightly. Monad is an elegant and simple mathematical notion; implementation is inevitably hairier. That’s not to say OO is any better with respect to implementation of theory, just that no abstraction is perfect in practice.

liftM et al. are usually a code smell, but they’re still less obnoxious than the stuff I end up doing without monads, like checking for error codes or null pointers after every function call, or using .then() or whatever if I’m using some hip java(script) framework that emulates the Either or Maybe monad.

You can usually get rid of them as well, if you use MTL-style monad transformers. A lot of libraries define things in terms of MonadState, MonadIO, etc. constraints so you don’t ever have to call lift<whatever>. I don’t actually remember the last time I used a lift function besides liftIO, which is used for running an arbitrary IO action inside some arbitrary IO-capable monad.

In good monadic code, you stick all the “lift”s and “flip”s and stuff in some file that handles the low-level behavior of your monad, and you can do a really good job of making the “business logic” or whatever you want to call it free of clutter.

In good monadic code, you stick all the “lift”s and “flip”s and stuff in some file that handles the low-level behavior of your monad, and you can do a really good job of making the “business logic” or whatever you want to call it free of clutter.

But isn’t that just replacing one form of “low-level” with another? I thought the entire point of monads was to absolve the developer from having to do things the tedious, “low-level” way in the first place; after all, they are a “high-level” construct.

The difference is that the join between the low level and high level is more continuous. Writing code that works with a State monad and writing code that works with a hidden variable looks very similar, and in both cases you can pretty easily understand what’s going on at the low level or at the high level. But the monad makes it much clearer how the high-level sequencing interacts with the low-level variable manipulation, and you can easily tell how changing one will affect the other.
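A minimal sketch of that correspondence, with a State “monad” encoded as a plain Python function from state to (result, new state) - the names here are illustrative, not from any library:

```python
# Sketch: a State monad where a stateful computation is just a function
# state -> (result, new_state), and bind threads the hidden variable.

def unit(x):                      # return: yield x, leave state untouched
    return lambda s: (x, s)

def bind(m, f):                   # sequence two stateful steps
    def stepped(s):
        x, s2 = m(s)              # run the first step
        return f(x)(s2)           # feed its result and state onward
    return stepped

def get():                        # read the hidden variable
    return lambda s: (s, s)

def put(s2):                      # overwrite the hidden variable
    return lambda s: (None, s2)

# Increment the counter and report the old value. The high-level
# sequencing (bind) and the low-level variable manipulation (get/put)
# are visibly connected:
tick = bind(get(), lambda n: bind(put(n + 1), lambda _: unit(n)))

tick(41)    # (41, 42): old value returned, state incremented
```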

As we all know, there’s only so much automated systems can do before they’re out of their depth. Trust me, I’d love to wholeheartedly commit to monads as much as the next guy, but their simplicity comes at the cost of long-term ease of use.

It’s also really nice in the sense that how the hidden variable can change is consistent. There is only one bind and return in there, and the distinction between describing what is done and performing it in the monad is clear. In ad-hoc solutions, anything can happen anywhere. So even the nastiest monadic code is significantly more “learnable”, IME, than the nastiest imperative code, because one is still constrained on where the nastiness can happen.

The procedure is quite simple: whenever you see a type class constraint, replace it with an explicit argument accepting a method dictionary. Whenever you need to express polymorphism, replace it with a void * and hope the user won’t mess up the arguments. If you were serious about this, you’d also make all functions accept an extra void * that carries “user data”, so that you can pass around closures for your functions. You’d also implement some form of garbage collection/some architecture for managing lifetimes. In the end, monads would be barely recognizable, but they would still be there - I mean, you could write a function that works on all monads.

Obviously, monads will also be much harder to use without lambdas, so you’d have to define a lot of top level functions that are called step_1_of_x, but my point isn’t to demonstrate monads are a good way to program in C anyway.
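That dictionary-passing translation can be sketched more compactly in Python than in C (no void * noise, but the architecture is the same: the monad is an explicit record of unit and bind, and generic code takes it as a parameter):

```python
# Sketch of dictionary-passing: a monad is an explicit (unit, bind)
# record, and "monad-generic" functions take that record as an argument.
# In C, each dict would be a struct of function pointers.

maybe_monad = {
    "unit": lambda x: ("Just", x),
    "bind": lambda m, f: f(m[1]) if m[0] == "Just" else m,
}

list_monad = {
    "unit": lambda x: [x],
    "bind": lambda m, f: [y for x in m for y in f(x)],
}

def sequence(monad, ms):
    """Generic over any monad dict: turn [m a] into m [a]."""
    acc = monad["unit"]([])
    for m in ms:
        acc = monad["bind"](acc, lambda xs, m=m:
              monad["bind"](m, lambda x: monad["unit"](xs + [x])))
    return acc

# One definition, two monads:
sequence(maybe_monad, [("Just", 1), ("Just", 2)])   # ("Just", [1, 2])
sequence(maybe_monad, [("Just", 1), ("Nothing",)])  # ("Nothing",)
sequence(list_monad, [[1, 2], [3]])                 # [[1, 3], [2, 3]]
```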

I see, you’re looking at things operationally. Of course, any sufficiently powerful language can emulate the mechanics of type class dispatch (or any other implementable language feature).

But, for those of us who care about abstractions (like monads or any other algebraic structure), that is kind of missing the point. The point to using abstractions is to have a clearly defined boundary between implementors and users, so that users don’t need to worry about implementation specifics, and implementors don’t need to worry about users’ concrete use cases. The only shared concern between implementors and users should be the abstraction’s semantics. This, I’m afraid, you can’t easily replicate in C.

Yeah, it does degenerate to untyped void * goo when you attempt this in C, but it’s still architecturally the same. You can still express monads in the category of monads. If you were an alien with a flawless mind, you could just as easily express and use your abstractions in C as you would in Haskell.

On a more practical note, you can actually do useful stuff with monads using C++. C++’s templates are flexible enough to express (something like) typeclass dispatch, and in some ways you can even go beyond Haskell, since you have tight control over what gets statically specialized. You could, for instance, have deep monad transformer stacks that get compiled down to overhead-free assembly. I’m not saying you should do that, but thinking “C++ doesn’t have monads” will unnecessarily narrow your vision.

It’s not clear to me whether you are talking about the fact templates are instantiated at compile time or the fact templates can be specialized, but in either case you lose expressiveness.

Compile-time instantiation prevents you from having polymorphic recursion (e.g., a Tree<T> containing subtrees of type Tree<List<T>>) or first-class existential quantification (e.g., Java’s infamous wildcards: List<? extends Animal>, although other languages do this more sensibly).

Template specialization reduces the amount of static analysis that can be carried out in a modular fashion, because, well, templates can be specialized anytime, anywhere. With bona fide parametric polymorphism, you can say “my generic definition is guaranteed to have such and such properties”. With templates, you can only say “assuming nobody else specializes my templates or anything else they depend on in ways that contradict my wishes, the resulting specialized code will have such and such properties”. Assuming nobody will specialize your templates is a very big if.

It’s not clear to me whether you are talking about the fact templates are instantiated at compile time…

Yeah, that’s what I meant, but I used the word “specialize” because that’s what Haskell uses for something similar to what C++ calls template instantiation. So, in Haskell, if you have a function f :: a -> [a], you can specialize it to Int -> [Int] with compiler pragmas, and GHC will see it as a possible optimization. In theory this breaks parametricity, since you can easily violate it this way, so you have to be very careful with those pragmas. Which brings us to your second point: in C++, you’re basically always in this “you have to be careful” mode. So, please, for the sake of this discussion, forget about how much the compiler helps you avoid shooting yourself in the foot; we’re talking about what can be expressed.

So, for instance, if you have a template in C++, and you’re using it in contexts where all of its specializations need to respect parametricity, you write a comment at the top of its declaration

/* If you intend to specialize this template, please respect parametricity,
* if you don't know what it is, go func* yourself. (* educate yourself in the
* ways of functional programming)
*/

Coming back to your first point, you can very well express polymorphic recursion in C++: if you declare (via comments) List<T> to be parametric, you can use List<shared_ptr<void>> to stand for an existential a. You can even wrap this in a type-safe manner within the context of Tree<T>, so you can, for instance, define a type-safe Traversable instance for Tree<T>. I would like to implement this for you, but I doubt I’ll find the time to do so anytime soon.

I don’t see where you got the impression that there aren’t far-reaching consequences. Monads are too powerful an interface to expose to the library user under most circumstances, and the art of software architecture is to find the balance between the interface consumer and implementer.

The point still stands though: if your language’s type system and syntax aren’t powerful enough to make monads practical, chances are you’ll have trouble expressing profunctors from your application domain’s category to the category of endofunctors in your programming language (or whatever abstraction it takes to express your architecture)…

I strongly believe that beneath the hot dark crust of functional programming, there’s a pressure buildup of what people might one day call “categorical programming” once it erupts. When the dust settles, people will probably call Conal Elliott’s Compiling to Categories work one of the early milestones. You can already see the early signs of this upcoming paradigm applied to real-world problems in the Haskell community.

The part you haven’t understood may or may not be meaningful for a real world application domain, but the point is, you can model the relationships in your domain abstractly as a number of categories and functors between them a la OLOGS, then your software architecture can be reduced to functors from those categories to some categories you construct using your programming language(s) (plural, as in for example, your back-end, database and front-end might all be part of the model, so you have your entire system along with the users and the world around it modelled). These are all big words, but the surprising thing is some of that mumbo-jumbo is directly expressible in Haskell. We’re actually using some of this stuff at my day job, and we intend to write about it when we have the time.

The part you haven’t understood may or may not be meaningful for a real world application domain…

I sincerely appreciate your saying this. I think the middle ground between our respective positions could be stated in short as follows: “The tenets of FP have amazing application-dependent benefits.”

When the dust settles, people will probably call Conal Elliott’s Compiling to Categories work one of the early milestones.

[If] you can model the relationships in your domain abstractly as a number of categories and functors between them a la OLOGS, then your software architecture can be reduced to functors from those categories to some categories you construct using your programming language(s)…

This all sounds fascinating. Do you know of any interesting papers / talks / blog posts that further what Conal and Spivak & Kent discussed, i.e., preëmpt a move toward real-world application of o-logs to software development?

You could say that about every language feature - it’s always possible to greenspun a given feature into a language that doesn’t have it. I think you can draw a line between languages that let you implement useful monad-generic code /in the language/ and languages that don’t.

I understand what you mean, but I honestly think it doesn’t apply to monads. do notation is a language feature, but “monad” is a concept from abstract math. C is an extreme example, but you could easily find C++, Java or Python projects that would greatly benefit from expressing some core architectural elements through monads.

Monads are a mathematical concept defined in terms of arrows and objects, sure, but the usual sense of “monad” in a programming context comes from identifying those with functions and types, and you can only really say that you have monads “in” the language if you can use them with ordinary functions and types in the language. You can’t really write a monad-generic function in Java because the language won’t let you write the type signature that function should have, and while you technically can work with monads in Python, not having a type checker means you lose most of the benefits. (AIUI this is substantially true in C++ as well; C++ does have type checking but only after template expansion, which makes development of generic code difficult).

Well, macros can do anything; almost any programming language can be viewed as a named, standardized bundle of macros. I think there’s a lot of value in standardizing on the concept of a monad and a single syntax for it; that allows for reuse both at the syntactic pattern level and for library functions like whileM described above.
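For instance, a whileM-style combinator (Haskell’s whileM, from the monad-loops package, has roughly the type Monad m => m Bool -> m a -> m [a]) can be written once against the monad interface and then reused at every monad. A sketch in Python, over a State monad encoded as a plain function s -> (result, new_state):

```python
# Sketch: a whileM-style combinator written once against the monad
# interface (unit/bind), here instantiated at a State monad encoded
# as s -> (result, new_state). Names are illustrative.

def unit(x):                       # return for the State monad
    return lambda s: (x, s)

def bind(m, f):                    # >>= for the State monad
    def stepped(s):
        x, s2 = m(s)
        return f(x)(s2)
    return stepped

def while_m(cond, body):
    """While cond yields True, run body and collect its results."""
    def step(flag):
        if not flag:
            return unit([])
        return bind(body, lambda x:
               bind(while_m(cond, body), lambda xs: unit([x] + xs)))
    return bind(cond, step)

below_three = lambda s: (s < 3, s)      # m Bool: test the counter
tick = lambda s: (s, s + 1)             # m a: yield counter, increment it

while_m(below_three, tick)(0)           # ([0, 1, 2], 3)
```

Swapping in a different unit/bind pair (Maybe, IO-like, etc.) reuses while_m unchanged - the syntactic-pattern and library-level reuse the comment describes.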