Erik Meijer on Monads

Bio: Erik Meijer is a Dutch computer scientist and entrepreneur. From 2000 to early 2013 he was a software architect at Microsoft, where he headed the Cloud Programmability Team. He then founded Applied Duality Inc. in 2013. Before that, he was an associate professor at Utrecht University. He received his Ph.D. from Nijmegen University in 1992.

Software is Changing the World. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

That is another goal that I have in life, to explain monads to people so that everybody can understand them. There are many ways to explain monads, so let me try another one that I didn’t do yesterday, which starts with functions. Everybody knows what a function is if you’ve been to high school, so let’s look at functions. Say I have a function from int to bool. Then you look at that function and think, “Why is that notion of a function special?” Because that notion of a function is one particular implementation of the idea of having something that you can compose: if I have a function from int to bool and a function from bool to characters, I can compose them and get a function from int to characters. But the fact that this is a function is nothing really special. Now I am reinterpreting what mathematicians or the category theorists must have thought: if this function is one particular implementation of a general concept, what is the interface that a function implements? And then maybe there are different implementations of that same interface. If we can reason about functions in terms of this interface, and there are many concrete implementations, then we have a much more general framework in which we can reason about these things, because you can prove properties in terms of the interface and then you can plug in different implementations.
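The composition described above can be sketched in a few lines of Python; the function names (`is_even`, `to_char`) are mine, chosen just to mirror the int-to-bool and bool-to-character arrows in the example.

```python
def compose(f, g):
    """Compose g after f: an arrow A -> B and an arrow B -> C give A -> C."""
    return lambda a: g(f(a))

def is_even(n: int) -> bool:      # the arrow int -> bool
    return n % 2 == 0

def to_char(b: bool) -> str:      # the arrow bool -> char
    return 'T' if b else 'F'

int_to_char = compose(is_even, to_char)   # the composite arrow int -> char
print(int_to_char(4))   # T
print(int_to_char(3))   # F
```

Nothing here depends on the arrows being plain functions; `compose` is the interface, and plain functions are just one implementation of it.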

Functions are all about composition, because that is what I showed you: I go from A to B and from B to C, and that means I can go from A to C. In some sense category theory is a mechanism for composition, so let’s step back and ask, if we as developers were to define such a system, what is the essence of composition? In order to compose things I first need a way to go from one thing to another thing, correct? Instead of saying “things”, which is very vague, let’s call them objects. So I have objects, because those are the things of interest, and then I have arrows that connect objects. Now I can talk about the composition of these arrows, because if I have an arrow from A to B and an arrow from B to C, composition means I also have an arrow from A to C. A graph is another example of this, because a transitive graph is something with nodes and edges that I can walk through. So a graph is another instance of a category, just like functions, and there are many other examples like that which all fall into this general framework. That’s where a category comes in, and the way I look at it, it is mathematicians doing interface-based programming. They look at many, many concrete mathematical structures and say, “These are really all the same, it’s all about composition”. Think about linear algebra, where I can take two matrices and multiply them, I get another matrix, and I can multiply that again.
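A second implementation of the same composition interface, assuming nothing beyond the graph example in the text: here an arrow is a set of edges (a binary relation), and composing two edge sets chains them, which is exactly two-step reachability in a graph.

```python
def compose_relations(r, s):
    """If r relates A to B and s relates B to C, the composite relates A to C."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

edges_ab = {(1, 'x'), (2, 'y')}       # edges from "A-nodes" to "B-nodes"
edges_bc = {('x', 'P'), ('y', 'Q')}   # edges from "B-nodes" to "C-nodes"

print(sorted(compose_relations(edges_ab, edges_bc)))  # [(1, 'P'), (2, 'Q')]
```

Same interface as function composition, different objects and arrows: that is the sense in which a graph is another instance of a category.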

So you can see linear algebra also as an instance of this theory. Now imagine that we have a function or an arrow that is not really well behaved, for example something that can throw an exception: when you traverse the arrow you can fall off. Think of a person walking over the arrow who sometimes falls down over a cliff. If I now want to talk about composition, it’s a little bit more difficult, because I cannot simply go from A to B and then from B to C, since I can fall off. So what you do is make that defect, that you can fall off, explicit. If I can go from A to B and fall off the arrow, and I can go from B to C and fall off the arrow, then when I go from A to C I can fall off the arrow, whether I fell off between A and B or between B and C. A monad is the thing that models the pollution of the arrow, all the bad things, or additional things, that can happen when you traverse the arrow. This maybe sounds very abstract, but as I said, think of exceptions as one way: it’s something that doesn’t go from A to B directly, but where something can go wrong. Let’s look at other examples in computing. I go from A to B and I want to log that I went from A to B; I go from B to C and I log that; so when I go from A to C, I want to log the combination. There you see you have to compose the effects of going from A to B and from B to C.
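Both effects mentioned here can be sketched concretely; this is an illustration in my own names, using `None` as the explicit “fell off the cliff” marker and a list as the explicit log.

```python
def compose_maybe(f, g):
    """Compose A -> Maybe B with B -> Maybe C: falling off on either leg
    makes the whole trip from A to C fall off."""
    def h(a):
        b = f(a)
        if b is None:          # fell off between A and B
            return None
        return g(b)            # may still fall off between B and C
    return h

def compose_logged(f, g):
    """Compose A -> (B, log) with B -> (C, log), concatenating the logs."""
    def h(a):
        b, log1 = f(a)
        c, log2 = g(b)
        return c, log1 + log2  # the combined effect of both legs
    return h

safe_recip = lambda n: None if n == 0 else 1.0 / n
safe_half = lambda x: None if x < 0 else x / 2

trip = compose_maybe(safe_recip, safe_half)
print(trip(4))   # 0.125
print(trip(0))   # None: fell off on the first leg

step1 = lambda n: (n + 1, [f"inc to {n + 1}"])
step2 = lambda n: (n * 2, [f"dbl to {n * 2}"])
print(compose_logged(step1, step2)(3))   # (8, ['inc to 4', 'dbl to 8'])
```

The two `compose_*` functions differ only in how they merge the pollution; that merging rule is what the monad supplies.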

That’s the interesting thing, and that’s why I have this quest to explain them: it is not that complicated. I think the complication arises from the fact that the language in which mathematics, or category theory, is explained is very particular, so when you look at it as an outsider you have to understand that, and they use all kinds of terms like monic and epic, objects and arrows and so on. But if you look at it conceptually there is no catch at all; it’s really, I would say, sound design. There is absolutely no catch there.

Before I do that, let me take a step back to explain why IO is an issue. Let’s go back to pure functions, where this arrow has no effect, it’s a completely pure arrow. There, if I have a function and I give it the same argument, I always have to get the same result. My favorite example for IO is ReadLine, which you don’t give any arguments and which returns a string. But when I call ReadLine with the empty argument it gives me a string, and when I call it again it gives me a different string. Why is that? Because it reads the string from the standard input. So it’s not really a function, because every time I call it, it gives me a different result; it’s one of those functions that has some impurities. If I want to model that, I cannot model it as a straight arrow from unit to string; there is some effect, just like the first example where you can fall off: in this case it gets the value from somewhere.

And that’s where the IO monad comes in: it models the fact that when you walk the arrow from unit to string, the string gets pulled from the standard input, and that’s why this thing has type unit to IO of string; the IO encapsulates the fact that in this case the value comes from somewhere else. The same if you want to write something, just reversed: Console.WriteLine or PrintLine goes from string to unit. If you really look at that, what can that thing do? Whatever string you give it, it always returns the same value, namely unit or void, so basically that thing cannot do anything interesting. But it’s very interesting, because we use it in programming all the time. So again it's one of those polluted functions: it takes a string and returns unit, but it has an effect on the outside world, it prints that string on the standard output. That is then modeled by saying this goes from string to IO of unit, so the IO captures the fact that this function or this arrow mutates the outside world: it reads from the standard input and it writes to the standard output.
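One way to picture the idea (only a sketch, not how Haskell actually implements IO): an IO-of-string is not a string but a description of how to obtain one, modelled here as a zero-argument thunk. `read_line` and `write_line` are my stand-ins for ReadLine and WriteLine, wired to in-memory input and output for the example.

```python
stdin = ["hello", "world"]   # simulated standard input
stdout = []                  # simulated standard output

def read_line():             # unit -> IO str: builds an action, reads nothing yet
    return lambda: stdin.pop(0)

def write_line(s):           # str -> IO unit: builds an action that prints
    def action():
        stdout.append(s)
    return action

action = read_line()         # calling read_line only constructs the action
first = action()             # running it pulls from "standard input"
second = read_line()()       # same call, different result: that is the effect
print(first, second)         # hello world

write_line(first)()          # running the write action mutates the output
```

The construction step is pure (calling `read_line()` twice builds equal descriptions); all the impurity is confined to the moment an action is run.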

Ok, so let’s think about that for a little bit. First of all I would say, and that’s just me, I’m happy to think about this purely abstractly. If you give me a function from A to IO of B, where the IO means this thing can affect the outside world, I don’t need to know what IO is; it can be purely a token on that function that says this is not a pure arrow from A to B, it’s something that does something interesting. Now, what you often want is to understand how it works; that’s the next step, and this is what your question is. You can use this without even caring about how it works, because the only thing that you know is that you can take something from A to IO of B and something from B to IO of C, and get something from A to IO of C, and you don’t need to know what the implementation is. That’s the beauty of it: you can say “ok, somebody else figured it out and I can just use it”. Now, with the IO monad I think people often make the mistake of saying “It’s something that passes in the world and mutates it and threads it through”.

The problem there is that you cannot observe the intermediate changes to this kind of thing that you pass around, the stream of bits that you mentioned, because in a real program, when you write to the output or read from the input, you can see that it has changed in the middle. So the proper way, or one way, to model that is by something called a resumption. A resumption is a function that you give an input; it returns a value and another function, another resumption, that you can give the next input to, and that produces the next output. It’s a strange function that is recursive in itself, and it models that the computation goes in steps. This idea of resumptions was first invented by the denotational semantics folks to model the semantics of concurrency, because if you have two concurrent programs they run in steps too, but after each step the other program can observe the changes that the first program made, because there is shared state if you have concurrency. So there are many of these monads where you can look at their internal structure, but again I want to emphasize that it’s not necessary at all. You can just say “This is completely abstract and maybe there isn’t even an actual implementation like that”, because you run your program in a different world, an imperative world: if you take your Haskell program and compile it to machine code, there is an implicit effect, so you don’t necessarily model your monad as an explicit data structure.
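A minimal sketch of a resumption, with a made-up example (a running total): each call takes one input and returns the output so far together with the next resumption, so the computation advances in steps that can be observed between inputs.

```python
def running_total(total=0):
    """A resumption: input -> (output, next resumption)."""
    def step(n):
        new_total = total + n
        # Return the observable output plus the function to resume with.
        return new_total, running_total(new_total)
    return step

r = running_total()
out1, r = r(5)     # after the first step the state is observable...
out2, r = r(3)     # ...before the next input arrives
print(out1, out2)  # 5 8
```

Between `r(5)` and `r(3)` another party could inspect `out1` or interleave its own steps, which is exactly what a single end-to-end state-threading function cannot expose.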

And that’s another thing that I think beginners get caught up in: they want to have an explicit representation of a monad, so they want to write a type class in Haskell that says “this is my monad”, whereas the more seasoned programmers, or people in a language that doesn’t have support for monads, often use it as a tool for thought. You do that on a whiteboard as the design, so you know “These are the effects, they are in this part of my program and I can reason about this”, and you are careful as a developer that your effects are localized in that part of the program, but you don’t need your compiler or your language to support that. In some sense it’s very similar to any type system. Say in Java, you have a type system that says this argument is an integer; well, that usually only captures part of your intent, because you cannot say that this integer must be even. Now you have to add a metadata level to reason about the fact that this integer is even, or that you are passing an integer that must be an index into an array; there’s no way to express that. So I think there are many, many cases where a type system is not powerful enough to express your real intent, and then as a developer you use other things like assertions or other reasoning, but there are no real checks for that. The same holds for monads and effects: you don’t have to have that enforced in your language, it can be in your mind or on the whiteboard.

Yes, that is an interesting viewpoint. If you look at object-oriented programming and functional programming, in my view they are not that different. In functional programming, as you say, you think in terms of functions, and these functions often, if it’s not purely functional programming, have some kind of effect, some dirt associated with them. Now, what object orientation does is something very similar, but it groups the functions that have an effect on some state together into an object. It’s kind of the same idea: you want to encapsulate the effects that you have in a smaller unit that you then can reason about. In an object the only way you can change its instance state is by calling methods on it; that’s the Holy Grail, and that’s why people say you shouldn’t have static methods and so on, because then you can reason about changes to the state by looking at all the methods. That’s similar to what functional programmers try to do: they are also trying to reason about the effects, but they do that by looking at the monads. So everybody is trying to do the same thing: you’re trying not to have these things spread out over your whole system, you want to be able to say, this is the part of my code where I care about this thing, and then I can just glue things together without really having to look inside of things. And this is why I was saying that maybe you shouldn’t want to understand how the IO monad is implemented, just as when you are using objects it's none of your business how that object is implemented; you should just use the methods that it provides.

Barry: Category theorists don’t like to look inside things.

That’s the whole point. Once again, for me that’s the essence of programming against an interface: it’s a promise saying this is the contract, and I can have different implementations, but you shouldn’t care because you only care about the interface. And I think that’s the only way you can make progress, because you have to abstract from the non-essential details; that’s what abstraction means, leave out the non-interesting details so that only the essence is captured in the interface. The same with monads, because, and that’s another thing that I didn’t explain yet, if you look at the IO monad it’s not the definition of the IO monad that is interesting, it’s the operations that have, as you say, IO in the arrow that are interesting. For example, the IO monad comes with ReadLine and WriteLine, and these are so-called non-proper morphisms, which are really the methods, in the object-oriented sense, of the monad. Every monad comes with a set of operations that are particular to that monad, and those are the interesting stuff, not how the monad itself is represented or implemented. Just like with an object: I give you a button, it has “click” or whatever; who cares how this button is implemented? Sometimes you want to know, but in most cases you don’t.

I think this is the funny thing with the lingo: it’s called a non-proper morphism because it’s not one of the standard operations of a monad. A monad is like a generic type with a few operations, typically called return and bind in Haskell; those are the proper morphisms. And then there are the non-proper morphisms, which are the ones that are specific to that monad. Think of inheriting from a class: the proper methods are the ones of the base class, and the non-proper methods are the methods of the derived class. So non-proper means derived methods and proper means base methods, nothing more than that. But it sounds cooler, non-proper morphisms.
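The proper/non-proper split can be sketched with the logging (Writer) monad as the concrete instance; `unit` stands in for Haskell’s return, `bind` for `>>=`, and `tell` is this monad’s own non-proper morphism (the names and the pair encoding are illustrative, not a standard library).

```python
def unit(x):                  # proper: Haskell's return, wrap a pure value
    return (x, [])            # a value paired with an empty log

def bind(m, f):               # proper: Haskell's >>=, sequence and merge effects
    x, log1 = m
    y, log2 = f(x)
    return y, log1 + log2

def tell(msg):                # non-proper: specific to the logging monad
    return (None, [msg])

prog = bind(unit(2),
       lambda x: bind(tell(f"got {x}"),
       lambda _: unit(x * 10)))
print(prog)   # (20, ['got 2'])
```

`unit` and `bind` are the base-class contract every monad satisfies; only `tell` knows this monad is about logging, just as only the derived class knows about clicking a button.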

Barry: I’m used to variations in terminology like that, but we did have a speaker here, an interviewee a few days ago who said there are things about the pure functional world that aren’t quite as rosy as they are advertised to be, perhaps monads are a way of doing side effects without making it look like burying side effects under the rug or something like that, I just want to get your opinion about that before we finish.

I wouldn’t phrase it like that at all. So, it’s exactly not shoving it under the rug. What you do in a pure functional language is you have to be explicit about everything, so you cannot shove anything under the rug, if you have any side effects, you have to be explicit about them and you have to deal with them.

And then the way to make them explicit is by building them into the type system, such that you can still reason about your program. In an imperative language, that’s where you can kind of cheat, because you can have effects but you don’t have to talk about them. If I give you a Java method that takes an int and returns a bool, it looks as if it’s a decent thing, but it can do all kinds of things: it can throw exceptions, write to files, and what effects it has is not apparent in the type; it’s implicit. In the Haskell world, to start off you cannot do anything except pure mathematical functions, and when you want to do more, you have to be explicit about it in the type, by putting this annotation in the type to indicate that this thing is not a regular arrow but does something extra. I think that’s a good thing. Now, the real dirty secret is, I think, that when you are programming that is often very painful, because we like to cheat, or we have to cheat, and sometimes it’s ok, you say “Look, it’s ok, I know what I am doing”, and in Haskell that is harder. So for me Haskell is a great language as a way of thinking, but I prefer to program in a language where I can cheat.
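The contrast can be sketched as two versions of the same made-up function: one whose signature hides its effect, and one that pushes the possible failure into the return value so the “type” admits what can happen (the tagged-tuple Result shape is just an illustration).

```python
def looks_pure(n: int) -> bool:
    # The signature says int -> bool, but nothing stops this from
    # throwing, writing files, and so on: the effect is implicit.
    if n < 0:
        raise ValueError("negative")
    return n % 2 == 0

def honest(n: int):
    # Haskell-style honesty: the failure is part of the returned value,
    # so callers can see, and must handle, what can go wrong.
    if n < 0:
        return ("error", "negative")
    return ("ok", n % 2 == 0)

print(honest(4))    # ('ok', True)
print(honest(-1))   # ('error', 'negative')
```

The first version is the imperative cheat the text describes; the second is the explicit-effects discipline, with the convenience cost that every caller now has to unpack the result.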

My favorite is still C#, because I was also involved in the design, so it’s molded to what I like, but it also has a lot of advanced features like async support and so on. But I must say Java is catching up with Java 8 and closures; Scala of course is a super interesting language; and even PHP, I must say, is quite interesting.

F# is kind of a special case. It’s a very interesting language in the sense that it puts a lot of these concepts into the .NET world, so that’s another one, but I could go on; in some sense, as a language guy, I love every language.

OCaml and F# are kind of similar; even JavaScript, no matter how many bad parts it has, is interesting; TypeScript, Clojure. I think it’s a great time to be a programmer because there are so many different languages, I cannot name them all: Python, Ruby, whatever, and each of them has its interesting aspects, but you feel most comfortable in one of them. But if you look at a lot of these languages, they’re not that different, so if you’re a really good programmer you can switch languages, and I think that’s what developers should try to do: don’t get attached to one language. I am too attached to C#, I really need to detach myself from it because I am super comfortable in it, but if you look at it from a little bit of a distance, most languages are quite similar, unless you go to, say, Prolog or Datalog or Haskell. The rest, F#, Scala, C#, JavaScript, PHP, are all languages that have objects, mutable state and closures; Clojure is maybe somewhere between Haskell and that, so it might be interesting to make a map of how they relate and how they cluster. But just pick whatever and try to use as many languages as you can, that would be my advice, and don’t get attached; don’t be like me and get attached to something, just be free.

All Turing complete languages are equal once you see it
by
Eric Aguiar

All languages are basically equal and the same if they are Turing-complete. 2+2 always equals 4 and the machine always does the same thing at its very lowest level, so regardless of how high-level the syntax lets you think, the underlying VM that does the actual execution boils down to the same basic concepts of how things work. Languages have strengths and weaknesses depending on their focus or area of interest, is all. But if a language is Turing-complete, you can do anything in it that you can do in any other language, and you'll see it once you learn enough of them to make the connection.

Re: All Turing complete languages are equal once you see it
by
Faisal Waris

All languages can solve the same problems, but the degree of expressiveness, safety and performance differs markedly. Usually there is a trade-off, and so one should pick the language suited to the job at hand.