There has to be another way: my entry in the FP vs. OO debate

As articles go it was mostly fine. As most articles do it made a few nice points and a few points that I wasn’t awfully convinced by. (If you ask me the increasing acceptance of lambdas, or anonymous functions, in languages that are otherwise “purely” object-oriented is by no means indicative of a failure on the part of the OO philosophy, especially when you consider that closures and objects are basically two sides of the same coin, and that in many languages [see Java] lambdas are really just syntactic sugar over the instantiation of an object which might otherwise be more cumbersome. I agree though that composition is better in almost all practical respects than inheritance.)
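The "two sides of the same coin" point can be made concrete in almost any language. Here is a minimal sketch in Rust (the names `CounterObj` and `Counter` are mine, purely for illustration): the same stateful behavior expressed once as an object with a field and a method, and once as a closure capturing a variable.

```rust
// The same behavior expressed both ways: as an object holding state
// in a field, and as a closure capturing the same state.
trait Counter {
    fn next(&mut self) -> i32;
}

struct CounterObj {
    n: i32,
}

impl Counter for CounterObj {
    fn next(&mut self) -> i32 {
        self.n += 1;
        self.n
    }
}

fn main() {
    // Object: state lives in a struct field, behavior in a method.
    let mut obj = CounterObj { n: 0 };

    // Closure: the same state lives in a captured variable.
    let mut n = 0;
    let mut clo = move || {
        n += 1;
        n
    };

    // Both produce the same sequence: 1, 2, ...
    assert_eq!((obj.next(), obj.next()), (clo(), clo()));
}
```

A closure is state plus one operation; an object is state plus several. Which one a language treats as primitive is largely a matter of taste.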

Having said all of that, what I’m writing here won’t really be about that. It’s going to be about a section in that article that I find somewhat more troublesome, entitled “the multicore revolution”. The crux of it is that object-oriented programming has a certain focus on mutable state which leads to big problems in a multi-threaded context. An uncontroversial opinion, this one. Threading is hard, threading with shared memory is really hard, and threading with shared memory with mutability is both harder and less fulfilling than the Japanese version of Super Mario Bros. 2. I don’t aim to disagree with this and I don’t think anyone else does either.

What I disagree with is the conclusion this observation led the author to. Quoth the article:

On the other hand, forbidding mutable state and other such side effects without changing anything else is crippling. You need a whole new bag of tricks. Persistent data structures, so you don’t have to copy the state of your entire application for every little update.

Mkay, right with you. We need some help outside of the traditional OO toolset.

Standard higher order functions such as map, fold, and filter, so you have an alternative to most of your for and while loops without resorting to the recursive nuclear option. Algebraic data types, so you can have functions that fail without resorting to the harder to tame exceptions. Monads, so you can chain such computations without having to write annoying chains of if-else expressions…

Oh, my god. What just happened? I was just trying to read this article about parallel programming and then Simon Peyton Jones threw up all over my monitor. The prose just turned into a list of Haskell’s features. The m-word was spoken.

See, the article falls into the same sort of trap that I’ve seen hundreds of articles about functional programming fall into. It’s the Haskell Trap.

The Haskell Trap: the tendency of some to take it for granted that FP is the opposite of, and the only alternative to, OO, and that Haskell in particular is exactly equivalent to FP.

You can find more examples. One that comes to mind immediately is this one. Now, Erik Meijer is a super-smart guy. He is far more experienced and knowledgeable on this topic than I am, and than I will be for a very long time. But I’m not convinced by this article that Erik didn’t fall into the Haskell Trap either. He brings up a slew of legitimate qualms about mutable state and OO (they make compiler optimizations more difficult, complicated constructor logic can have surprising effects, and so on and so forth) before arriving at a somewhat startling conclusion:

Instead, developers should seriously consider a completely fundamentalist option as well: embrace pure lazy functional programming with all effects explicitly surfaced in the type system using monads.

Ack. Is there really no other choice? Do I have to program functionally? Why on earth must everything be lazy; didn’t we agree that wasn’t the most sensible default? And why are you forcing me to think about monads without my consent? I should make it known here that there are other solutions to implementing side effects in a pure functional language. Uniqueness types are one of them, and they’re actually really intuitive and neat. Meijer dismisses them without any real consideration, saying you need to have a “Ph.D in theoretical computer science” to understand them. No one tell Meijer that Haskell and its monads have the very same reputation or he might be really upset.
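Uniqueness types (as in the language Clean) guarantee that a value has exactly one live reference, which makes in-place mutation safe even in a pure language. Rust’s ownership and move semantics are close in spirit, so a rough sketch is possible there; this is my own illustration, not Clean itself, and the `World` type is hypothetical:

```rust
// A "world"-like value that must be threaded through linearly:
// each operation consumes it and hands back the updated one.
struct World {
    log: Vec<String>,
}

fn print_line(mut w: World, msg: &str) -> World {
    // Safe to mutate in place: we hold the only copy of `w`.
    w.log.push(msg.to_string());
    w
}

fn main() {
    let w = World { log: Vec::new() };
    let w = print_line(w, "hello");
    let w = print_line(w, "world");
    // Touching a stale, already-moved `w` would be a compile error,
    // which is essentially the guarantee uniqueness types provide.
    assert_eq!(w.log, vec!["hello", "world"]);
}
```

The intuition really is simple: if only one reference to the world exists at a time, updating it destructively is indistinguishable from returning a fresh copy.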

Anyway, the Haskell Trap itself is sort of beside the point. Here’s the point: while these criticisms of OO and of unchecked mutable state in general are valid, and Haskell provides an adequate solution to many of those problems, Haskell shouldn’t be taken as the be-all, end-all of sensible computation. Haskell isn’t the only solution. It can’t be. There has to be another way.

Coming out

Okay, so, um, this brings me to a somewhat uncomfortable part in the essay. I knew this was coming but it’s just… kind of impossible to prepare for, you know? I want to thank all of you for being so supportive of me. So, um, it’s time for me to, uh, come out.

I don’t like Haskell.

… there, I said it. That’s it. It’s out in the open now and there’s nothing I can do about it. Thanks again for bearing with me.

So, yeah, I don’t like Haskell. There it is. It’s just not my cup of tea. There are a lot of reasons for it, and I don’t want to get too deep into them right now. Basically, the IO type seems to me like a really clumsy solution to the problem of state, and monads in general rub me the wrong way. do blocks really bum me out, and, maybe most importantly, the things the Haskell community seems to value (abstraction, burritos) are not the things I personally find interesting in programming, nor the features I most want from “unconventional” programming languages.

But it’s not just that. Honestly I don’t like functional programming that much, either. There are definitely some advantages it confers and it makes the expression of certain ideas really natural and easy, but that’s by no means a universal thing for me. If I’m trying to express an operation on a recursive data structure that isn’t just a simple map, filter, or fold, things can get kind of messy. Often I have to contort my ideas to fit the model and express things in a recursive fashion that doesn’t correspond well to my intuitive understanding of the problem. To define a function I often have to locally define a tail recursive helper function with confusing parameters. Another quote from someone smarter than me:

But when it comes down to it, I am an object-oriented programmer, and Haskell is not an object-oriented language. Yes, you can do object-oriented style if you want to, just like you can do objects in C. But it’s just too painful. I want to write object.methodName, not ModuleName.objectTypeMethodName object. I want to be able to write lots of small classes that encapsulate complex functionality in simple interfaces – without having to place each one in a whole separate module and ending up with thousands of source files.

I would agree with all that. (By the way. Don’t try to tell me I just need more experience with functional programming. I’ve got plenty; my first language was a functional one.)
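To make the earlier complaint concrete, the “locally defined tail-recursive helper function with confusing parameters” shape looks something like this; a hypothetical example of mine, sketched in Rust so the types are explicit:

```rust
// A cons-list-style recursive data structure.
enum List {
    Nil,
    Cons(i32, Box<List>),
}

fn length(l: &List) -> u32 {
    // The contortion: a local helper whose extra `acc` parameter exists
    // only to make the recursion tail-call friendly. The accumulator is
    // bookkeeping for the compiler, not part of the problem being solved.
    fn go(l: &List, acc: u32) -> u32 {
        match l {
            List::Nil => acc,
            List::Cons(_, rest) => go(rest, acc + 1),
        }
    }
    go(l, 0)
}

fn main() {
    let l = List::Cons(1, Box::new(List::Cons(2, Box::new(List::Nil))));
    assert_eq!(length(&l), 2);
}
```

An imperative loop with a counter says the same thing more directly; the recursive version is the idea translated to satisfy the paradigm.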

But here’s the interesting thing: I kind of just realized this. I used to think I loved functional programming, languages like Standard ML. But the more I used these languages, the more I realized how far off I was. I don’t love Standard ML itself; in fact, at its worst, I find it cumbersome and unintuitive. But there are things about Standard ML I do love: algebraic data types, the focus on immutability and persistence, pattern matching, the strong type system, the simple, sensible semantics. And here’s the thing: none of that stuff I love has anything to do with functional programming. It could all happily exist in a language that isn’t functional, or purely functional, or almost purely functional, or whatever. You could lift up all of those features and plop them into a language with more traditionally imperative semantics and everything would be peachy.
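Rust is one existence proof of that claim: ML-style algebraic data types and pattern matching, with immutability as the default, dropped into an otherwise imperative language. A small sketch (the `Shape` type is my own example):

```rust
// An ML-style algebraic data type...
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // ...consumed by ML-style pattern matching...
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    // ...inside an ordinary imperative program: bindings are immutable
    // by default, and mutation is opted into explicitly with `mut`.
    let shapes = vec![
        Shape::Rect { w: 2.0, h: 3.0 },
        Shape::Circle { radius: 1.0 },
    ];
    let mut total = 0.0;
    for s in &shapes {
        total += area(s);
    }
    assert!((total - (6.0 + std::f64::consts::PI)).abs() < 1e-9);
}
```

Nothing about the `enum`, the `match`, or the default immutability required giving up loops and assignment.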

Same with Haskell. A language can be “object-oriented”, or procedural, or C-like, or whatever, while still having a strong type system that captures effects, informs the programmer about intent and behavior, and allows the compiler to make certain optimizations. This feature doesn’t have to be specific to functional languages, and in fact it isn’t tied to functional languages at all.

You know what? I get it. My criticisms of Haskell are shallow. These are lame reasons to dislike a language, on the whole. Don’t get me wrong, I respect Haskell. I respect Haskell like I respect PepsiCo (a company which has achieved admirable success despite being, in my opinion, wrong-headed about some things, chiefly that I view it as inferior to a certain other soda manufacturer). Its advantages are obvious and great, it is staunchly principled, and undoubtedly mind-bending. But it’s not the kind of language I want. Now, I could get around the fact that I don’t much care for Haskell and learn to write idiomatic Haskell code rather efficiently (in the same way I did with Python), but that’s just the thing: I shouldn’t have to. Haskell isn’t the only solution to this problem. There has to be another way, because if there isn’t I’m going to be seriously bummed.

What does the alternative look like? I’m not really sure yet. I’ve been thinking about it. There’s an effect system, a capability system. You should be able to glean the behavior of a function from its type. It should be a compile-time error if code tries to perform an effect it doesn’t have the capability for; you should be able to pass capabilities through method calls and, in a multithreaded context, between threads. Compiler optimizations should understand the effects and how they interact, and be able to judge sensibly which transformations are legal given the effects associated with each function. Effects should interact and interplay in a sane way (e.g. if there’s a mutable reference type, then any number of “read” capabilities can coexist, since any number of threads can read a reference at the same time, but no “read” capability can coexist with a “write” capability, since you don’t want anyone else reading a reference while you’re writing to it). Effects and capabilities should be extensible so that users can declare their own effects and have the compiler treat them correctly.
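That particular read/write rule is, not coincidentally, exactly what Rust’s borrow checker enforces: any number of shared references, or one exclusive one, never both. This is only that one rule, not the full extensible effect system sketched above, but it shows the rule is checkable at compile time:

```rust
fn main() {
    let mut value = vec![1, 2, 3];

    // Any number of "read" capabilities may coexist:
    let r1 = &value;
    let r2 = &value;
    assert_eq!(r1.len() + r2.len(), 6);

    // A "write" capability is exclusive: while it is live,
    // no shared references to `value` may be used.
    let w = &mut value;
    w.push(4);

    // Using `r1` here would be rejected at compile time: you cannot
    // borrow `value` as mutable while it is also borrowed as immutable.
    assert_eq!(value.len(), 4);
}
```

Generalizing that from “memory access” to arbitrary user-declared effects is the open part of the idea.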

All of this is really abstract; I’m limited by a very small number of existing implementations and a comparatively small amount of existing research (all of which looks pretty dense). But it’s just an idea; people will have others. That’s the point: Haskell’s only one way of many to do this.


4 responses to “There has to be another way: my entry in the FP vs. OO debate”

While not quite as sophisticated as you’d like in terms of effect typing, it sounds like you might like Rust.

Although I have to disagree that “object.Method” is preferable to “Method object”. Even conceptually, it’s nonsensical for single-receiver objects to support binary methods. The preferred-receiver paradigm breaks down too quickly, because selecting overloads based on multiple arguments is often necessary.

That said, the syntactic convenience of being able to place the verb after the subject is very nice, as C#’s LINQ extensions demonstrate. But Haskell can do this as well with its `function` quotation syntax.

I think Ricky isn’t complaining about “Method object”, but Haskell’s terrible record syntax. If you have two records with a “method” field, the compiler can’t tell them apart, so you end up either prefixing every field with the name of its containing type, or every record goes in its own module.

I think you are spot on about the “Haskell Trap”. It’s not like we have discovered all possible programming language models already. I also find that the forced tail-recursive solutions that functional programming may impose are often not intuitive.