Last time, I looked into defining equality for various conditions on Wai.Request, but concluded that this wasn't what I was looking for. We did, however, establish how to perform casts and use polymorphic lists in a fashion that's quite OO. Now I'm planning to drive right off-road and try a bit of type-level reasoning.

Let's start by simplifying the problem we had last time. Let's stop worrying about complex record types and just deal with primitive types. We'll restrict our attention to equality and comparison conditions on those types. Let's start by setting up some machinery.

-- I could make all functors of an invertable invertable, but I'm not
-- sure that would actually be a good idea.
instance (Invertable a) => Invertable (Maybe a) where
  invert = fmap invert

-- Util
infixl 3 <|!>

(<|!>) :: Maybe a -> a -> a
(<|!>) = flip fromMaybe
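For reference, the Invertable class the snippet above assumes was never shown; it's presumably just a single invert method. Here's a minimal self-contained sketch (the Cond type and its instance are my own illustration, not from the original):

```haskell
-- Hypothetical shape of the Invertable class assumed above.
class Invertable a where
  invert :: a -> a

-- Restating the Maybe instance from the text so this compiles standalone.
instance Invertable a => Invertable (Maybe a) where
  invert = fmap invert

-- An illustrative instance: an equality condition inverts to inequality.
data Cond = IsEq Int | NotEq Int deriving (Eq, Show)

instance Invertable Cond where
  invert (IsEq x)  = NotEq x
  invert (NotEq x) = IsEq x
```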

Now, we're going to have a Condition typeclass, and at the very least we're going to have instances for "always true/false", "test for equality/inequality" and "compare against value". And here's the important bit: we're going to want to analyze the relationship between them, even if they're not the same type.

Young Rankenstein

Where we'd return Nothing if c didn't know how to analyze its relationship with d. Using the cast mechanism we've already seen, you can definitely implement this. And indeed, I did. (I tried a more symmetric approach with some type magic, but ultimately couldn't get it to fly.)
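The original listing is elided, but the cast-based shape is roughly this (Implication, EqInt and the instance are my own hypothetical names, kept to the minimum needed to show the idea):

```haskell
import Data.Typeable (Typeable, cast)

data Implication = Same | Excludes deriving (Eq, Show)

-- analyze returns Nothing when c has no idea how it relates to d.
class Typeable c => Analyzeable c where
  analyze :: Analyzeable d => c -> d -> Maybe Implication

newtype EqInt = EqInt Int deriving (Eq, Show)

instance Analyzeable EqInt where
  analyze (EqInt x) d = case cast d of
    -- Same type: we can actually reason about the two conditions.
    Just (EqInt y) -> Just (if x == y then Same else Excludes)
    -- Different type: we don't know how to analyze the relationship.
    Nothing        -> Nothing
```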

Let's revise the definition a little so that we can actually use it to test values. But we're going to have to introduce a second type, v, the value under test.

Now, there's actually a serious problem with this code: it doesn't even compile! The problem is with the vs in analyze. It can't determine that they're the same. This I actually find weird, given that I've specified that a and b share a v, but it's solvable.

First, I want to talk a bit about what the rank 2 typeclass actually is. It specifies a set of functions that can be called with an a and a v, but doesn't restrict the a or the v in any way. So, all it's really giving you is a relationship between the two types. And analyze never uses v, so it can't deduce anything about it.

Now, there's an extension called FunctionalDependencies and another called TypeFamilies that'd resolve this, but actually all we need to do is take the test method back out.

That works, doesn't use any more extensions (although Lord knows I've been playing extension pokemon recently), and also leaves us with the possibility of using different vs with the same a, even if in this particular case I can't see why we'd wish to. For that matter, we could remove the interdependency between the two.

class Condition4 v a where
  test :: a -> v -> Bool
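To make that concrete, here's a hypothetical instance of the two-parameter class (ValueEquality is my own name for illustration; the multi-parameter class needs a couple of extensions):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}

class Condition4 v a where
  test :: a -> v -> Bool

-- An equality condition: holds a value and tests others against it.
newtype ValueEquality v = ValueEquality v

instance Eq v => Condition4 v (ValueEquality v) where
  test (ValueEquality x) v = x == v
```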

I found the process I went through here really interesting: we actually ended up with a better design and separated our concerns as a consequence of the type system complaining about the functions we actually implemented.

I'm using the lens package here, although to be honest I'm really only using it to start learning it. The actual practical benefits of it in the code I've written are very small, but I'm hoping to slowly pick up more aspects. In fact, of the code I've written so far this is the only bit that actually shows an improvement.

Breaking it down, we're saying that if a condition is invertable, a Value using that condition is invertable by inverting the condition. This is pretty elegant, but it's going to take me a fair while to get my head around lens in general. (There's been loose talk of a lens NICTA-style course; that would be awesome.)

Makes sense. Doesn't compile. The reason's a bit weird: it can't figure out exactly what to cast y to. So let's try this:

where y2 = (cast y :: Maybe (ValueEquality a))

Still doesn't work. Here, the error message isn't particularly helpful (unlike quite a few that just point you directly to the extension that you might need). The problem is actually that the a in the y2 expression isn't the same as the a in the instance declaration. I don't really understand why that decision was made (the explanation probably features the word "parametricity"), but you can reverse it by adding in another extension:

{-# LANGUAGE ScopedTypeVariables #-}
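Here's a self-contained demonstration of the fix (the function and types are my own stand-ins, not the original instance): the explicit forall makes a scope over the body, so the annotation on the cast refers to the same a as the signature.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import Data.Typeable (Typeable, cast)

newtype ValueEquality a = ValueEquality a deriving (Eq, Show)

-- Without ScopedTypeVariables (and the forall), the `a` in the cast
-- annotation would be a fresh variable, and inference would fail.
eqAcrossTypes :: forall a b. (Typeable a, Typeable b, Eq a)
              => ValueEquality a -> b -> Bool
eqAcrossTypes x y = case cast y :: Maybe (ValueEquality a) of
  Just y2 -> x == y2
  Nothing -> False
```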

QuickCheck Yourself

There's a plethora of things we could test, but let's start with this one: What actually is the relationship between the Analyzeable version of condition and the Testable version of condition? Well, the answer is approximately that given two types that are both Analyzeable and Testable, we should be able to pick a set of vs such that we can deduce the behaviour of one from the other.

It would be lovely if we could achieve this through the type system, but I think that would be a serious reach, and even if it was possible it's doubtful it would be readable. So instead let's try using QuickCheck, the original property testing tool.

Aside: conversely, there should be no value of v where the behaviour of the two contradict one another. However, this latter condition is kind of hard to demonstrate using any example-based system. For that, you really do want Idris.

This introduces a new target called test. I've not used QuickCheck before, so this is quite interesting. tasty appears to be the standard test-running infrastructure. Note that since the test code is actually a separate executable, you need to add your own package as a dependency.

Stuck In The Middle With You

So, let's pick five distinct values, a, b, c, d and e. We'll put conditions at b and d and then test all five values in pairs. We can then read out from the set of pairs what the correct relationship between the two conditions is.

(There may be a better way of instantiating propDeduce with different types, but this definitely works.)
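A hand-rolled sketch of the five-point trick (the original propDeduce is elided; here I fix the sample points as 1..5 with hypothetical conditions at b = 2 and d = 4, where QuickCheck would instead generate them):

```haskell
-- Two comparison conditions, placed at b = 2 and d = 4.
condB, condD :: Int -> Bool
condB = (> 2)
condD = (> 4)

-- condD implies condB iff every sampled point satisfying condD also
-- satisfies condB.
dImpliesB :: Bool
dImpliesB = and [condB v | v <- [1 .. 5], condD v]

-- The converse should fail: 3 satisfies condB but not condD.
bImpliesD :: Bool
bImpliesD = and [condD v | v <- [1 .. 5], condB v]
```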

In practice, what now happens is you spend a large amount of time actually fixing your code and your tests. What you're seeing above is the output of that process. I learned a few things along the way.

Although QuickCheck is good at telling you there's a problem, it's got no facilities at all for telling you why.

Having relatively complex types makes it quite hard to reproduce a test in the repl. Conceivably the tooling for this could be improved.

You need to split your code up into chunks that are testable in the repl. This is a lesson Clojure taught me as well, but having access to an excellent debugger in other languages keeps unteaching it.

This is getting really long: I've skipped over the entire tasty code and the entire implementation.

Review

So the Condition design looks more appropriate to the aim of actually allowing us to optimize our tests, and Haskell's led us to a typeclass design better than the original. There are, however, certain problems. For instance, the design I've outlined here is incapable of spotting that "> 2" is the same as ">= 3" in the Integer domain. Pretty much the only good solution to this is to require stronger conditions than just Eq and Ord for condition values, ones which allow you to perform these analyses. I'm not very inclined to do that, and this problem doesn't ruin my intended use. However, it highlights again just how challenging it is to write something truly polymorphic and correct.

It's pretty easy to see how you can extend this into projections as well. However, in practice it gets pretty tricky, because you need to do an order 2 cast. Thankfully, I got a good answer on StackOverflow of exactly how to achieve that. Separating out the concept of the condition from the projection also seems like a strong idea. Ultimately, though, I don't really like the way this is going. Casts work, and Maybe makes them safe, but the design feels like I'm circumventing the type system rather than using it.

TL;DR I continue trying to implement a routing library, but instead end up learning about Typeable, writing about orphan instances, reading and (so far) failing to understand type-magic and sending my first Haskell PR.

I remember when I was starting Clojure, one of the big catchphrases was that everything was opt-in. A type system, inheritance, multiple dispatch, &c. On the other hand, there were actually plenty of things that weren't opt-in: Java itself, polymorphism, reflection and so on.

Haskell is another opt-in language. The basic type system and language is a requisite, but there's still a phenomenal number of things to opt into. Equality is opt-in, Hashable is opt-in and, as we saw in the previous article, polymorphism through existential types is opt-in. Next, we're going to see opt-in type casts, and hopefully you'll see how they're better than what you can achieve in Java or C#.

So, the question I asked last time was, how can I tell if two RequestConditionBoxes are equal? To do that, we're going to want to make RequestConditions themselves implement equality.

(As an aside: the whole of the functionality of the last post might have been better implemented using good old functions or possibly the reader monad. However, I always wanted the conditions to be instances of Eq and Show. That's not going to be possible with that approach.)

Oops, that isn't going to work: you can't derive Show on a GADT. So delete it. We'll need to implement Eq and Show for RequestConditionBox. (I'm going to skip Show.)

instance Eq RequestConditionBox where
  (==) (RC a) (RC b) = a == b

Small problem: a and b are different types. And Eq only allows you to test that two members of the same type are equal. We need some way of checking that the two types are equal. Now, you can test for type equality in a type precondition but I can't see how I could make that work. We need something more like

testEqual :: (Eq a, Eq b) => a -> b -> Bool

Only right now we have no idea how to implement it.

He's My Typeable

George Pollard pointed me to an experimental class called Typeable. As I alluded to earlier, it's opt-in, although I think the opt-in nature is more to do with the fact that it's not standardized yet than that there are types that can't logically have a typeclass instance.

Typeable looks like a pretty unpromising typeclass:

typeRep# :: Proxy# a -> TypeRep

Actually, it's more than just unpromising, it looks positively hostile. What are those hashes? Well, it turns out that hash is a valid character in an identifier if you enable the MagicHash extension. As a convention, GHC uses it to represent unboxed types. Unboxed means exactly the same thing as it does in C# and Java: something that doesn't have a garbage collected pointer around it. This is a very deep rabbit-hole that I'm just going to carefully step around right now.

Actually, I'm going to skip pretty much everything except to notice that Data.Typeable exports a rather useful function called cast.

cast :: (Typeable a, Typeable b) => a -> Maybe b

Yep, that's exactly what as does in C#. I'll skip over the implementation, because it's slightly scary and I'd need to get into unsafeCoerce. One thing I can't tell is if this code is actually run at runtime or whether it's possible for the compiler to optimize it out. After all, the types of a and b are known at compile time.

With that, we can actually test if two values of different types are equal:
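The original listing is elided, but the implementation presumably looked something like this: cast the right-hand value to the left-hand type, and treat a failed cast as inequality.

```haskell
import Data.Typeable (Typeable, cast)

-- Equality across types: if the cast fails, the types differ, so the
-- values are simply not equal.
testEqual :: (Typeable a, Typeable b, Eq a) => a -> b -> Bool
testEqual x y = case cast y of
  Just y2 -> x == y2
  Nothing -> False
```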

Orphan Black

How do we implement it? Well, we don't. Typeable is special. Not only is it derivable, the compiler requires you use the deriving version. And that needs an extension:

-- Put this up at the top
{-# LANGUAGE DeriveDataTypeable #-}

newtype And rc = And [rc] deriving Typeable

Unfortunately, H.HttpVersion doesn't implement Typeable. Luckily we can implement it ourselves. But, you guessed it, we need another extension:

-- Put this up at the top
{-# LANGUAGE StandaloneDeriving #-}

deriving instance Typeable H.HttpVersion

We're probably alright here, but what we've done is, in general, ridiculously dangerous. We've implemented an instance in a library that is neither the library that declares the typeclass, nor the library that declares the type. This is known as an orphan instance and will have seasoned Haskellers gathering with torches and pitchforks around your codebase. The reason for this is that, while typeclasses provide the power of Ruby's mixins, orphan instances provide the problems. (They call it "incoherence", and they mean it.)

While we're on the subject, you'll probably have already noticed that when you add projects into your cabal file, you pull in the world, Maven-style. This is pretty horrific, and the reason for it is orphan instances. For instance, the functionality of the semigroups package looks pretty small: it just exposes a couple of typeclasses. But when you take a look at what has an instance of Semigroup, you'll see the whole list of types that the semigroups package needs just to compile. The semigroups package itself has compile-time flags to try to ameliorate this situation, but the truth is that it's just too much work (at least given cabal in its current design) to enforce small dependency lists and coherence.

Long story short, it'd probably be best to just expose Typeable from the library, so I've sent a pull request. (As everyone knows, open source software collaboration is a variable experience. But even at my beginner level, it is possible to make small contributions.)

The Equalizer

Remember last time I mentioned that we could destructure existential types? Now we can actually use this.

Well, that's demonstrated that Eq works. But it also demonstrates something else: Eq isn't actually what we wanted in the first place. Really we want to be unifying to [RC "GET",RC HTTP/1.1]. To do that, we're going to have to rip up everything we've done so far and start again.

FOOTNOTE: Elise Huard pointed me to the AdvancedOverlap page on the wiki, which details techniques for branching your code by typeclass rather than type. In practice, I decided to just make everything an instance of Eq, which isn't so much of a problem given the problem domain I'm working within.

TL;DR I start trying to write a library and get sidetracked into learning about Haskell's type system.

So last time, I talked about Wai and how you could use it directly. However, if you're going to do that, you'll need a routing library. So, let's talk about how we could build one up. One of the first things you'd need to do is to provide simple boolean conditions on the request object.

It turns out that this raises enough questions for someone at my level to fill more than one blog post.

So, how should we define conditions? Well, the Clojure model of keyword and string isn't going to work here, because the Wai.Request object is heavily strongly typed. So how about instead we just use the expected values and deduce the key from the type?

So, we're going to want to implement the same method for several different types. There are several different ways of doing that:

* Create a union/enum class. This is a good approach, but not extensible.
* Create a typeclass, which is extensible.
* Create a type family, which is also extensible, but which I don't really understand.

You Can't Buy Class

With that in mind, let's create our first typeclass!

class RequestCondition rc where
  isMatch :: Wai.Request -> rc -> Bool

So, in English this says "If the type rc is a RequestCondition then there is a method isMatch which takes a Wai.Request and an rc and returns a Bool." This is pretty interesting from an OO standpoint. The OO representation would look like rc.isMatch(request). A Clojure protocol would change this to (isMatch rc request). In practice, it doesn't matter: what's happening is that there's dynamic dispatch going on on the first parameter.

In the Haskell case, there's no dynamic dispatch in sight and the first parameter isn't special. isMatch on HTTPVersion and isMatch on Method are different functions.

We can now implement the RequestCondition for some obvious data types.

So, here we've said "calling isMatch with a HttpVersion as a parameter calls (>=) . W.httpVersion", i.e. checks the request is using at least the version specified. We'd probably need a more sophisticated way of dealing with this if we were writing a real system.

This is much the same, with one wrinkle: H.Method isn't actually a type. It's a type synonym. In C++ you'd introduce one with typedef, in C# with using. Haskell, because it likes to confuse you, introduces something that is not a type with the keyword type. If you look up Method on Hackage you see:

type Method = ByteString

You might wonder why this matters. The answer is that the Haskell standard doesn't allow you to declare instances for synonyms. You can understand why when you realize that you might have multiple synonyms for ByteString and shoot yourself in the foot. However, for now I'm going to assume we know what we're doing and just switch on TypeSynonymInstances in the header.

We'd need (a lot) more functionality regarding headers, but let's not worry about that now. However, again this will fail to compile. This time H.Header is a type synonym, but a type synonym for a very specific tuple.

type Header = (CI ByteString, ByteString)

Problem is, Haskell doesn't like you declaring instances for specific tuples either. This time, you need FlexibleInstances to make the compiler error go away. To the best of my knowledge, FlexibleInstances is much less of a heffalump trap than TypeSynonymInstances could be.
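To show the shape of the instances without pulling in the wai package, here's a self-contained sketch where Req (with reqVersion and reqMethod, both my own names) stands in for Wai.Request:

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Hypothetical stand-in for Wai.Request.
data Req = Req { reqVersion :: (Int, Int), reqMethod :: String }

class RequestCondition rc where
  isMatch :: Req -> rc -> Bool

-- The version check: the request version must be at least the one asked for.
instance RequestCondition (Int, Int) where
  isMatch req v = reqVersion req >= v

-- A string-like Method synonym; instances like this are why we need
-- FlexibleInstances (which implies TypeSynonymInstances).
instance RequestCondition String where
  isMatch req m = reqMethod req == m
```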

Under Construction

How about when we've got multiple conditions to apply? Well, if we were writing Java, we'd be calling for a composite pattern right now. Let's declare some types for these.

newtype And rc = MkAnd [rc]
newtype Or rc = MkOr [rc]

I described newtypes back in Fox Goose Corn Haskell. Note that there's no reference to RequestCondition in the declaration. By default, type variables in declarations are completely unbound.

Before we go any further, let's fire up a REPL (if you're in a Haskell project right now you can type cabal repl) and take a look at what that does:

data And rc = MkAnd [rc]
:t MkAnd
MkAnd :: [rc] -> And rc

Yes, MkAnd is just a function. (Not exactly, it can also be used in destructuring, but there isn't a type for that.) Let's try expressing it a different way while we're here:

:set -XGADTs
data And2 rc where MkAnd2 :: [rc] -> And2 rc

(You'll need to hit return twice.) Now we're saying "And2 has one constructor, MkAnd2, which takes a list of rc." The GADTs extension does way more than this, some of which I'll cover later on, and even then I'm only really scratching the surface of what it does. For now I'll just observe how the GADTs extension provides a syntax that is actually more regular than the standard syntax.

Incidentally, I could have called MkAnd just And, but I've avoided doing so for clarity.

Composing Ourselves

With the data types, we can easily write quick functions that implement the RequestCondition typeclass.

The most interesting thing here is that we haven't said that And is an instance of RequestCondition; we're saying that it is one if its type parameter is an instance of RequestCondition. Since data types normally don't have type restrictions themselves, this is the standard mode of operation in Haskell.
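The elided instances presumably looked like the ones below. This sketch is self-contained, so Req and MethodIs are hypothetical stand-ins for Wai.Request and a real condition type:

```haskell
newtype And rc = MkAnd [rc]
newtype Or rc = MkOr [rc]

-- Hypothetical stand-in for Wai.Request.
newtype Req = Req String

class RequestCondition rc where
  isMatch :: Req -> rc -> Bool

-- A toy condition for demonstration purposes.
newtype MethodIs = MethodIs String

instance RequestCondition MethodIs where
  isMatch (Req m) (MethodIs m') = m == m'

-- The composite instances: And/Or are conditions whenever their
-- elements are.
instance RequestCondition rc => RequestCondition (And rc) where
  isMatch req (MkAnd rcs) = all (isMatch req) rcs

instance RequestCondition rc => RequestCondition (Or rc) where
  isMatch req (MkOr rcs) = any (isMatch req) rcs
```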

So, now I can write

Or [H.methodGet, H.methodPost]

and it'll behave. So we're finished. Right? Not even close.

What if we wanted to write

And [H.methodGet, H.http10]

It's going to throw a type error at you, because HTTP methods aren't HTTP versions. If you take a look at the declaration, it says "list of rcs that are instances of RequestCondition", not "list of arbitrary types that are instances of RequestCondition". If you're used to OO (and I have some bad news for you if you're a Clojure programmer: that means you), this makes no sense at all. If you're a C++ programmer, this is going to make a lot more sense. You see, when you do that in Java you're telling Java to call through a vtable to the correct method. Haskell doesn't have pervasive vtables in the same way. If you want one, you're going to have to ask nicely.

Pretty Please and Other Existential Questions

What we want, then, is a function that boxes up a RequestCondition and returns a type that isn't parameterized by the original type of the RequestCondition. What would that function look like?

boxItUp :: (RequestCondition rc) => rc -> RequestConditionBox

Hang on, that looks like the type of a constructor! Except for one really annoying little detail: as I said before, you can't put type restrictions in data declarations.

RequestConditionBox is what's known as an "existential type". As I understand it, that should be interpreted as "RequestConditionBox declares that it boxes a RequestCondition, but declares nothing else". So it's quite like declaring a variable to be an interface.

Since I wrote this, I've learned that existential types are indeed very like interfaces in C#/Java: they are bags of vtables for the relevant type classes. They don't expose their parameterization externally, but destructuring them still gets the original type out. This is bonkers.
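The elided declaration was presumably along these lines. Again this sketch is self-contained, so Req, MethodIs and AlwaysTrue are hypothetical stand-ins; the point is that RC forgets the concrete type but carries the RequestCondition dictionary, so mixed-type lists become possible:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Hypothetical stand-in for Wai.Request.
newtype Req = Req String

class RequestCondition rc where
  isMatch :: Req -> rc -> Bool

-- Two toy conditions of different types.
newtype MethodIs = MethodIs String
data AlwaysTrue = AlwaysTrue

instance RequestCondition MethodIs where
  isMatch (Req m) (MethodIs m') = m == m'

instance RequestCondition AlwaysTrue where
  isMatch _ AlwaysTrue = True

-- The existential box: any rc with a RequestCondition instance fits.
data RequestConditionBox = forall rc. RequestCondition rc => RC rc

-- Destructuring the box recovers the instance, so the box is itself
-- a RequestCondition.
instance RequestCondition RequestConditionBox where
  isMatch req (RC rc) = isMatch req rc
```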

And the compiler will finally accept it. Not quite as pretty as in an OO language where polymorphism is baked into everything, but keeping the character count low isn't everything. We've traded implicit polymorphism for explicit polymorphism.

So we're done, right? Well, we could be, but I want to go further.

The Power of Equality

If you take a look, what we've built looks very much like a toy interpreter (because it is one). What if we wanted a toy compiler instead? In particular, imagine that we really were building a routing library and we had thousands of routes. We might want to only check any given condition once by grouping, for example, all of the GET routes together.

Now, you could leave that to the user of the library, but let's pose the question: given two RequestConditions, both of which may be composite, how do you determine what conditions are common between the two?

One route is to backtrack, and look at HLists. I think that's probably an extremely strong approach, but I really haven't got my head around the type equality proofs-as-types stuff. Another approach is to add some stuff to RequestCondition to track the types in some way. It turns out there's a way to get the compiler to do most of the work here, so I'll talk about that next time.

FOOTNOTE: On the Reddit discussion it was pointed out that RequestConditionBox is an example of the existential type anti-pattern. To summarize: if all you've got is a bunch of methods, why not just have a record with those methods as properties? If all you've got is one method, why not just use a function?

This is a completely valid criticism of the code in this post as a practical approach. However, we wouldn't otherwise have learned about existential types, and we can't make plain functions implement Eq and Show. Implementing Eq is the subject of the next post.

The commenter also added an elegant implementation of the functionality given above in terms of pure functions.

EDIT: Lennart Augustsson clarified that existential types do indeed construct vtables. So "boxing" something in an existential type is very like casting a struct to an interface it implements in C#. I should also clarify that the word bonkers used in the above text was meant as a good thing. :)

Wai Wai Pom Pom Pom

Snap and Yesod are both "big" web frameworks. Of the two, Snap aims to be the smaller. Both have their own web server, templating system and so on. Both are sufficiently complex to need a program to set up a starter project. Both have fairly sophisticated monad stacks to understand. They're also both phenomenal high-performance pieces of engineering.

What this means for a beginner is that you're going to spend as much time trying to get to grips with the framework as you are learning how to use Haskell. If, like me, you're coming from Clojure, they both feel a bit more like Rails than Compojure.

So, are there simpler to understand models out there? Well, the equivalent of Compojure/Sinatra is Scotty. But I found the next level down again more interesting: Wai.

Wai corresponds most closely to Ring or Rack. It was intended to be a common API that Haskell web servers could expose. In practice, it's only Warp that really supports it. However, Warp is a damn fine web server so that shouldn't hold us back too much. Nearly every Ring app runs Jetty and hardly anyone really worries that the "standard" isn't as portable in practice as it is in theory.

Setting up Hello World

To start, create a new directory. For our purposes we'll call it "example". Then we set up a completely blank project.

The "sandbox" and "wget" lines I'll gloss over, but they basically constitute the best way I know to avoid what's known as "cabal hell". And believe me, you don't want cabal hell.

When you run the init command, you'll be asked a whole bunch of questions. The defaults are fine, just make sure you specify you're creating an executable. It'll create a file "example.cabal". You then need to go in and make it look like this:

There are two important edits here. The first is that we specify hs-source-dirs. The default is that the Haskell files are dumped in the project's root directory, which is a lousy default. The other is that we set up our dependencies: wai, warp and http-types. Wai and http-types form our API, Warp our implementation. Note that dependencies are case-sensitive.

You may also be wondering why I haven't specified any version constraints. That's because we've set them up in the cabal.config instead. Welcome to the new world of LTS Haskell.

Writing Hello World

mkdir src
cd src

Now create Main.hs.

{-# LANGUAGE OverloadedStrings #-}

We need this because Wai uses ByteStrings rather than Strings, and overloaded strings makes using them lower friction.

So, it's an alias for a type of function. However, the type's way more complex than we were expecting. What were we expecting? Well, in Ring the type's more like

type ApplicationRing = Request -> Response

Take a request, return a response. However, in order to allow for correct resource management, it uses a continuation passing style instead. (I'm hoping to expand on that in another post.) So instead, you need a callback. As you see, we called that respond.

What's respond's type? Well, it's got to take a response. At this point I hit the limits of my understanding. I'd have made the function return (), but instead it returns ResponseReceived which appears to be a placeholder type. Finally, obviously respond is going to have to write to a socket, so it's going to have to incorporate the IO monad. Now, in most of the more complex APIs, what you find here is a monad transformer stack with IO somewhere in the mix. In Wai, you just get a naked IO ResponseReceived and can build your own later.

To summarize, the type of respond is Response -> IO ResponseReceived, and that means "when you call it with a response, it will do some IO and return that it's been processed".

Finally, Application expects IO ResponseReceived to be returned from the function. I believe this to be practically motivated: nearly every handler is going to want to call respond as the last thing it does, and this means that the types work when you do that.

You Had Me At Hello

To unpack this: when you receive a request, respond using status 200 (success), no headers ([]) and byte string "Hello World".
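The elided Main.hs was presumably something like this (it needs the wai, warp and http-types packages, so it won't run standalone; the port 3000 matches the URL below):

```haskell
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Network.HTTP.Types (status200)
import Network.Wai (Application, responseLBS)
import Network.Wai.Handler.Warp (run)

-- Respond to every request with status 200, no headers and the byte
-- string "Hello World".
app :: Application
app _request respond = respond (responseLBS status200 [] "Hello World")

main :: IO ()
main = run 3000 app
```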

So, that's about the simplest thing we can possibly do without writing our own web server.

Let's Be Frank

So, how does this compare to Sinatra's famously good home page? Well, for a start we have three files instead of one. However, two of those files are devoted to ensuring that our dependencies don't mess us around. If you want to do the same in Ruby, you'll be setting up bundler and using a gemfile.lock in addition to your normal gemfile, so three files again.

Haskell actually comes out slightly ahead here if you're willing to forgo some flexibility, in that the cabal.config is repeatable, and upgrading is a matter of trying a new cabal.config and reverting if it doesn't work.

In comparison, bundler generates a lock file dependent on your current gemfile. If you need to add another library later, it's up to you to figure out which versions are compatible with your code.

On the other hand, if you need more flexibility, you're going to encounter cabal hell pretty quickly. Good luck.

We've got three dependencies instead of one. That's a pity. But it comes from two sources:

We've got to import types declaring interfaces as well as just implementation code.

We don't have the web server appearing by magic.

On the other hand, Sinatra's actually provided a routing library, and we don't have one yet. But we could have used Scotty instead.

Keep On Running

So, let's see it in action. Get back to the root project directory and type

cabal install && dist/*/build/example/example

and navigate to http://localhost:3000/. Hey presto, you've served a web page. Looking at the headers, all that it's specified is a Date, the Server and Transfer-Encoding, so we'll definitely have a bit more work to do for the full experience.

FOOTNOTE: I'm quite pleased with the response this article had on reddit. This discussion is quite interesting and I recommend reading it.

FOOTNOTE: Quite a few people have remarked that the comparison section isn't really fair on Haskell in that I've implemented something at the Rack/Wai level, rather than the Sinatra/Scotty level, which is true. However, I wanted to use Wai rather than Scotty to avoid going into monads and monad transformers and ultimately, I think the Haskell one is still quite concise and beautiful in a very precise manner.

EDIT: A number of people have pointed out that modern Ruby is indeed capable of precise version locking. I've updated and expanded the comparison to reflect that.

This is my attempt at a solution to the fox/goose/corn problem in Haskell. It was inspired by Carin Meier's Clojure Kata for the same problem, although it deviates from the approach. A better Haskell developer might significantly improve on my version. I didn't find much use for the standard typeclasses in this, sadly. As a consequence, however, the code is relatively understandable from the perspective of a Clojure programmer with no Haskell experience.

I'll explain each construct as we encounter it.

Preliminaries

First, we have the namespace declaration. Unlike Clojure, we need to declare any identifiers we export. Since we're writing an executable, we export main just as we would in C.

module Main (main) where

Data.Set exports a lot of things with the same names that Data.List exports, so it's pretty common to import it qualified. It's not strictly necessary for the code that follows, though.

The equivalent of clojure.core is the Prelude. We hide Left and Right because we'll be using our own concept using those identifiers. We hide foldr because the version in Data.Foldable is more general.

import Prelude hiding (Left, Right, foldr)

The Haskell Prelude is actually kind of frustrating, in that it doesn't show off the language in its full power. It's heading in that direction, though. In particular, this particular problem is getting addressed soon. Some people opt out of the Prelude altogether and use an alternative.

Basic Data Types

We're writing Haskell, so we should write some types down.

You'll recognize the following declarations as being identical to Java enums. Ord means it's orderable, which in turn means you can put it in a set (hash sets aren't the default in Haskell), Eq means you can test for equality using it, which comes along for the ride with Ord, Show means you can print it. Haskell magically writes the code in deriving.

We'll represent everything using only the representation of the right hand side. This has the nice property that the initial state is the empty set. So we're travelling from the Left to the Right. If we'd used a list, some of the code below would be prettier than using a set, but I believe set is the correct representation since it's fundamentally unordered. It's worth considering how it would look in Clojure with Set.

This is a newtype. type would indicate a type alias (so State would be exactly the same thing as S.Set Item). A newtype can't be mixed with a raw set (which is what a Clojure programmer would naturally do) and requires you to explicitly construct and deconstruct it from the set as necessary. This obviously has a cost in verbosity, but has no runtime overhead because it's all optimised out. It's especially useful if you're dealing with two concepts with the same type representation. In our case, State and History (defined later) could be very easily confused in Clojure.

newtype State = State (S.Set Item) deriving (Ord, Eq, Show)

State of Play

We'll need some way of mapping booleans to Left/Right. We're adopting a convention that Right = True here, and we've named the function to help keep this straight. Note that we have two definitions. Each definition pattern matches on the argument. Basically, you need this for two things: identifying the side you're on and the side you're not on, so the Bool -> Side mapping makes sense.

toRight :: Bool -> Side
toRight True = Right
toRight False = Left

Now let's figure out which side we're on. Here we destructure State for the first time.

onRight :: State -> Bool
onRight (State s) = S.member Me s

We also need a function that tells you what is on which side.

\\ means "difference". Since Data.Set is namespace qualified, so is the operator.

Sadly, there's no general type that subsumes sets and lists, so there's a List.\\ and a Set.\\, and they don't interoperate well.

Coming up with a good type system for lists and list-like things is regarded as an open problem in the Haskell world, and they're not prepared to make the kinds of compromises Clojure and Scala have made. (Consider, for instance, that mapping a set returns a list.) In practice, that means that using different types of lists together or writing general list-like code is a pain. I could have introduced my own abstraction, but seriously, what's the point?

Again, we have two definitions. This is the first time we use a where clause. A where clause is similar to a postfix let clause. Note that we don't need type declarations for non-top-level declarations.

Also, this is an arity-2 function. Only there's no such thing in Haskell. Haskell, like most FP languages (and unlike Clojure), only ever has functions that take one parameter and return one value. So what you're really looking at here is a function that takes a Side and returns another function, which takes a State and returns a set of items. If you don't apply enough parameters, you get the partial application of the function. I've been an advocate of programming Clojure like this ever since I spent a couple of hours in F#'s company.
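A sketch of side, assuming the item constructors Fox, Goose, Corn and Me, and a universe of all items (the real definition may differ in detail):

```haskell
-- Everything on the Right is in the set; everything on the Left is
-- whatever's missing from it.
side :: Side -> State -> S.Set Item
side Right (State s) = s
side Left  (State s) = allItems S.\\ s
  where allItems = S.fromList [Fox, Goose, Corn, Me]
```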

The whole reason we've defined the operations above is this: after this point we'll never destructure State again, just interact with the State functions we've already defined. The hope is that this enables us to think at a higher level about what we're doing. (I'm not going to argue this point, but there's plenty of people on the internet prepared to do so.)

Haskell! So We Can Be Safe!

Let's figure out if a State is safe. It turns out the rules for whether or not you're safe are pretty easy.

In practice, the side function is only used within safe, so we could have just stuck it into the where clause and saved some newtype book-keeping.
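A sketch of safe, assuming the item names used earlier: a side is unsafe if the goose is left with the fox or the corn while you're elsewhere.

```haskell
-- A side is fine if you're on it, or if the goose isn't left alone
-- with anything it eats or that eats it.
safe :: State -> Bool
safe state = ok (side Left state) && ok (side Right state)
  where
    ok items = S.member Me items
            || not (S.member Goose items
                    && (S.member Fox items || S.member Corn items))
```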

Moving the Boat

I'm not 100% happy with the readability of this next function, mostly because it's quite long. Suggestions are welcome.

We need to find the next possible states. We're mapping to a set, because there's no inherent ordering of the future states. You can do the same in Clojure. Unlike Clojure, we need a separate map function, S.map rather than map. The good news is that it returns a set rather than a lazy list.

There is a general map function, fmap, that will map anything to its correct container type (and more!) but we can't use fmap here for technical reasons (for the curious, look up "Set is not a Functor in Haskell").

Also, note that this is where we finally actually create a new State, and that we can just use State, the constructor, as a straight function that we can map over.

The move command is either a delete or an insert, depending on the direction of travel. In Clojure this would be (if onRight disj conj).

move = if onRight state
       then S.delete
       else S.insert

The list of items is the things that are on your side that aren't you.

items = S.delete Me mySide

Effectively, this next line just destructures State.

right = side Right state

Whatever else happens, you're definitely moving yourself. Note that moveBoat is the state represented by just moving yourself.

moveBoat = move Me right

If you choose to move an item, it's a motion on top of moveBoat, not on top of s, since you're also moving.

We're using flip, which swaps the parameters of move. We could also have said moveItem x = move x moveBoat, or something with lambdas (IMO, lambdas are rarely the clearest option, and in this code they're never used). Although you could write flip in Clojure, it really isn't Clojure "style", but it definitely is Haskell style.

moveItem = flip move moveBoat

carry is the set of states if you carry an item with you

carry = S.map moveItem items
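Putting the fragments together, the whole function might look something like this (the name nextStates and the exact wiring of mySide are my assumptions):

```haskell
nextStates :: State -> S.Set State
nextStates state = S.insert (State moveBoat) (S.map State carry)
  where
    mySide   = side (toRight (onRight state)) state
    move     = if onRight state then S.delete else S.insert
    items    = S.delete Me mySide
    right    = side Right state
    moveBoat = move Me right
    moveItem = flip move moveBoat
    carry    = S.map moveItem items
```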

There's a huge number of different types in the preceding function, and no type declarations other than the top level. You can put more type declarations in than I have, but you can't put in fewer and compile with -Wall. (If you're OK with warnings, you can throw away the top-level type declarations some of the time, but there are a lot of reasons that's a bad idea.)

Desperately Seeking Solution

We'll ignore the State type completely for a while and just talk in general about how you solve this kind of problem.

We need to think about how to represent a sequence of moves. Here we newtype List (List here is a good choice of type, since history is fundamentally ordered). History is stored backwards for convenience.

newtype History a = History [a] deriving (Eq, Ord)

Let's make history print the right way around though. To do this, we need to implement the Show typeclass by hand. (Typeclasses are a bit like interfaces, but behave very differently.)

=> is the first example in the code of a type restriction. Here we're saying "If a is showable, then History of a is showable." Then the implementation says "The way you show it is by taking the list, reversing it and then showing that."
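A minimal instance matching that description:

```haskell
instance (Show a) => Show (History a) where
  show (History xs) = show (reverse xs)
```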

How are we going to find the solution? We want to use the transition function to construct a set of possible histories and then search it for a solution. You could do this as either a breadth-first or a depth-first search. A breadth-first search has the advantage of finding the minimal solution. To avoid wasting time on cycles, such as the boat just going backwards and forwards, we'll keep track of the positions we've already generated.

So, how do we go from all combinations of 2 moves to all combinations of 3 moves? We define a data structure, Generation.

In practice, we know that a will be State, but it's generally good Haskell style to use the most general type possible. When you get the hang of it, this aids clarity, rather than impeding it. (See also: parametricity and theorems for free).

Generation is a record data type. Like Clojure, you can use previous and states as accessor functions. Unlike Clojure, these functions are strongly typed. That means you can't have fields with the same name in different records (within the same file/namespace).

Working with Generations would be better if we used lenses, but let's stick to things in the base libraries.
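A sketch of the record, with the two fields named in the prose (the exact field types are my reconstruction):

```haskell
data Generation a = Generation
  { previous :: S.Set a            -- every state we've ever generated
  , states   :: S.Set (History a)  -- the histories in this generation
  }
```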

Implementing the Search

We need to map the function that generates new states to a function that creates new Generations. In Clojure, we'd probably use reduce. In Haskell, we use foldr, which is pretty similar, modulo some laziness and argument order differences.

(a -> S.Set a) is a parameter that is a function.

We're specifying that a implements Ord, which we need to be able to put it into a Set.

Due to the wonders of partial application, (a -> S.Set a) -> Generation a -> Generation a is exactly the same as (a -> S.Set a) -> (Generation a -> Generation a)

Actually, I've skipped the most important bit of this: the step function. I could have inlined it, but it's pretty complex, so I prefer to give it its own top-level declaration, along with a semi-scary type signature.

The destructuring of History is a bit more complicated. Here we're assigning h to the whole history, and s to the latest state in the history. Note that if History is empty, the pattern match won't work. Clojure would just match it and put nil in s. Type safety is pretty cool here but it means we need a new pattern match for empty histories. Strictly speaking, they aren't valid, but the way we defined the type they can happen. (If you're seriously thinking you want a type system that can express "non-empty list" I have two answers for you: core.typed and Idris.) This is the point at which Haskell goes "Well, I'm trying to be a practical FP language, you know."
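A sketch of the head of stepG, matching the destructuring just described (the exact signature is my guess):

```haskell
stepG :: (Ord a) => (a -> S.Set a) -> History a -> Generation a -> Generation a
stepG f h@(History (s : _)) t = result
```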

where result = Generation {

Add the new states into the list of known states.

previous = S.union (previous t) nextStates,

Add the new histories into the current generation.

states = S.union (states t) (S.map newHistory nextStates)
}

The next states are the states of the transition function minus the known states.

nextStates = f s S.\\ (previous t)

The newHistory function is interesting. Observe (: h). Now (x : xs) is the same as (cons x xs) in Clojure. (x :) would be (partial cons x) and (: xs) would be #(cons % xs). So (: h) is a function that takes a state and puts it in front of the existing list. This is called an operator section, and it works for all operators (you can define your own) except (- x) (which is special-cased to unary minus).

Again, History is just an ordinary function, which wouldn't have been needed if we'd used type instead of newtype.

newHistory = History . (: h)

Finally, to avoid compiler warnings, tell it what happens when History is empty. This case should never happen.

stepG _ _ t = Generation { previous = previous t, states = S.empty }

The Under-Appreciated Unfold

So, now we've got a Generation to Generation function, how do we get the list of all possible histories? Well, we could always just write some recursive code, but as in Clojure, there are functions that exemplify common recursion structures. In Clojure, iterate might be a good choice here. In Haskell, there's unfoldr.

The type declaration of iterate in Clojure would be iterate :: (a -> a) -> a -> [a].
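For comparison, unfoldr's signature (from Data.List) is:

```haskell
unfoldr :: (b -> Maybe (a, b)) -> b -> [a]
```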

You might be wondering why they're so different. The short answer is that unfoldr is awesome. The key is the step function itself b -> Maybe (a,b). This says that it takes a b and returns either Nothing (nil) or Just a pair of a and b. (Did I mention one of the coolest things about Haskell? null/nil doesn't exist.) The b gets passed to the next step, the a gets output. So unfoldr supports having an internal state and an external state. What happens if Nothing is returned? The list stops generating. Clojure expects you to then terminate the list in a separate step, an approach that seems simpler but falls down when you start to use things like the state monad.

So, our output a is going to be the set of states of the generation, while b is going to be the Generation itself. We'll return Nothing when there are no states in the Generation.

So we just call unfoldr with a generation producing function using forUnfoldr to adapt it to fit.
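A sketch of forUnfoldr and the call (the details beyond the name are my assumptions):

```haskell
-- Emit each generation's histories; stop when a generation is empty.
forUnfoldr :: (Generation a -> Generation a) -> Generation a
           -> Maybe (S.Set (History a), Generation a)
forUnfoldr next g
  | S.null (states g) = Nothing
  | otherwise         = Just (states g, next g)

allGenerations :: (Generation a -> Generation a) -> Generation a -> [S.Set (History a)]
allGenerations next = unfoldr (forUnfoldr next)
```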

We've done this using unfoldr, which has explicit state. Control.Monad.Loops exposes unfoldM which could be used with a state monad to achieve a similar effect.

Fun with Types

Let's have some fun. We've got a list of sets that contains the solution. There's a perfectly good function for finding an element in a list called find (as an aside: there's no such perfectly reasonable function in Clojure). Small catch: it takes a Foldable (in Clojure terms, something reducible). List is Foldable, Set is Foldable, but a list of sets of states iterates through the sets, not the states.

We'll do some type magic and make it iterate through the states. (Thanks to Tony Morris for pointing me to a way to achieve this. Much more brain-bending stuff is available in Control.Compose)

So, here we've said that a foldable of a foldable of a can be used as a single foldable by using flip $ foldr f as the step function. We could have just written this function out, but hey, why not live a little.
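A sketch of the wrapper (Control.Compose provides a more polished version of the same idea):

```haskell
newtype O f g a = O (f (g a))

-- A foldable of a foldable is foldable: fold the outer structure,
-- using a fold of each inner structure as the step function.
instance (Foldable f, Foldable g) => Foldable (O f g) where
  foldr f z (O xs) = foldr (flip (foldr f)) z xs
```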

The Finish Line

Finally, we get to main. Often this is expressed in do notation, but I don't feel the need here, since it's literally one line: print solution.
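In full:

```haskell
main :: IO ()
main = print solution
```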

So, you can build it, and run it. time reports that it takes 2ms on my machine. How on earth did it run so fast? Aren't fully lazy functional languages meant to be slow? Well, there are advantages to running an optimizing compiler, but they're helped by understanding a bit of what is going on under the hood. An unfold followed by a fold is called a hylomorphism. The thing is, you never need to build the whole structure, you could just run each iteration through the fold as it comes. The Haskell compiler is smart enough that it actually rewrites the code. So a large chunk of our code is actually running imperatively.

How much have types helped me write this code? Well, the early functions, especially safe, I needed to nail down in GHCi, the Haskell REPL. On the other hand, the later parts of the code actually worked first time (after I'd managed to fix all of the type errors). Make of that what you will.

I hope you've found this interesting. I'm still very much a beginner Haskell programmer, but I hope the presentation enables you to see how you can express ideas in Haskell. If you'd like to learn more, I can highly recommend starting with Brent Yorgey's course.

One of the biggest lies we tell starting developers is that design patterns are language independent. Whilst true at a high level, the truth is that a programmer in a modern programming language can junk most of the Gang of Four book. A couple of days ago, it was twenty years old. It's time to celebrate its lasting positive influences, and then bury it.

Some things are potentially useful as terminology for discussing with people, but others aren't even useful as that. The really obvious example is the template pattern: if you're programming in a language that can use functions as values it's utterly meaningless. Another is iterator: most programming languages have a list/sequence implementation and you just use that.

Prototype, equally, is meaningless for two entirely opposite reasons: first, the whole concept originates in C++, where you can perform a raw memory copy. In a language such as Java that doesn't have one, it's so cumbersome that you'll prefer a factory method. In a language such as F# or Clojure, ubiquitous persistent data structures mean that everything's a prototype.

Command is basically a pattern that replaces functions with objects. In a functional programming language, this is just the normal way you do things. In languages such as Python and Clojure where objects can act as functions the line is further blurred. But that's nothing compared to what you can do with Clojure's multimethods.

Multimethods and Protocols

Quite a few patterns are just workarounds for the painfully restricted dispatch patterns in old OO languages. The visitor and adapter patterns are both ways of circumventing the closed nature of classes in C++/Java. When you can just associate new methods with existing data structures, even third party code, you just don't need them.

Also, if you understand multimethods for more than just class based dispatch, you see that it subsumes the state pattern.

In practice, you can use multimethods to mix and match dispatch on raw parameter value (state), dispatch on computed value (strategy) and dispatch on class (visitor). Similar effects can be achieved using Haskell's type features.

Trivial

Then there's stuff that's just a special case of something more general. Chain of responsibility in Clojure is easily implemented using the some function.

Is chain of responsibility really useful terminology here, or is it just "using the some function"?

Outdated

Then there are the ones that are just plain outdated: observer and mediator are rarely a better choice than a decent pub/sub mechanism. Heck, even your language's event system is often a better choice. And I think everyone's got the message about singleton by now.

I'm concerned this will be seen as down on the whole concept of patterns. Actually, high level patterns, the kind that Martin Fowler talks about are fine and last a long time. But our understanding of patterns constantly evolves (see pub/sub) and the ergonomics of specific patterns varies wildly between languages. GoF was a great book, and made a huge positive impact, but it's time to take it off our shelves.

For me, Rich Hickey's original post on transducers raised more questions than it answered. Stian Eikeland wrote a good guide on how to use them, but it didn't really answer the questions I had. However, there's an early release of Clojure 1.7, so I thought I'd take a look.

Let's start with a simple example using an existing transducer:

(def z [1 2 3 4 5 6])
(sequence (filter odd?) z)
;;; (1 3 5)

Okay, so far so good, we understand how to use an existing transducer to create a sequence.

Now, is identity a transducer?

(sequence identity z)
;;; (1 2 3 4 5 6)

Perfect. Now let's try doing it ourselves. We'll write a transducer that preserves all its input.

Arity Island

Rich says the type of a transducer is (x->b->x)->(x->a->x). In practice, arity matters in Clojure, so it's really ((x,b)->x)->((x,a)->x). So let's write my-identity.

Well, that's a bit of a mess, but we can see the 5, 7 and 9 streaming out. Weirdly, they seem to be coming out slightly too late. And the arity-1 function is called at the end. It's not clear what you can usefully do with its parameter other than pass it through, since it's not fixed, has no guaranteed protocols and, in the case of LazyTransformer, blows up if you try to evaluate it.

If you take a look at actual transducers, you'll see there's a third, zero-arity function declared as well. I haven't discovered what that's for yet.

State of Play

So what's that arity-1 function for, then? Well, the doc string for drop gives us a clanger of a clue:

Returns a stateful transducer when no collection is provided.

Transducers can have state. A transducer starts when it's passed the yield function and finishes when the arity-1 function is called, at which point you can clean up resources. This start/reduce/finish lifecycle is actually vital to making drop and other transducers work.

OK, this is starting to look an awful lot like the IObserver interface in C#. (The Subscribe method corresponds to the initial start step.) That suggests the zero-arity function is for some form of error handling, but I haven't managed to trigger it.

Functors are all very well, but they only allow you to map with a function that takes one parameter. But there are plenty of functions that take more than one parameter, including useful ones like add and multiply. So how do we want multiply to work on nullable integers?

2 times 3 should be 6

2 times null should be null

null times 3 should be null

null times null should be null

There's something else we need to do. What if 2 is just an integer, not a nullable integer? Really, we need to be able to promote an integer to a nullable integer. The more parameters a function has, the more likely it is that one of them isn't in exactly the right format. Haskell calls this promotion function pure. (+)
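In Haskell terms, the nullable behaviour described above is the Maybe applicative; a quick sketch:

```haskell
import Control.Applicative (liftA2)

nullableProducts :: [Maybe Int]
nullableProducts =
  [ liftA2 (*) (Just 2) (Just 3)   -- Just 6
  , liftA2 (*) (Just 2) Nothing    -- Nothing
  , liftA2 (*) Nothing  (Just 3)   -- Nothing
  , liftA2 (*) (pure 2) (Just 3)   -- pure promotes 2, giving Just 6
  ]
```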

Now let's get a bit more complicated. What about multiplying two lists together? Multiplying [2] and [3] should obviously give [6]. But what happens if you're multiplying [2,3] and [5,7]? Turns out there's at least three sensible answers:

Multiply the pairs in sequence: [10,21]

Multiply the pairs like a cross join: [10,14,15,21]

Actually, you could also iterate the first sequence first: [10,15,14,21]

More than one way to skin a list

Let's just concentrate on the first two. How are they going to deal with lists of different length?

[2] * [1,3] should be [2] OR

[2] * [1,3] should be [2,6]

But what if the first parameter isn't a list? What should that look like? Well, 2 * [1,3] should definitely be [2,6]. But that means that, depending on how we generalise multiplication, we also need to generalise turning a number into a list.

To multiply like a cross join, 2 can just become [2]

To multiply the pairs in sequence 2 needs to be [2,2,2,2,2,...], an infinite sequence of 2s.

So, generalizing multiple-arity functions to functor contexts isn't as obvious as it is for single-arity functions. What on earth do we do about this? Well, the approach Haskell goes with is "pick an answer and stick with it". In particular, for most purposes, it picks the cross join. But if you want the other behaviour, you just wrap the list in a type called ZipList and then ZipLists do the pairwise behaviour.
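In Haskell, the two behaviours look like this:

```haskell
import Control.Applicative (ZipList(..))

crossJoin :: [Int]
crossJoin = (*) <$> [2,3] <*> [5,7]   -- [10,14,15,21]

pairwise :: [Int]
pairwise = getZipList ((*) <$> ZipList [2,3] <*> ZipList [5,7])  -- [10,21]
```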

Back to the Functor

So, how should we handle the various examples of functors that we covered in the first part? We've already dealt with nullables and lists, and sets are a dead loss because of language limitations.

Multiplying two 1d6 distributions just gives you the distribution given by rolling two dice and multiplying the result. Promoting a value e.g. 3 to a random number is just a distribution that has a 100% chance of being 3.

You can multiply two functions returning integer values by creating a function that plugs its input into both functions and then returns the product of the results. You can promote the value 3 to a function that ignores its input and returns 3.
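That's the function applicative; a small sketch:

```haskell
f, g :: Int -> Int
f x = x + 1
g x = x * 10

-- Both f and g receive the same input; the results are multiplied.
h :: Int -> Int
h = (*) <$> f <*> g   -- h 2 == 3 * 20 == 60

-- pure 3 ignores its input and returns 3.
promoted :: Int -> Int
promoted = pure 3
```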

How about records in general? Well, here's the thing: you can't promote a record without having a default value for every field. And that isn't possible in general. So, while you can undoubtedly make some specific datastructures into applicatives, you can't even turn the abstract pair (a,b) (where you're mapping over a) into an applicative without knowing something about b.

We could make the mapping work for pair if we were actually supplied with a value. But that doesn't make sense, does it? How about, instead of (a,b) we work on functions b -> (a,b). Now we can map a, on single and multiple-arity functions, and just leave the b input and output values well alone. It turns out this concept is rather useful: it's usually called the State Monad.

Would you like Curry with your Applicative?

Up until now, I've mostly talked about pairwise functions on integers. It's pretty obvious how you'd generalize the argument to arbitrary tuples of arbitrary input types. However, it turns out that the formulation I've used isn't really that useful for actual coding, partly because constructing the tuples is a real mess. So let's look at it a different way.

Let's go back to multiplying integers. You can use the normal fmap mapping on the first parameter to get partially applied functions. So our [2,3] * [5,7] example gives us [2*,3*] and [5,7]. Now we just need a way of "applying" the functions in the list. We'll call that <*>. It needs to do the same thing as before, and the promotion function, pure, is unchanged.

It turns out that once you've got that, further applications just need you to do <*> again, so if you've got a function f and you'd normally write f a b c to call it, you can instead write

f <$> a <*> pure b <*> c

Assuming a and c are already in the correct type and b isn't. This is equivalent to

pure f <*> a <*> pure b <*> c

but in practice people tend to write the dollar-star-star form. Finally, you can also write

(liftA3 f) a (pure b) c

which is much more useful when you're going pointfree.

And finally...

So, here's the quick version:

a functor that can "lift" functions with multiple parameters is termed an "applicative functor", "idiom" or just "applicative"

a functor is uniquely defined by the data type you're mapping to (*)

some data structures like list, however, give rise to multiple possible implementations of Applicative

Functors have been well understood for a long time, and monads provided the big conceptual breakthrough that made Haskell a "useful" language. The appreciation of applicative functors as an abstraction that occupies a power level between the two is a more recent development. When going around the Haskell libraries you'll often discover two versions of a function, one designed for applicatives and one for monads, even though they're the same function. It's just that the monad version was implemented first. With time, the monad versions will be phased out, but it's going to take a long time. You can read more about the progress of this on the Haskell wiki.

What does a functor actually look like in various programming languages? We already said it's something you can use to map, so let's take a look at some language's mapping functions:

Clojure has map, mapv and fnil

Haskell has map and fmap. (It also has <$>, which is the same as fmap)

C# has Select

Java's streams library has map and mapToInt

So, are any of these functor mapping operations? Well, it won't take a genius to guess that fmap does the right thing. map in Haskell does the same thing, but only works for Lists. The definition of fmap for list is map, so it's pretty much a wash. (*)

Land of Compromises

The others? Well, kind of, but you tend to need to squint a bit. The problem is that if you map over an identity function, (e.g. x => x in C#) you should get the same type back as you put in. And actually, that's very rarely true. map in Clojure can be called on a vector and will return a lazy list. mapv can be called on a lazy list and get back a vector. map in Java and Select in C# send an interface type to the same interface type, but rarely return the exact same type as you were expecting.

Moreover, there isn't a general mapping interface that lots of functor-like things implement, and there isn't any way to make one. This isn't a problem for list comprehension, but it horribly breaks functors as a general model of computation. (#) You can still use the concepts, but you'll end up with a lot of code duplication. Indeed, you'll probably already have this code duplication and be thinking of it as a pattern. As is all too often the case, low-level programming patterns reveal deficiencies in your programming language.

There are good reasons for these mapping functions not behaving exactly like a functor, though: performance. The Haskell compiler treats everything, not just lists, as lazy, and can optimize types away. Clojure, C# and Java can't, and treat them as hard optimization boundaries.

Haskell ain't perfect either

We already established in the previous article that there are plenty of functors that have nothing to do with types. Haskell's Functor type class is therefore only a functor on the category of Haskell types (usually referred to as Hask). This seems good enough, but actually it isn't.

Consider a set of values. You can easily define a mapping function that satisfies the functor rules. Sadly, Set in Haskell isn't a Haskell Functor. This is because Set imposes a condition on its values: that they be orderable. Whilst this isn't a problem for real functors, it's a problem for Haskell Functors because type classes don't admit restrictions on their type parameters. To put it another way, Functor in Haskell is a functor over the whole of Hask, never a subcategory. For that matter, you can't do the (*2) functor that I described last time in any sensible way, because you can't restrict its action to integers.
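You can see the problem in Set's own map, whose Ord constraint has nowhere to live in Functor's fmap :: (a -> b) -> f a -> f b:

```haskell
import qualified Data.Set as S

-- This compiles, but only because we're allowed to demand Ord b here;
-- an instance Functor S.Set would not be.
setMap :: (Ord b) => (a -> b) -> S.Set a -> S.Set b
setMap = S.map
```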

It turns out this problem is fixable, with Rank-2 typeclasses, but don't hold your breath for that to land in Prelude any time soon. In the meantime, you can't use Functor to represent functors with domain type restrictions.

(*) Many smart Haskellers believe (and I agree with them(+)) map should work on all functors and fmap should be retired. There's a general theme here: the standard libraries are showing their age and need work to simplify them.

(+) If you've never seen Alan Rickman play Obadiah Slope, you're missing out.

(#) If you're prepared to lose information, all you really need is reduce/Foldable anyway.

Functors: Category Theory Stuff

If you ever want to talk the same language as smart Haskellers, you need to know a bit of category theory. Here's some notes on how I understand category theory right now.

The first thing to appreciate is that a list isn't a functor, "list" is a functor. In particular, it's a mapping from one type to another, e.g. int to list of int. Furthermore, it's a mapping that preserves the structure of int, in that performing "map" works.

Considered this way, there's no such thing as a "higher order type", there's just functions from one type to another. Types with more than one type parameter in Java/C# are just multiple arity functions on types.

Some other things that are worth considering: you can make a list of any type, even a list. Not only that, but if a and b are different types, list of a and list of b are different as well. So, in maths terms, it's an injection of the type space onto a subsection of the same type space.

What the heck is a category?

Now, let's go back to the start and talk terminology. A category is a bunch of "objects" and "arrows" between them. They behave basically like values and functions. Indeed, values and functions form a category. The only real requirement is that arrows compose like functions and that there's an identity map that does nothing.

In the context of type theory, the objects are the types themselves. The arrows are higher-order type constructors. Just like normal functions, they're not reversible. Now let's make it a bit weirder. Just the lists and the functions between lists and other lists form a category too.

The next bit may or may not make sense if you don't have a maths background. Mathematically, a functor isn't anything to do with types at all, it's just a mapping between one category and another that preserves some structure.

Wait what?

Let's think of a really simple category. Let's have the objects be integers and the arrows be rotations of integers e.g. add three, subtract two. And "add zero" is an identity map.

Now let's have another one which is the same, only all of the numbers and rotations are even. Then "times two" maps objects and functions between the two categories. So 3 becomes 6 and "add 3" maps to "add 6". And finally, "add zero" becomes... "add zero". So "times two" is a perfectly valid functor that has absolutely nothing to do with type theory at all.

Finally, a small note, if you're just looking at category theory for the purposes of understanding Haskell you'll come across the phrase "locally small" a lot. Every last category you are ever going to worry about is locally small, so don't sweat it.