One of Haskell's distinguishing (and perhaps unique) language features is called "type classes", which can be especially confusing because the word "class" has a strong meaning to most of us already. So here's a description of what they are, and why I think they're interesting.

One simple (perhaps overly simplistic) way to think about type classes is as a different name for an interface (in the OO sense). Imagine a function sum that takes a list of things to sum as its argument. We'd like for it to work for lists of integers as well as lists of floats. But we do have a restriction: the list items need to support addition; it doesn't make sense to pass a list of hash tables.

The solution is to define a type class as a collection of types that provide an interface. (This is the confusing part, because these terms resemble OO terms but they're used in a slightly different way.) Haskell has a standard type class called Num (short for numeric) that declares operations like +, and then declares a bunch of types (Integer, Float*) as instances of the type class.

To demonstrate these terms again: "Integer, a type, is an instance of Num, a type class." And then our sum function declares that it works over any list of type a as long as a is an instance of Num:

sum :: Num a ⇒ [a] → a

The stuff to the left of the double arrow declares the restrictions: "a must be an instance of Num". The stuff on the right is the type: "takes a list of as, gives back a single a".
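As a concrete sketch, here's one way such a function could be written (named mySum only to avoid clashing with the Prelude's own sum):

```haskell
-- A minimal sketch: works for any list whose elements are an
-- instance of Num, whether Integer, Double, or anything else.
mySum :: Num a => [a] -> a
mySum = foldr (+) 0
```

So mySum [1, 2, 3] works on integers and mySum [1.5, 2.5] works on floats, but passing a list of hash tables is a compile-time error.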

Up to this point this is more or less standard OO fare with some ideas renamed. But note that a type class is not a type: we didn't declare sum as taking a list of Nums, but rather that it takes a list of as that are instances of the Num type class. Also note that these are simpler than true OO inheritance: types can be instances of type classes, but there's no deeper hierarchy tree; there's just one level.

type classes as views

One of the key differences from OO inheritance is that a type class and instances can be declared entirely separately from the declaration of the original type. This allows types to be used in unanticipated ways.

In OO, by contrast, the set of interfaces a class implements is fixed when the class is defined: if you decide you need a new interface, you're stuck unless you want to modify the class. This is solved in Ruby by their "open classes", which let you add methods to a class whenever you like, but even Rubyists try to limit their use of that: if library A decides that Strings need a serialize method, and library B needs a different one, you're screwed -- adding methods to a globally shared class is exactly counter to the goal of separate and independent software components.

With a type class, two libraries can each create their own different Serializable classes and implement them for different subsets of types in different ways. This is another way to think about a type class: it provides a view onto a collection of data types. For example (section 2.2 of the paper), the OO "adaptor" problem can be neatly solved with a type class, and doesn't require creating an intermediary object.
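Here's a sketch of what such a class could look like; the Serializable name and serialize method are my own illustration, not from any real library:

```haskell
-- A hypothetical serialization interface, declared separately from
-- the types it will cover.
class Serializable a where
  serialize :: a -> String

-- A library can add instances for types it didn't define:
instance Serializable Bool where
  serialize True  = "t"
  serialize False = "f"

-- ...including generic instances like "any list of serializable things":
instance Serializable a => Serializable [a] where
  serialize = concatMap serialize
```

A second library could declare its own, entirely separate Serializable class with different instances, and the two would never collide.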

As a more concrete example, Haskell has a range operator for constructing lists: [1..10] is the list of numbers from 1 to 10. This is syntactic sugar for calls into the Enum(erable) class, which means that I can make e.g. my own Color type an instance of Enum and suddenly code like [Red..Blue] will work.
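For a simple enumeration type, Haskell can even derive the Enum instance automatically:

```haskell
-- Deriving Enum is enough to make range syntax work for a new type.
data Color = Red | Green | Blue
  deriving (Show, Eq, Enum)

-- [Red .. Blue] desugars to enumFromTo Red Blue, a method of Enum:
allColors :: [Color]
allColors = [Red .. Blue]
```

allColors evaluates to [Red, Green, Blue], with no changes to the range syntax itself.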

Another example is how QuickCheck (I described it here) needs to be able to generate random inputs to test a function, so it has a random-integer generator, a random-list generator, etc. But what if it tests a function that works with colors? I can make Color an instance of its generator type class (Arbitrary), and everything just works.
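To show the shape of this without pulling in the QuickCheck package, here is a toy stand-in for its generator class (the Pick name and seed-based design are mine; the real class is Arbitrary in Test.QuickCheck):

```haskell
-- A toy generator class: produce a value from an integer seed.
class Pick a where
  pick :: Int -> a

data Color = Red | Green | Blue
  deriving (Show, Eq)

-- Once Color is an instance, any machinery written against the
-- class covers Color too -- no changes to that machinery needed.
instance Pick Color where
  pick n = [Red, Green, Blue] !! (n `mod` 3)

-- e.g. a generic "list of picked things" generator:
instance Pick a => Pick [a] where
  pick n = map pick [0 .. n `mod` 3]
```

The real Arbitrary class works the same way: adding an instance for your type makes all of QuickCheck's existing generators and combinators apply to it.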

type classes ignore hierarchies

Because the class is separate from its types, functions in type classes can cut across many different class hierarchies. For example, the Functor type class contains all types that can be "mapped" over: I can square each element in a list and get a new list, and I can similarly square each element in a tree and get a new tree. So we can make both lists and trees instances of the Functor class and name the operation fmap, and the operation of squaring each element held by any mappable type can be written fmap (^2). (For which, again, the squaring works for all numeric types.)
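Lists are already a Functor in the standard library; a sketch of joining them with a home-made tree (the Tree type here is my own example):

```haskell
-- A simple binary tree with values at the nodes.
data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Show, Eq)

-- Making Tree an instance of Functor: apply f to every element.
instance Functor Tree where
  fmap _ Leaf         = Leaf
  fmap f (Node l x r) = Node (fmap f l) (f x) (fmap f r)

-- One squaring function now works over any mappable structure
-- holding any numeric type:
squareAll :: (Functor f, Num a) => f a -> f a
squareAll = fmap (^ 2)
```

squareAll [1, 2, 3] gives [1, 4, 9], and the identical code squares every element of a Tree.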

Another good example is that Monad is a type class, covering types that obey the monad laws. This allows the existing library of functions that work with monads to work across many different behaviors.

So whenever a library has a new way of looking at a collection of types -- some common thread between them -- that can be captured in a type class. Perhaps a library can work with any queuing data structure that allows retrieving the "next" element: then it can declare a Queue class (which currently doesn't exist in Haskell), add lists and heaps to it, and still allow users to provide their own.
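A sketch of what that hypothetical Queue class could look like (as the text notes, no such standard class exists; the name and method are illustrative):

```haskell
-- A hypothetical Queue interface: retrieve the "next" element,
-- if any, along with the rest of the queue.
class Queue q where
  next :: q a -> Maybe (a, q a)

-- Plain lists as one instance; a library's heap type, or a user's
-- own structure, could be added the same way.
instance Queue [] where
  next []       = Nothing
  next (x : xs) = Just (x, xs)
```

Any function written against Queue then works for lists, heaps, and whatever instances users declare later.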

One surprisingly apt way to think about this is the current "web 2.0" fashion of tags versus hierarchies. In section 2.3 of the paper they discuss what they call "the tyranny of the dominant decomposition" -- sometimes there isn't just one obvious class hierarchy -- and describe how different sets of type classes allow different views on the same types.

Yet you can still create a hierarchy of type classes, much in the way you can create hierarchies with tags. One of the simplest type classes (and historically the original motivation for type classes) is Eq, supporting equality. Then the Ord class (supporting orderings) can use Eq, requiring its types to support both equality and a less-than operator.
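A simplified sketch of that relationship, using my own MyEq/MyOrd names to avoid clashing with the real Eq and Ord:

```haskell
-- Equality on its own:
class MyEq a where
  eq :: a -> a -> Bool

-- The "MyEq a =>" in the class head makes MyEq a prerequisite:
-- every instance of MyOrd must also be an instance of MyEq.
class MyEq a => MyOrd a where
  lt :: a -> a -> Bool

instance MyEq Int where
  eq = (==)

instance MyOrd Int where
  lt = (<)
```

This is a hierarchy of classes, not of types: Int isn't a subtype of anything, it just belongs to both tags.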

finally, type classes are inferred

One of the primary reasons Haskell/ML can make static typing pleasant is that type classes are inferred. If I define an add function:

add x y = x + y

Haskell can recognize (because I used "+") that add can only accept addable things: Num. In fact, type inference is designed such that the most general type is always inferred, which means that even if I wasn't thinking about complex numbers when I defined add, it still works for complex numbers while still verifying at compile time that I can't pass in two file handles.
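To make the complex-number point concrete, a sketch using the standard Data.Complex module:

```haskell
import Data.Complex

-- No type signature: GHC infers the most general type,
--   add :: Num a => a -> a -> a
add x y = x + y

-- Complex Double is an instance of Num, so add covers it
-- even though nothing in add's definition mentions it:
complexResult :: Complex Double
complexResult = add (1 :+ 2) (3 :+ 4)
```

complexResult is 4 :+ 6; meanwhile add openFile stdin simply fails to compile, because no Num instance exists for file handles.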

Because of this inference, there is actually a complicated set of number-related type classes in Haskell that I'm not even fully aware of. For example, non-integer division only works for Fractionals, which includes both Rationals (represented as numerator / denominator) and Floats, while a separate Integral class provides the modulus operation.
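A small sketch of that split, with the constraints written out explicitly:

```haskell
-- (/) is a method of Fractional...
half :: Fractional a => a -> a
half x = x / 2

-- ...while `mod` is a method of Integral; a signature demanding
-- both on the same type variable (with no instance satisfying
-- both) is rejected at compile time.
remainder :: Integral a => a -> a
remainder x = x `mod` 3
```

So half works on Double and Rational, remainder works on Int and Integer, and mixing them up is a type error rather than a runtime surprise.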

I didn't know most of that before now, when I just looked it up -- from the programmer's perspective, I just get notified at compile-time if I'm mixing up floating-point division with integer modulus, or if I try to use a file as a key in a hash table.

simple+fundamental = useful (as well as crazy hacks)

Part of the key to all of this (and why my previous post was warm-up) is inverting obj.method(args) into function(obj, args), because in the former case the set of functions is constrained by the object, while in the latter case you can add functions willy-nilly. (And you can have different sets of functions available through different imports, like my serialize example.)

That aside, many of the features I described are implementable in dynamic languages, so what's interesting here (I think) is that Haskell has made these sorts of design patterns statically verifiable while still providing power beyond what we're used to. On the other side, C++ provides this sort of functionality to some extent via templates and static overloading, but C++ templates have all sorts of strange limitations because they implement a lot of this through (effectively) textual substitution.

It's not all golden, of course -- I still haven't described a way to match dynamic dispatch -- and the paper outlines a number of limitations. But it turns out that type classes do cover a variety of use cases in an interesting way that is distinct from the common OO patterns.

It turns out that type classes, again like C++ templates, are one of those basic-enough ideas with enough power that they can be used in all sorts of crazy ways, so consider this post as only scratching the surface. For example, see this hack where he manages to use type classes to symbolically differentiate numerical code. And since type class dispatch can also happen on a function's return type, functions can vary their behavior depending on their return type, which produces all sorts of weird effects. See for example the end of this post, where the behavior of some code can seemingly vary depending on how the values are used.

Some future reading: my previous post on my type classes for HDBus, and especially the paper I've kept mentioning, which starts from the basics and gives a lot more examples. Additionally there's UnderestimatedTypeClasses on the Haskell wiki, which in part claims: "There are many misconceptions of the expressiveness of type classes, usually doing them a disservice, and there are many analogies which, at best, only cover a facet of them and at worst are outright misleading. For example, type classes are not analogous to: OO classes, Java interfaces, or CommonLisp generic functions. The only thing that comes close is the fuzzy idea of Concepts in modern (STL-style) C++." Hmm. Hopefully my presentation here hasn't been too much of a disservice.

* This is actually a lie -- there's a more complicated set of classes. But it gets the gist right, anyway.

...so I've only skimmed this thus far, but does this sound right for the basic use of type classes? Type classes create a (first-class?) domain restriction on top of traditional subtyping based on the names of the types... somewhat similarly to how subtyping via inheritance in Java/C++ is done by explicitly naming the relation between named types?

I think you're on the right track. Mentioning subtypes may confuse things -- Haskell has no subtyping, only type classes. So, to edit your sentence, "Type classes create a domain restriction on polymorphism based on the names of the types."

I can have normal polymorphism, with its implied forall:

length :: forall a. [a] → Int

and type classes let me restrict the polymorphism to a set of types X:

restricted_length :: forall a. X a ⇒ [a] → Int

As for your questions about names, I'm still a bit fuzzy on the answer. Like in C++, a subclass mentions its parent by name, but I wouldn't say it's based on the names of the types; instead, when it refers to its parent (class derived : public base;), the link is between "derived" and "base" directly, no matter what other things are named "base" later.

But yeah, in the way C++ subtyping is based on names, this is based on names as well. The type class and the types all must be named. (This is for example in contrast to O'Caml's inheritance, which is structural and not nominal, if you remember those terms...)

I guess the real answer may be a question of the implementation? I have no idea how I can, off in my own file, define my own type as an instance of a type class and have code compiled elsewhere still do the right thing.