Generic programming (GP) is an increasingly important trend in
programming languages. Well-known GP mechanisms, such as type classes
and the C++0x concepts proposal, usually combine two features: 1) a
special type of interfaces; and 2) implicit instantiation of
implementations of those interfaces.

Scala implicits are a GP language mechanism, inspired by type classes,
that break with the tradition of coupling implicit instantiation with
a special type of interface. Instead, implicits provide only implicit
instantiation, which is generalized to work for any types. Scala
implicits turn out to be quite powerful and useful to address many
limitations that show up in other GP mechanisms.

This paper synthesizes the key ideas of implicits formally in a
minimal and general core calculus called the implicit calculus (\lambda_?),
and it shows how to build source languages supporting implicit
instantiation on top of it. A novelty of the calculus is its support
for partial resolution and higher-order rules (a feature that has been
proposed before, but was never formalized or implemented). Ultimately,
the implicit calculus provides a formal model of implicits, which can
be used by language designers to study and inform implementations of
similar mechanisms in their own languages.

Interesting work! However, the two things listed in the paper as things Haskell can't do are, I think, inaccurate (at least with regard to modern GHC-flavored Haskell). We can abstract over typeclasses with constraint kinds these days, although they're not terribly well documented. And although it is ugly, we can use helper classes to pull off "exotic" show instances. This paste shows an implementation of show for Perfect: http://lpaste.net/106259

The "real" ShowF class is Show1 in Edward Kmett's prelude-extras, and the trick is used both in his bound package and in the current version of transformers (which in turn means it is used in the mtl).
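To sketch the helper-class trick with a hand-rolled version (this mirrors the liftShowsPrec style of Show1 in modern transformers; the Pair functor and all names below are my own illustration, not the code from the paste):

```haskell
-- A nested type abstracted over a functor f: showing it needs "Show at
-- every instantiation of f", which no plain Show context can express.
data Perfect f a = Zero a | Succ (Perfect f (f a))

-- Helper class: lift a shower for 'a' to a shower for 'f a'.
class Show1 f where
  showsPrec1 :: (Int -> a -> ShowS) -> Int -> f a -> ShowS

newtype Pair a = Pair (a, a)

instance Show1 Pair where
  showsPrec1 sp _ (Pair (x, y)) =
    showParen True (sp 0 x . showString "," . sp 0 y)

-- Polymorphic recursion works because the element shower is an
-- explicit function argument rather than a Show dictionary.
showsPerfect :: Show1 f => (Int -> a -> ShowS) -> Int -> Perfect f a -> ShowS
showsPerfect sp d (Zero a) =
  showParen (d > 10) (showString "Zero " . sp 11 a)
showsPerfect sp d (Succ p) =
  showParen (d > 10) (showString "Succ " . showsPerfect (showsPrec1 sp) 11 p)

instance (Show1 f, Show a) => Show (Perfect f a) where
  showsPrec = showsPerfect showsPrec

main :: IO ()
main = print (Succ (Zero (Pair (1, 2))) :: Perfect Pair Int)
```

It is ugly, as noted, but it stays within vanilla class resolution.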

A deeper concern about an implicit approach I have is that for some purposes a definitive global instance is the "right" thing. Consider maps implemented as sorted trees. We are able in the Haskell collections library to use an efficient hedge-union algorithm. However, if we do not "know" that both maps were constructed with the same ordering, then we must fall back to a less efficient approach, as is the case in the Scala libraries. I do not think this issue is insoluble, but it certainly requires future work if ad-hoc instance selection is ever to be considered an unqualified win.
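Concretely (a minimal sketch, with made-up values): because there is exactly one global Ord Int instance, both trees below were necessarily built with the same ordering, so Data.Map's union can merge them structurally with no comparator check.

```haskell
import qualified Data.Map.Strict as Map

-- Both maps were built with the one global Ord Int instance, so
-- union can use its efficient divide-and-conquer merge unconditionally.
m1, m2 :: Map.Map Int String
m1 = Map.fromList [(1, "a"), (2, "b")]
m2 = Map.fromList [(2, "c"), (3, "d")]

main :: IO ()
main = print (Map.union m1 m2)  -- left-biased: keeps (2, "b")
```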

And just as a follow-up, this paste (by Dan Doel) shows a version that uses the constraint kind machinery, and demonstrates more generally how Haskell can abstract over constraints these days: http://lpaste.net/7734006910520655872

That said, the motivation section mentions 3 major concerns, so I figured I'd go through them in order.

1.) Global scoping.

Global scoping is also addressable within the confines of the existing type class mechanism by using the reflection library -- without losing coherence.

Consider the following, minimal API:

reify :: a -> (forall s. Reifies s a => Proxy s -> r) -> r

reflect :: Reifies s a => p s -> a

provided by

http://hackage.haskell.org/package/reflection

based on an older functional pearl by Oleg Kiselyov and Chung-Chieh Shan with a more efficient implementation under the hood.

Austin did a nice writeup on how I use it to create dictionaries for classes out of whatever you have lying around in lexical scope.

https://www.fpcomplete.com/user/thoughtpolice/using-reflection

Notice, however, that you don't give up coherence of instance resolution with reflection! It is guarded by quantification, much the same way that runST saves you from yourself when you want to use a little bit of mutable state. The quantifier in Reifies lets you make up an instance for a type that couldn't otherwise have one. So we get the power of implicits, the ability to create new dictionaries out of whole cloth, but retain the fact that, given two Set Ints, I can know that it is correct for me to move the use of the constraint to the use site rather than have to hold onto the one I used to create the object in the first place.
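The runST comparison can be made concrete. In both cases a rank-2 forall fences off a locally created resource, an STRef here, a fabricated dictionary in the case of reify (a stdlib-only sketch):

```haskell
import Control.Monad.ST
import Data.STRef

-- runST :: (forall s. ST s a) -> a. The quantified 's' tags the ref's
-- type, so neither the ref nor any action using it can escape; reify's
-- 'forall s. Reifies s a => Proxy s -> r' fences a dictionary the same way.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref

main :: IO ()
main = print (sumST [1 .. 10])
```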

2.) Type class abstraction.

"In this type class, the intention is that the ctx variable abstracts over a concrete typeclass. Unfortunately, Haskell does not support type class abstraction."

Since, well, it now does! =)
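A small sketch of what that abstraction looks like with ConstraintKinds (mapWith is a made-up name for illustration): the constraint itself becomes a type parameter you can instantiate with Show, Ord, or anything else.

```haskell
{-# LANGUAGE ConstraintKinds, RankNTypes, KindSignatures, TypeApplications #-}
import Data.Kind (Constraint, Type)

-- 'ctx' abstracts over a type class; the caller picks it, e.g. @Show.
mapWith :: forall (ctx :: Type -> Constraint) a b.
           ctx a => (forall x. ctx x => x -> b) -> [a] -> [b]
mapWith f = map f

main :: IO ()
main = print (mapWith @Show show [1, 2, 3 :: Int])
```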

3.) Higher-order rules.

Higher-order rules can be handled by other means: through the simpler Show1 hack shown above, or through the more exotic hackery using the Constraint kind. I have code on Hackage that demonstrates the composition of constraints: that constraints form a cartesian category.

With that you can reflect the ability to walk up to superclass constraints or down to instance heads and capture 'instances with not-yet-satisfied preconditions' as morphisms for later use.
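A hand-rolled sketch of that machinery (the names Dict, :- and Sub are borrowed from the constraints package on Hackage, but this is my own minimal reimplementation, not the library itself): Dict reifies a constraint as a value, (:-) is an entailment morphism between constraints, and trans is categorical composition.

```haskell
{-# LANGUAGE ConstraintKinds, GADTs, RankNTypes, TypeOperators,
             KindSignatures, ScopedTypeVariables #-}
import Data.Kind (Constraint)

-- Dict reifies a constraint as a first-class value.
data Dict (c :: Constraint) where
  Dict :: c => Dict c

-- c :- d: "c entails d", a morphism between constraints.
newtype c :- d = Sub (c => Dict d)

refl :: c :- c
refl = Sub Dict

-- Composition of entailments: constraints form a category.
trans :: (b :- c) -> (a :- b) -> a :- c
trans f g = Sub (case g of Sub Dict -> case f of Sub Dict -> Dict)

-- Walking up to a superclass, captured as a reusable morphism:
-- an "instance with a not-yet-satisfied precondition".
ordEq :: Ord a :- Eq a
ordEq = Sub Dict

-- Discharging a captured entailment at a use site.
eqViaOrd :: forall a. Ord a => a -> a -> Bool
eqViaOrd x y = case ordEq :: Ord a :- Eq a of Sub Dict -> x == y

main :: IO ()
main = print (eqViaOrd (3 :: Int) 3)
```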

So it would appear that all three major issues the paper attempts to redress have, er, already been redressed, in a manner that doesn't require the messy reasoning that implicits do.

I really should stop there. ;)

But to get a sense of how deep the rabbit hole goes: with a little bit of GHC magic, similar to what reflection uses under the hood, you can upgrade them into a full cartesian closed category, where you can take limits, and use this to define away a ton of boilerplate.

Nice paper. Very clear presentation of the ideas. There is a typo on page 19 (simultaenously, should be simultaneously).

I had a bit of a think about adding higher order constraint solving to GHC, but I found it tricky to fit into the existing OutsideIn(X) solving framework. Maybe this paper will give me the missing intellectual pieces I need to finish this off.

Great comments so far! I do think we need to update our S1.2 to better reflect what we can do in modern-day (GHC) Haskell. In particular, I do agree that the new ConstraintKinds extension does provide native support for class abstraction.

I would still argue that the existing class system does not really support local scoping or higher-order rules. We can somehow encode such features (which is worth mentioning), but then again we can also encode these features by using the dictionary-passing style directly.

In any case it seems that there is demand for all such features (otherwise people wouldn't have bothered developing extensions and/or encodings). So studying and formalizing a system supporting these features from a more theoretical point of view is worthwhile. That's what we try to do in this paper :).

I don't see how it is "wrong" to note that people can and do pass different dictionaries and get different results and that this can matter.

I've literally wasted weeks chasing down issues caused by "local" scoping on implicits changing the instance in scope, because someone was being clever.

Uniqueness of a typeclass given an instance is the only tool that we have for reasoning about value equality at the type level.

I'm loathe to lose that for the "power" of implicits, especially since we can already re-derive the power of implicits through reflection without giving up the coherence and confluence of instance resolution.

The "don't do that" apology is a pretty standard defense of implicits.

In Haskell we move instances to the use sites.

The correctness of that transformation is predicated on the coherence of instance resolution. You'll get the same instance in either place, so the result of the code doesn't change. It can't care where the instance comes from.
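For instance (a trivial stdlib sketch): the Ord Int dictionary that member uses below is, by coherence, necessarily the same one fromList used to build the tree, so the constraint can live at the use site and the Set need not carry its comparator around.

```haskell
import qualified Data.Set as Set

-- fromList sorts with Ord Int; member searches with Ord Int.
-- Coherence guarantees these are the same instance.
s :: Set.Set Int
s = Set.fromList [3, 1, 2]

main :: IO ()
main = print (Set.member 2 s)
```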

In a language where you lack coherent instance resolution you now have to choose. Either you don't do that, and therefore have an explosion of different object types because of this extra crap you had to move into them at object creation; you have to largely give up things like monad transformers, because there isn't a place to hold onto all that extra crap, and give up hedge unions. Or you have to choose to enter into an extra hidden contract with the user of your code that shows up nowhere in any type signatures and must be conveyed by other means.

I choose to require fewer invariants to be maintained in secret.

We derive many benefits from this. Free theorems get much stronger. If you aren't passed back the dictionary where you go to do something you can't use it. Your ability to reason about the code gets stronger.

You are welcome to pretend these coherence conditions exist or that they do not matter in your language with implicits, many do, but there is a cost to be cognizant of when moving from a language with typeclasses to one with implicits.

You may be willing to pay that price. After working with Scala for years, I no longer am.

> You can write exactly the same code in Haskell and it will break equally.

@soc. You can't write that code, because typeclass instances are global. That's the point!

If you do know of a good library in scala that has performant Maps that implement an efficient and safe unionWith, please do provide a link. I'd love to use them. If you don't, but you maintain they can be written, I'd love a constructive proof of this, rather than an unsubstantiated claim.

> kmett: The "don't do that" apology is a pretty standard defense of implicits.

> sclv: You can't write that code, because typeclass instances are global. That's the point!

If I wrote code like this in Haskell:

-- only use lists which are in descending order!
largestElem [] = -1
largestElem xs = head xs

largestElem [1,2,3] -- OOOPS

Kmett, I'm pretty sure that every Haskell user with more than 3 days of experience will say "that's stupid, don't do that!", and I completely agree with it.

Sclv, you will surely complain about that code and say "but it only breaks, because you are NOT using typeclasses!".

And that's _exactly_ the point.

The stupid Haskell code I wrote above is equivalent to placing the context bound on methods in Scala.

The comment in the Haskell code is as effective at enforcing the intended constraints as adding an implicit parameter to the parameter list in Scala.

> kmett: The correctness of that transformation is predicated on the coherence of instance resolution. You'll get the same instance in either place, so the result of the code doesn't change. It can't care where the instance comes from.

Agreed. The way you do that in Scala is binding the typeclass instance to the actual type constructor.

> kmett: In a language where you lack coherent instance resolution you now have to choose to either not do that and therefore have an explosion of different object types because of this extra crap you had to move into them at object creation. You have to largely give up things like monad transformers, because there isn't a place to hold onto all that extra crap

Well, in Haskell this is certainly a terrible situation, because the language doesn't give you many options when you exhaust the (huge!) abstraction capabilities of typeclasses and transformers.

> kmett: give up hedge unions

No.

> kmett: or you have to choose to enter into an extra hidden contract with the user of your code that shows up nowhere in any type signatures and must be conveyed by other means.

That contract is not hidden, it's just exposed via different abstraction mechanisms in Scala. Additionally, as soon as you want to provide different implementations of e.g. Sets which require different constraints, you are figuratively entering hell in Haskell when trying to abstract over them.

In Scala, you just let dynamic dispatch deal with it and be done.

> kmett: You may be willing to pay that price. After working with Scala for years, I no longer am.

I can absolutely sympathize with your frustration!

If I had to use a language for years without having any plan to learn and understand it, I'd certainly be deeply frustrated, too!

Anyway, I completely agree that having the option to enforce uniqueness for typeclass instances would certainly be a huge help for those developers who want to use context bounds as a way to encode typeclasses in Scala. Maybe it should even be the default, too?

So if you have, say, a Set[List[Int]], your instance for Ord[Int] has to compare as being the same.

The problem here is we'll often have different references.

Consider the existence of an implicit that upgrades Ord[A] to Ord[List[A]] and one for Ord[Int]: we built the 'same' Ord dictionary each time, but it compares as different by referential equality.

So then you have to start overloading the object equals method and hashing to allow for this thing to be compared.

Except now someone is clever and builds a trie by recursively constructing a set out of sets with a fixed point, and your equality comparison bottoms out, so you get clever and you patch it up again not to do that, etc.

This is a rathole you don't have to find yourself in. It also only works because we're doubling down on the bad design feature that java lets you compare any two objects for equality. This becomes particularly ironic when you go to build an Eq class. ;)

> because the languages doesn't give you much options when you exhaust the (huge!) abstraction capabilities of typeclasses and transformers.

When I need to construct a dictionary by hand, I make sure there is an appropriate newtype wrapper around to construct it, and then use reflection to lift a bunch of values into the type system, distinct from the other types we have, and make a dictionary that way. In the process I don't lose the ability to reason about my code.

> Anyway, I completely agree that having the option to enforce uniqueness for typeclass instances would certainly be a huge help for those developers who want to use context bounds as a way to encode typeclasses in Scala. Maybe it should even be the default, too?

On this much we agree.

> If I had to use a language for years without having any plan to learn and understand it, I'd certainly would be deeply frustrated, too!

I worked for a long time writing Scala as Scala, not just Haskell in Scala, to the point where I was often mocked by the other scalaz folks as going native. However, in the end I decided that it wasn't paying out. The incidental complexity of solutions in Scala is very high, often obscuring the issue at hand.

However, this thread has degenerated to borderline insulting. It is clear you find value in this abstraction, and I do not.

Implicits are not a free lunch; whether it is a lunch worth paying for is definitely a matter of opinion.

I originally intended to participate in this thread to point out that the premise of this paper, that typeclasses give you no mechanism to do 3 very simple things, is flawed.

We can, and do, do them, and we don't have to give up the ability to reason about coherence to get there.

So if you are going to pay for implicits, it shouldn't be for those reasons; it should be for other perceived benefits, on other grounds. You may argue that implicits are easier than the reflection route. You may argue that the newtype noise is too high a tax. These are all arguments that resonate.