Jonathan Aldrich. Selective Open Recursion: A Solution to the Fragile Base Class Problem. Submitted for publication.

We propose to change the semantics of object-oriented dispatch, such that all calls to "open" methods are dispatched dynamically as usual, but calls to "non-open" methods are dispatched statically if called on the current object this, but dynamically if called on any other object. By specifying a method as open, a developer is promising that future versions of the class will make internal calls to that method in exactly the same way as the current implementation. Because internal calls to non-open methods are dispatched statically, developers can change the way these methods are called without affecting subclasses.
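Since the abstract is dense, here is a minimal sketch of the idea in Python. The class names and the trick of simulating static dispatch with an explicit class-qualified call are my own illustration, not code from the paper:

```python
class Collection:
    def __init__(self):
        self.items = []

    def add(self, item):
        # "Non-open" method: the base class makes no promise about
        # how (or whether) it calls this internally.
        self.items.append(item)

    def add_all(self, items):
        for item in items:
            # Internal call to a non-open method: dispatched statically,
            # i.e. always Collection.add, never a subclass override.
            # (Simulated here with a class-qualified call.)
            Collection.add(self, item)


class CountingCollection(Collection):
    def __init__(self):
        super().__init__()
        self.count = 0

    def add(self, item):
        # External clients that call add() still reach this override.
        self.count += 1
        super().add(item)

    def add_all(self, items):
        self.count += len(items)
        super().add_all(items)


c = CountingCollection()
c.add_all([1, 2, 3])
print(c.count)  # 3, not 6: add_all's internal add calls bypass the override
```

With ordinary dynamic dispatch on self, the internal `self.add(item)` calls inside `add_all` would also hit the subclass override and the count would come out as 6; whether the base class happens to implement `add_all` in terms of `add` is exactly the implementation detail the fragile base class problem is about.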

It may be worth noting that in Ada calls aren't automatically dispatching, and the language provides a mechanism quite similar to the one discussed here.

This is a wicked cool hack, and solves a great many of the problems I have with inheritance. Furthermore, the trick Jonathan invented will work in any OO language, whether statically or dynamically typed. I repeat: all the type theory in the paper is just to prove that his technique works, and I strongly urge that anyone who is building an OO language take a look at this and adopt his idea.

Because internal calls to non-open methods are dispatched statically, developers can change the way these methods are called without affecting subclasses.

Not to infuriate the audience, but to understand the point: how does C# (or even plain old C++) fall short of this?

And doesn't freeze in various mixin/open class calculi do exactly that, promising no changes in binding?
Like this or this.

Or is the key phrase "subclasses cannot intercept internal calls and thus cannot become dependent on those implementation details"? That looks solvable by making the method final.

The last possibility is "By declaring a method 'open,' the author of a class is promising that any changes to the class will preserve the ways in which that method is called". The only problem with this (for me) is specifying those ways. Since open methods seem to be the only ones with dynamic dispatch, and the language (correct me?) does not enforce the author's promise, I don't see why this whole idea isn't just a good practice to follow in any existing language where static dispatch is the default, rather than a new language feature.

Uh, sorry if spilling my coffee during the 7AM call made me a bit unreasonable. I will be glad to understand my mistake :-)

The way that Jonathan's technique differs from C#'s default or final methods is that it doesn't prohibit overriding in subclasses. You can -still- override a method in a subclass, and when an external client calls it, the method is dispatched dynamically in the usual fashion. However, within a class definition, any message sends to self are dispatched statically, unless the method is marked open (in which case it is always dispatched dynamically). This permits you to retain the flexibility of inheritance, but in a semantically sound way: that's why Jonathan was able to prove a parametricity result about his language. It's really simple and really clever.
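To make the self-send rule concrete, here's a small Python sketch of the open vs. non-open contrast. The `Report` example and the class-qualified call standing in for static dispatch are my own illustration, not code from the paper:

```python
class Report:
    def render(self):
        # Imagine render is marked "open": the class promises to keep
        # calling it the same way, so internal calls stay dynamic.
        return "plain report"

    def print_report(self):
        # Internal call to an open method: ordinary dynamic dispatch,
        # so a subclass override is seen here.
        return "<<" + self.render() + ">>"

    def debug_string(self):
        # Internal call treated as non-open: bound statically to
        # Report.render (simulated with a class-qualified call).
        return "debug:" + Report.render(self)


class HtmlReport(Report):
    def render(self):
        return "<b>html report</b>"


r = HtmlReport()
print(r.print_report())  # <<<b>html report</b>>>  (open: override is seen)
print(r.debug_string())  # debug:plain report      (non-open: override bypassed)
print(r.render())        # <b>html report</b>      (external call: always dynamic)
```

Note that unlike a final/sealed method, `render` can still be overridden and external callers always get the override; only the non-open internal call is pinned down.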

I'm not familiar enough with mixin calculi to be able to compare, though.

It's a clever solution to the fragile base class problem, but I have a couple of remarks to make. In 1.2 he states (based on Szyperski's notes) that "not all uses of inheritance can be replaced by delegation because open recursion is sometimes needed". AFAICT that is untrue, at least according to Inheritance Decomposed. Also, in paragraph 2.6 he writes that "An auxiliary analysis or type system could be used to verify that pure methods have no effects, including state changes (other than caches), I/O operations, or non-termination". Ignoring the non-termination part, is there any system that can separate caches from other state changes?

Take a look at Acar, Blelloch, and Harper's Selective Memoization -- they use a type system based on modal logic to give users fine-grained control over memoization.
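I can't reproduce their modal type system in a few lines, but the flavor of fine-grained control over what a memoized result depends on can be sketched in Python with an argument-selective memoizer. The `memo_on` decorator here is my own toy analogy, not Acar, Blelloch, and Harper's construction:

```python
import functools

def memo_on(*key_indices):
    """Memoize on only the selected positional arguments -- a rough
    analogy to selective memoization's dependence tracking."""
    def decorator(fn):
        cache = {}
        @functools.wraps(fn)
        def wrapper(*args):
            # Only the chosen arguments participate in the memo key.
            key = tuple(args[i] for i in key_indices)
            if key not in cache:
                cache[key] = fn(*args)
            return cache[key]
        wrapper.cache = cache
        return wrapper
    return decorator

@memo_on(0)                    # result depends only on the first argument
def lookup(name, logger):      # logger is declared irrelevant to the result
    return len(name)

print(lookup("aldrich", "log-a"))   # 7, computed
print(lookup("aldrich", "log-b"))   # 7, served from cache despite a new logger
```

In their system this dependence information is checked by the type system rather than left to a runtime decorator, which is the whole point; the sketch only illustrates what "selective" buys you.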

Froelich notes that not all uses of inheritance can be easily expressed with delegation, but I'm not sure whether he means it's impossible or just hard, and I'm also unsure what he means when he says it's "just the dangerous" uses of inheritance that are hard to express. One thing's for certain, though -- I wouldn't even have considered making any serious use of inheritance before seeing Jonathan's idea.

It is perhaps worth noting that in Ada dispatching behaviour is controlled both by method and by call-site annotation. Class-wide routines (i.e., routines that receive parameters of a class-wide type) are (essentially) statically dispatched. Calls to dispatching routines (so-called "primitive operations" of a type) made from inside other dispatching routines are also statically dispatched - unless a call-site conversion to a class-wide type is performed.

Take a look at Acar, Blelloch, and Harper's Selective Memoization -- they use a type system based on modal logic to give users fine-grained control over memoization.

The paper is nice, but they don't do static analysis to prove that the only state changes the code performs are cache-related, as the original paper claimed could be done. Actually, the code in Selective Memoization doesn't exhibit mutable state at all, only special keywords that have different semantics in the language.