interface.pm explained

Recently I'd been bothered by Perl's seeming lack of
a feature I really wanted: interfaces. I gotta admit I
couldn't figure out how to implement this feature myself,
but discovering interface.pm on CPAN and
figuring out how it worked was a real pleasure.

Interfaces

For those not familiar with the term, an
interface (popularized by Java; OOP purists might
say protocol) is a set of methods that a class
takes upon itself to implement. The programming language
enforces this agreement, preferably at compile time. An
interface is like a base class, except that it contains
only abstract ("pure virtual") methods, and even people who
frown at multiple inheritance tend to accept a class which
implements more than one interface. Interfaces
provide a way for a module of code to say "I can do
this" (and do it its own way, but giving other modules
a standard way of asking for it). An interface can also
extend another interface.

One well-known example of an interface is
Cloneable: a class that implements it must
provide a clone method which returns a
copy of its invocant. Naturally, you could implement
cloning from outside a class by using something like
Storable's dclone function on an
object, but that might not work correctly in some cases.
For example, older versions of Storable would not maintain
the tiedness of members. Also, some class data might
rightfully need to be updated upon cloning: it's good
design to let the class encapsulate that logic. So once
again, if you design with interfaces, your class declares
itself as implementing the Cloneable
interface, and defines a clone method that
does precisely the right thing for your class. No more,
no less; no messy multiple inheritance.
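As a rough sketch of that design point (the Counter class and its
instance-count bookkeeping are my own illustration, not from the
article), a clone method can wrap Storable's dclone while keeping
class data that a blind external deep copy would miss:

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Counter;
use Storable ();    # core module; dclone makes a deep copy

my $instances = 0;  # class data an outside dclone would not update

sub new {
    my $class = shift;
    $instances++;
    return bless { count => 0 }, $class;
}

# The Cloneable contract: return a copy of the invocant, and do any
# class-specific bookkeeping the class itself should encapsulate.
sub clone {
    my $self = shift;
    my $copy = Storable::dclone($self);
    $instances++;    # the clone counts as a new instance, too
    return $copy;
}

sub instances { $instances }

package main;
my $original = Counter->new;
my $copy     = $original->clone;
print Counter->instances, "\n";   # 2
```

The point is that only the class knows which of its invariants a
copy operation must preserve; callers just ask for clone.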

Doing it in Perl

So what's the problem with doing interfaces with
Perl? The only thing you need to do, it seems, is to define
stub methods in the "base" interface:
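A minimal sketch of that approach (the Sheep class and the exact
error wording are my own illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Cloneable;
use Carp;

# Abstract ("pure virtual") stub: every method in the interface
# gets one of these, repeated by hand.
sub clone {
    my $self = shift;
    croak ref($self) . " does not implement clone";
}

package Sheep;
our @ISA = ('Cloneable');   # i.e. use base 'Cloneable';
sub new { bless {}, shift }
# Oops -- forgot to override clone(). Nothing complains yet.

package main;
my $dolly = Sheep->new;
eval { $dolly->clone };     # the error arrives only now, at run time
print $@ if $@;
```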

Then, if your class says use base 'Cloneable';
but neglects to override clone, you'll get
a descriptive error message when you do call it. One
disadvantage of doing it this way is that you have to
repeat the body of the abstract method for each such
method. But the real problem with this is that the
error comes too late: perhaps you never even wrote the
code that uses your clone, but you still want the
language to raise an error to the effect that you haven't
fulfilled your end of the contract by supplying the code
yourself. You want a compile-time error!

The problem with making this a compile-time error is
that Perl doesn't give you any natural hook to look at a
package's defined methods at the right time. You want to
be able to say use interface 'Cloneable'; and
not worry about it, right? But if you put that at the top
of your package, perl will load interface.pm
while your package is still being compiled —
before your methods have been registered. If
interface.pm looks at your module at this
stage, it will complain that you haven't implemented the
methods, even though you did!

Perl's specially timed blocks don't help here,
either. BEGIN is obviously premature, since
no matter where we put it, it'd cause our hook to be run at
(something's) compile time: too early. perlmod
tells us that INIT and CHECK
blocks "are useful to catch the transition between
the compilation phase and the execution phase", but in
order to have the interface check happen during either
of these, you need to split the use call
into a require and an import --
resulting in the unwieldy

INIT { require interface; interface->import("Cloneable") }

Not very elegant, since you have to put this in your
client code, repeating it whenever you implement an
interface. But it looks as if this can't be done any
better!

But it turns out that it can.

Enter the clever hack

What interface.pm does is check whether the
appropriate methods are implemented in the calling class
(or one of its parents) during the import
hook. But as we stated above, this is too early because
the methods haven't been compiled yet. What to do? Get
them to compile! import contains this code (edited):
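The %locks name below comes from the article; everything else (the
%INC deletion and the @methods convention for listing an interface's
required methods) is my reconstruction of how such a hook might look,
not interface.pm's actual code:

```perl
package interface;
use strict;
use warnings;
use Carp 'croak';

my %locks;   # packages currently being re-compiled, to avoid looping

sub import {
    my ($class, @interfaces) = @_;
    my $caller = caller;

    # If we're the nested "use interface" triggered by our own
    # require below, do nothing; the outer call runs the checks.
    return if $locks{$caller}++;

    # Force Perl to finish compiling the caller: delete its %INC
    # entry so require really re-reads it (it'd be a no-op otherwise).
    (my $file = "$caller.pm") =~ s{::}{/}g;
    delete $INC{$file};
    eval "require $caller";   # errors resurface in the caller anyway

    # By now the caller is fully compiled; check each required method.
    for my $iface (@interfaces) {
        eval "require $iface";   # may already be defined in memory
        no strict 'refs';
        for my $method (@{"${iface}::methods"}) {
            croak "$caller does not implement '$method' required by $iface"
                unless $caller->can($method);
        }
    }
}

1;
```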

The eval line there merely causes Perl to
finish the compilation of the calling module; none of its
actual functionality is used — only looked
at. To avoid infinite loops, interface.pm
maintains a %locks global that keeps
track of which calling packages are in the middle of
validation. %locks is a hash and not a simple
bit lock so that it works with complex hierarchies and
many calls to interface.pm. Every module is
guaranteed to get its interface validated (think why!). The
calling code simply says

use interface qw(Cloneable Describable SomeOtherInterface);

And that's it.

Conclusion

To be successful as an extension to the language, a
framework that enables interfaces must use simple syntax,
preferably familiar syntax. What's more familiar to the
Perl hacker than a use statement? The module
I described here achieves this usage goal, as well as its
main operative goal of compile-time interface satisfaction
checking. True, it uses a hack to get there, but the hack
is nicely encapsulated so that the user doesn't need to
know about it. An alternative implementation of this
feature would be to use source filters, but these are
probably more hackish and have their own problems.

Perhaps a better question is "what's the problem with doing interfaces in Java?" The answer is "plenty." By understanding why Java has interfaces in the first place (something you alluded to but didn't go into), we can better understand the interface and its potential relationship to Perl.

When Java was created, its designers realized that multiple inheritance (MI) is a source of many bugs. As with many other things in Java, the designers decided that bug-prone things would be handled with a straitjacket and strict discipline. This bondage-and-discipline approach to the language led the designers to conclude that MI is so problematic that, rather than trust programmers to use their judgment wisely, they'd just take it away. Instead we get the cruft that is the interface.

The big win with MI, of course, is software reuse. Now, instead of reimplementing that foo() method, we can simply inherit from Yet Another Class and get the foo() method for free. Of course, if that other class really is not an appropriate base class, we might wind up with compositionally unsound classes. Delegation can sometimes solve this, but not always. We also wind up with ordering problems in MI and other subtle design issues that are beyond the scope of this discussion.

So Java says "we do not trust you with MI; you must use interfaces instead." And this is when Java programmers discover the mixed blessing of interfaces: they're a great tool for maintaining consistent interfaces across various classes, but they completely destroy the idea of software reuse. Every time you use an interface, you must reimplement what is essentially the same method. Using interfaces in Perl means that not only do we have to reimplement the same methods, we don't even get the benefit of declaring a signature or a return type, thus eliminating one of the few benefits that interfaces really provide.

Ruby has also tried to solve the MI problem by using mixins. They're nice, but they have the same ordering problems as MI and this can be frustrating. A better way of solving this problem is to use traits, an implementation of which can be found on the CPAN as Class::Trait. Perl6 will have "roles" which are essentially traits. Unlike Ruby and Java, however, MI will not be forbidden (to me it seems silly to throw away a problematic but useful tool in favor of one that's unproven.)

That's not to say that I'm totally down on interfaces, but I'm not a huge fan of them. Java showed us a brilliant way to not solve the MI problem and I can't say that I entirely trust them in Perl. I do use them, however. Quite often I don't give a fig about what class an object is, but I do care about whether or not it can respond to a particular set of methods. I sometimes have classes store an object in a slot and delegate several methods to that object. Frequently I don't care what class is in that slot but the object had better respond when I call the methods that I need. Perl's poor argument handling still limits the utility of this, but then, if we're insisting that everything be perfect, we wouldn't be using Perl, would we?

I wasn't familiar with traits. If I correctly understand what they are, then:

Traits are like interfaces in that they provide a way for a class to promise it fulfils some {interface, protocol} — that is, to say "I can do this".

They are like a base class in that the trait itself can provide some implementation code of its own. (Perl traits also allow operator overloading definitions.)

Unlike interfaces and base classes, but like mixins, all entities are pushed into the consuming module's namespace.

You suggest that traits are better than interfaces because they allow some code reuse; and are safer than multiple inheritance because (I presume) they avoid ordering problems. Since I'm new to this and haven't tried it out yet: can a class that consumes a trait override the code it receives from the trait? (Is this definable formally? Is it encouraged?) Apart from moving them to deterministic compile-time, how do traits help resolve ordering problems associated with MI? Do you simply get compiler errors when redefining subs/interfaces? What means does the programmer have to resolve these conflicts? (I saw something called "aliases", but if that's a proposed solution to the problem then I'm not sure how it works; I thought the strength of interfaces was that foreign code knows precisely which method to call and can assume that method fulfils a particular interface.)

Also, an implementation question. Suppose I have class Base that uses traits. Then along comes class Child. Where does it get its traits from? Presumably from the parent class, right? But then the traits aren't truly mixed in. The main problem with that is that it gets hard to specify (and implement) what should happen when Child consumes another trait of its own, that conflicts with something from Base.

I'm new to this, so please go gentle if there are obvious answers to these questions :)

(I planned the original post because I liked the clever implementation described, not because I wanted to advocate the use of interfaces; but if a language debate springs up all the better! There's a lot for me to learn here.)

(I realize you understand the ordering problem. I explain the following for the benefit of those who might be reading and not know.) Your class needs to explode() from Bomb, but unbeknownst to you, the GirlFriend class also has an explode() method! Now, because you used GirlFriend first in your use base ... statement, you get the wrong behavior, and this can be very difficult to debug. Of course, if there's another duplicate method in your two base classes but you don't want the one in your Bomb class, your life gets even more difficult. Now you need to use delegation or start hard-coding class names, neither of which is as simple as inheritance was supposed to be.
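That scenario fits in a few lines (Bomb and GirlFriend are from the
example above; the PracticalJoke class and the method bodies are my
filler):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Bomb;
sub new     { bless {}, shift }
sub explode { "BOOM" }

package GirlFriend;
sub new     { bless {}, shift }
sub explode { "It's not you, it's me." }

package PracticalJoke;
our @ISA = ('GirlFriend', 'Bomb');   # GirlFriend listed first...

package main;
my $joke = PracticalJoke->new;
# Perl's default depth-first, left-to-right method resolution finds
# GirlFriend::explode before Bomb::explode ever gets a chance.
print $joke->explode, "\n";
```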

Suppose I have class Base that uses traits. Then along comes class Child. Where does it get its traits from?

The Child class should not know or care about where Base gets its methods. Are they traits or implemented directly? It should not matter. All Child needs to know is the published interface to Base. That's when the Child class can decide whether or not to override Base methods. Whether this is done through writing the methods directly or use of traits should not matter at this point.

One final comment: some of my code snippets above can look daunting to those unfamiliar with traits. In reality, most traits are pretty straightforward.

Hello gaal, Ovid pointed me to this discussion, and since I am the author of Class::Trait, I thought I might pipe up and answer some of your questions.

Traits are like interfaces in that they provide a way for a class to promise it fulfills some {interface, protocol} — that is, to say "I can do this".

Actually, I think you need to invert your thinking on that. To start with, Traits aren't really classes, and to think of them as such may get you into trouble with them. Thinking of Traits as too much like interfaces and mix-ins might also lead to problems. Traits are more akin to the "deferred classes" found in Eiffel. They are not meant to be complete, or to be able to stand on their own; they are building blocks for classes, but not really classes themselves. This node includes some documentation I originally wrote for Class::Trait but ended up not including; it describes a formal language/calculus for traits (ripped off from one of the papers; I certainly didn't come up with it myself).

But, anyway, back to the point. Traits themselves have requirements which must be fulfilled by the class that chooses to use them. After the requirement is fulfilled, and the trait is flattened into the class, nothing else happens. I did, however, implement an is method in Class::Trait, which is added to the class using the traits. It is meant to be used much as isa is in Perl: it will do a depth-first search of all the traits the class has used and return true or false depending upon whether the class used that trait.

But here is where Traits differ from interfaces/protocols. After the class has used the trait and it is incorporated, to all outside parties it is as if nothing ever happened and you implemented the methods yourself in the class directly. There is no implied contract between the class which used the traits and any other object in your system.

They are like a base class in that the trait itself can provide some implementation code of its own.

Again, it is best not to think of Traits as classes. They are simply a set of methods collected into a grouping, which can be added into a class. Other than that, though, your statement is correct.

Unlike interfaces and base classes, but like mix-ins, all entities are pushed into the consuming module's namespace.

Yes, this is true. It is called flattening. Although I have to say, I didn't know that is what is done with mix-ins; my experience with them is limited.

Apart from moving them to deterministic compile-time, how do traits help resolve ordering problems associated with MI?

There is no inheritance to be had, so no diamond problem can occur; this is the benefit of flattening the methods into the consuming class. Of course, I cannot guarantee that someone could not figure out a way in which MI problems might creep in and therefore break Traits. But to the best of my knowledge, they avoid the issues of MI by avoiding inheritance in general.

Also, when multiple traits are added into a class, they are first combined into a composite trait. The rules for combining traits are described best in the paper "Traits - A Formal Model", which can be found on this page; a lesser description can be had in Re: Traits as Method Exporters, which I linked to above. The rules are grounded in set theory and other mathematical esoterica about which I know only a limited amount. The point is that you don't have the same rules when combining traits as you do when you combine classes with MI, and no more than one trait (the composite) is ever combined into a class.

Do you simply get compiler errors when redefining subs/interfaces?

When you redefine them where? When combining Traits into a composite trait, method conflicts result in the exclusion of both methods, and that method's label is then added to the requirements list.

The idea here is that you should manually resolve conflicts up front with the exclude and alias options; if one were to creep in unexpectedly, Traits make no claim to know what you meant to do, and so defer it back to you. The result is that when your class uses your trait and does not fulfill the new requirement (created from the method conflict in the composite trait), your compiler goes BOOM.

As for the method conflict in a class and a trait, Ovid actually explains that. Remember Traits are not classes, and traits are subservient to classes.

What means does the programmer have to resolve these conflicts? (I saw something called "aliases", but if that's a proposed solution to the problem then I'm not sure how it works;

Aliasing allows you to rename a method, which can avoid a conflict since conflicts are checked based on method label (and if labels match, we also check to see if the code reference is the same too before deciding it is truly in conflict). Aliasing simply changes the label, nothing more, nothing less.

I thought the strength of interfaces was that foreign code knows precisely which method to call and can assume that method fulfills a particular interface.

Again, traits are not interfaces, and they have no contract outside of their relationship with the class that uses them.

Also, an implementation question. Suppose I have class Base that uses traits. Then along comes class Child. Where does it get its traits from?

It doesn't get them from anywhere (as Ovid says). The fact that Base uses traits is not known to Child. It is Base's concern only; as far as Child can tell, Base implemented its methods on its own.

what should happen when Child consumes another trait of its own, that conflicts with something from Base.

It won't conflict, at least not in the way I think you are thinking it will. As far as Child knows, Base implemented its own methods. Any traits which Child uses are mostly unconcerned with Base. However, there are subtleties to this. If you are interested, look at the test file "50_Trait_SUPER_test.t", which uses t/test_lib/Read.pm, t/test_lib/SyncRead.pm and t/test_lib/TSyncRead.pm. It is an implementation of an example given in the papers about how Traits deal with and relate to the superclass of the class that uses them.

Hmmm, I'm not sure about this; did you test it? (I'm at work now so I can't check myself.)

When would interface.pm's INIT block run? Will it not happen when *it* finishes compilation? In that case, your approach wouldn't work: it's still too early, happening before the calling class finishes loading.

Also, it's not too dangerous to ignore $@ in the real module. If there are errors, surely they'll pop up again when flow returns to the caller. (No?)

I didn't test it, but INIT kicks in when everything is finished compiling. From perlmod:

"INIT" blocks are run just before the Perl runtime begins execution, in
"first in, first out" (FIFO) order. For example, the code generators
documented in perlcc make use of "INIT" blocks to initialize and
resolve pointers to XSUBs.

So various modules can register INIT blocks. The INIT blocks are all postponed until just before perl starts executing, at which point it runs all the INIT blocks in the order they were registered and then finally it goes to line 1 of your program.
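That ordering can be shown in a tiny self-contained script (the
@order array is just scaffolding to record the phases):

```perl
#!/usr/bin/perl
use strict;
use warnings;

our @order;

BEGIN { push @order, 'BEGIN' }   # runs as soon as it is compiled
INIT  { push @order, 'INIT'  }   # deferred until just before run time
push @order, 'RUN';              # ordinary run-time statement

print "@order\n";   # BEGIN INIT RUN
```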

$@ is the only place where an error from an eval will show up; if you don't do something about it, the error will disappear forever. That's what makes it possible to ignore an error with eval. Of course, if there is something wrong with the module and we ignore the error, the program will probably fail anyway; however, it will fail with a confusing error message. For example, if mod.pm is