Invited talks

Banquet speech: Facilitating the Evolution of our
Collective IQ

Mankind's significant problems are growing larger and more complex at
an accelerating rate. Technologies that offer aid in their solution
are emerging, also at an accelerating rate ― and they themselves are
introducing further complexity into our world.

And note: complex, urgent problems have to be dealt with collectively!

I happen to believe that if we don't increase our collective
capabilities significantly ― for recognizing, understanding, and
coping with complex, urgent problems ― much of our civilization will
be at risk of crashing and burning.

The scale of this challenge is so large that it can only be pursued
directly and effectively by following an appropriately scaled
improvement strategy.

And then I'll ask why the OOP people, who seem to have developed such
a superior way to deal with information objects, haven't already
solved this. Hmm, perhaps they'll join the pursuit?

On December 9, 1968, Douglas C. Engelbart and a team of software
developers gave the first public demonstration of a computer with a
windows interface, videoconferencing, black-on-white text,
context-sensitive help, and a mouse. They delivered this demo to 4000
stunned spectators at the Fall Joint Computer Conference in San
Francisco. Although the demo system linked to a remote mainframe
computer, it sparked research that led to the development of personal
computers, the graphical user interface, and more-advanced
networks. It launched a disruptive revolution in the way people work,
communicate, and produce. If not for Douglas Engelbart, many of the
technical innovations we consider vital to the personal computer
revolution would not exist.

Functional objects

At first glance, object-oriented programming has little or nothing in
common with functional programming. One grew out of the procedural
tradition, providing means for representing real-world objects and
mechanisms for encapsulating state. Computing proceeds via method
calls. The other is a radical departure from conventional
programming. It emphasizes an (almost) purely mathematical
approach. Programmers design systems of algebraic datatypes and
functions, and a computation is the evaluation of an
expression. Still, considering the development of design patterns and
notions of effective object-oriented programming practice, nobody can
overlook the similarities between the two approaches.
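
As a small, hypothetical illustration of this convergence (not taken
from the talk itself), the sketch below encodes an algebraic datatype
in modern Java (21+) with a sealed interface and records, and defines
a computation by exhaustive case analysis, in the FP style, rather
than by a method on each variant:

    // Hypothetical sketch: an algebraic datatype rendered in Java.
    // A sealed interface plus records mirrors an FP datatype declaration.
    sealed interface Expr permits Lit, Add {}
    record Lit(int value) implements Expr {}
    record Add(Expr left, Expr right) implements Expr {}

    class Demo {
        // FP style: computation as evaluation of an expression, defined
        // by exhaustive case analysis (record patterns, Java 21).
        static int eval(Expr e) {
            return switch (e) {
                case Lit(int v) -> v;
                case Add(Expr l, Expr r) -> eval(l) + eval(r);
            };
        }

        public static void main(String[] args) {
            // (1 + 2) + 3 evaluates to 6.
            System.out.println(eval(new Add(new Add(new Lit(1), new Lit(2)), new Lit(3))));
        }
    }

The classic OO rendering would instead attach an eval() method to each
variant (the Composite pattern) or introduce a Visitor; that both
renderings express the same design is precisely the kind of
convergence at issue.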

In my talk, I will compare and contrast the two ideas of programming
and programming language design. I will present and defend the thesis
that good object-oriented programming heavily "borrows" from functional
programming and that the future of object-oriented programming is to
study functional programming and language design even more.

Matthias Felleisen's career consists of two parts.
For the first ten
years (1984–1994), he focused on the development of a new form of
operational semantics and used this semantics to study design issues
in mostly functional programming languages. His form of operational
semantics, often dubbed evaluation context semantics, has become the
standard tool for studying the well-definedness of programming
languages (aka type soundness theorem). His work on continuation-based
control constructs and calculi of control has spawned small areas of
investigation in both control constructs and logic.
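
For readers unfamiliar with the formalism, a one-line sketch of the
standard call-by-value presentation may help (the notation here is
illustrative, not quoted from Felleisen's papers). Evaluation contexts
locate the next redex, and a single rule performs the reduction step
in place:

    E ::= [ ]  |  E e  |  v E              (evaluation contexts)
    E[(λx.e) v]  →  E[e[x := v]]           (beta-value, applied in context)

Type soundness proofs then proceed by showing that well-typed programs
never get stuck under this reduction relation.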

In 1994, Felleisen and his research group (PLT) began to work on the
development of a programming environment for novice programmers
(DrScheme). They use this software (and a curriculum they developed
in parallel) to inject true design principles into the introductory
programming curriculum. They use the software development project to
study problems in programming languages, software engineering, and
operating systems. Over the past ten years, Felleisen and his
collaborators have published numerous papers on object-oriented
design patterns, the nature of classes and mixins, the interaction
between classes and modules, extensibility in functional and OO
programs, and other matters of objects.

Felleisen spent most of his career at Rice University, with short
sabbaticals at Carnegie Mellon University (Pittsburgh) and École
Normale Supérieure (Paris). He is now a Trustee Professor at
Northeastern University, Boston.

Rich interfaces for software modules

Interfaces play a central role in the modular design of systems. Good
interface design is based on two principles. First, an interface
should expose enough information about a module to make it possible
to predict whether two or more modules will work together properly,
by looking only at their interfaces. Second, an interface should not expose more
information about a module than is required by the first
principle. The technical realization of these two principles depends,
of course, on the precise interpretation of what it means for two or
more modules to "work together properly." A simple interpretation is
offered by typed programming languages: a module that implements a
function and a module that calls that function are compatible if the
function definition and the function call agree on the number, order,
and types of the parameters.
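
To make this type-level interpretation concrete, here is a
hypothetical Java sketch (the interface and all names are invented for
illustration); the two modules agree only on the number, order, and
types of the parameters recorded in the shared interface:

    // The agreed-upon signature: the whole "contract" in the typed setting.
    interface Logger {
        void log(String level, String message);
    }

    // Module 1: an implementation conforming to the interface.
    class ConsoleLogger implements Logger {
        public void log(String level, String message) {
            System.out.println("[" + level + "] " + message);
        }
    }

    // Module 2: a client written purely against the interface. The compiler
    // checks the call below against the interface alone, never the implementation.
    class App {
        static void run(Logger logger) {
            logger.log("INFO", "application started");
        }

        public static void main(String[] args) {
            run(new ConsoleLogger());
        }
    }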

We present richer notions of interfaces, which expose, in addition to
type information, temporal information about software
modules. For example, the interface of a file server with the two
methods open file and read file may stipulate that read file must not
be called before open file has been called. Symmetrically, then, the
interface of a client must specify the possible sequences of open file
and read file calls during its execution, so that a compiler can check
if the server and the client fit together. Such behavioral interfaces,
which expose temporal information about a module and at the same time
impose temporal requirements on the environment of the module, can be
specified naturally using an automaton-based language. In other
situations, the appropriate notion of compatibility between software
modules, as suggested by the first principle of interface design, is
richer still and may require, for example, the exposure of
assertional, real-time, and resource-use information. This leads, in
turn, to push-down, timed, and resource interfaces. For
instance, resource interfaces can be used to ensure that no two
modules simultaneously access a unique resource.
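
To make the file-server example concrete, the following hypothetical
Java sketch renders its behavioral interface as a two-state automaton
(the states, names, and error handling are invented for illustration):

    // The file server's behavioral interface as a two-state automaton:
    // read file is permitted only after open file has been called.
    class FileServerProtocol {
        private enum State { CLOSED, OPEN }
        private State state = State.CLOSED;

        // Transition for an open-file call: legal only in state CLOSED.
        void openFile() {
            if (state != State.CLOSED)
                throw new IllegalStateException("file already open");
            state = State.OPEN;
        }

        // Transition for a read-file call: legal only in state OPEN.
        void readFile() {
            if (state != State.OPEN)
                throw new IllegalStateException("read before open");
            // ... perform the read ...
        }

        public static void main(String[] args) {
            FileServerProtocol p = new FileServerProtocol();
            p.openFile();  // accepted
            p.readFile();  // accepted: open precedes read
        }
    }

Note that the abstract envisions a compiler checking the client's
possible call sequences against such an automaton statically; the
sketch above merely monitors one concrete sequence at run time.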

We formally capture the requirements on interfaces by axiomatizing
interface theories. For example, the axiom of "independent
implementability" of interfaces guarantees that if A and B are
compatible interfaces, and A' is a module that conforms to interface
A, and B' is a module that conforms to interface B, then the
composition A'||B' of the two modules conforms to the composite
interface A||B. For selected interface formalisms, such as behavioral,
push-down, timed, and resource interfaces, we show that they satisfy
the axioms of interface theories, and we discuss the following three
algorithmic problems:

Compatibility: given two interfaces, are they compatible?

Conformance: given an interface and a software module, does the
module conform to the interface?

Interface extraction: given an interface theory and a software
module, what is the interface of the module with respect to the
theory?

In particular, we show that the compatibility checking of interfaces
amounts to solving a game in which the interfaces and the unknown
environment represent players. Furthermore, we show that the
conformance relationship between a module and its interface must be a
contravariant one, which, as in subtyping, treats inputs and outputs
differently. This distinguishes interface conformance from many formal
methods for stepwise refinement.
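
As a hypothetical illustration of this contravariant treatment
(rendered here with Java generics rather than any formalism from the
talk), a module may accept more general inputs and produce more
specific outputs than its interface promises, and still conform:

    import java.util.function.Function;

    class Conformance {
        // The "interface": a transformer from String to Number. Inputs are
        // treated contravariantly (? super String), outputs covariantly
        // (? extends Number), as in function subtyping.
        static Number apply(Function<? super String, ? extends Number> f, String s) {
            return f.apply(s);
        }

        public static void main(String[] args) {
            // This module handles any Object and returns an Integer, yet it
            // conforms to the String-to-Number interface above.
            Function<Object, Integer> module = o -> o.toString().length();
            System.out.println(apply(module, "interface"));  // prints 9
        }
    }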

Tom Henzinger is a Professor of Electrical Engineering and Computer
Sciences at the University of California, Berkeley. He holds a
Dipl.-Ing. degree in Computer Science from Kepler University in Linz,
Austria, an M.S. degree in Computer and Information Sciences from the
University of Delaware, and a Ph.D. degree in Computer Science from
Stanford University (1991). He was an Assistant Professor of Computer
Science at Cornell University (1992–95), and a Director of the
Max-Planck Institute for Computer Science in Saarbrücken, Germany
(1999).

His research focuses on modern systems theory, especially
formalisms and tools for the component-based and hierarchical design,
implementation, and verification of embedded, real-time, and hybrid
systems. His HyTech tool was the first model checker for mixed
discrete-continuous systems.