Monday, April 29, 2013

Yesterday, John Reynolds passed away. I had the privilege of being
one of his graduate students, and much of what I know about what it
means to be a scientist is due to his example. I just want to share a
few small anecdotes about him.

When I was a new graduate student, I began working on the project
which would eventually become my thesis. When I described one of my
early approaches to this problem, he asked a series of questions
about how I would handle one sticky technical point after another,
and each time I showed him how the language and its semantics were
carefully set up so that the issue in question could not arise. After I
finished my explanation, he nodded thoughtfully, and told me,
"Neel, you've done some very good engineering, but engineering is
not science! Good engineers develop a sense of how to avoid
obstacles -- but a scientist's job is to face them head-on and
flatten them."

This made a really big impression on me. As academic computer
scientists, we have two great luxuries: we can choose what to
study, and we can spend as long as necessary studying it. As a
result, our ethical obligation is one step removed from solving
problems: our role is to produce understanding, so that
engineers (indeed, all of society) have new methods to solve new
classes of problems.

Once, John told me about some of his work at Argonne National
Laboratory, where he designed the COGENT programming language,
which, he told me, was what convinced him to leave physics for
computer science. He said that
while today he could (indeed, many people could) design much better
languages, he would always regard it fondly, since it was his first
successful language design.

I asked him what made it successful, and he told me that there
was another scientist at Argonne, who used it to write some
simulation code. When John read the program listing, he realized
that he could not understand how the program worked, since it
relied on domain knowledge that he lacked. So this was John's
definition of a successful language design: if you have a user who
has used it to write a program you couldn't have written yourself,
your language has succeeded, since it has helped a fellow human
being solve one of their own particular problems.

I've always liked his definition, since it manages to avoid an
obsession with nose-counting popularity metrics, while still
remembering the essentially social purpose of language design.

Another time, I asked John what he thought about object-oriented
programming. He told me he thought it was too soon to tell. I burst
out laughing, since I thought he was joking: Simula 67 is
considerably older than I am, and surely that was sufficient time
to form an opinion.

But he insisted that, no, he was not kidding! From a formal,
mathematical, perspective, he did not like object-oriented
programming very much: he had studied various formalizations of
objects in his work on Idealized Algol and on intersection types,
and found they introduced some unwanted complexities. However, so
many excellent engineers still thought that objects were a good idea
that he was unwilling to dismiss the possibility that there was a
beautiful mathematical theory we simply hadn't discovered yet.

I thought this was a great demonstration of Keats's negative
capability, but what made it really impressive was that John
married it to a profound intellectual rigour: I've never known
anyone who could produce counterexamples faster than John
could. While he tested every idea ruthlessly, he never closed
himself off from the world, giving him a rare ability to make
judgements without passing judgement.

I haven't really said anything about John's enormous scientific
achievements, because the deepest lesson I learned from him is not any
one of his theorems, but rather his fundamentally humane
vision of science, as something we do for and with our fellow
scientists and community.

Ars longa, vita brevis, but together we make a chain
through the ages --- and more links of that chain than most can claim
were forged by you, John.

However, Olle Fredriksson wrote to us within one day of the draft going online with a link to his Haskell implementation. This was immensely reassuring, since (a) he implemented it so quickly, and (b) he didn't need to ask us how to implement it -- which means we didn't leave anything important out of the description!

In fact, Olle is technically the first person to implement the algorithm in the paper, since both Joshua and I have only implemented variants of this algorithm for different languages. I'll try to write a small ML implementation to go along with Olle's Haskell code in the next few weeks. If you want to beat me to it, though, I have no objections!

Bidirectional typechecking, in which terms either synthesize a
type or are checked against a known type, has become popular
for its scalability (unlike Damas-Milner type inference,
bidirectional typing remains decidable even for very expressive
type systems), its error reporting, and its relative ease of
implementation. Following design principles from proof theory,
bidirectional typing can be applied to many type
constructs. The principles underlying a bidirectional approach
to polymorphism, however, are less obvious. We give a
declarative, bidirectional account of higher-rank polymorphism,
grounded in proof theory; this calculus enjoys many properties
such as η-reduction and predictability of annotations. We give
an algorithm for implementing the declarative system; our
algorithm is remarkably simple and well-behaved, despite being
both sound and complete.

In this paper, we show how to extend bidirectional typechecking
to support higher-rank polymorphism. Bidirectional typechecking is
attractive in part because it is very easy to implement, and
happily our extension remains simple -- it turns out the fanciest
data structure you need to support higher-rank polymorphism is an
ordered list!
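To make the synthesize/check split concrete, here is a minimal sketch
of the bidirectional skeleton for a simply-typed core, written in
OCaml with names of my own choosing. It is only an illustration, not
the paper's algorithm: the paper layers an ordered context of
existential type variables over this same skeleton to get higher-rank
polymorphism.

```ocaml
(* Bidirectional typechecking for a tiny simply-typed calculus.
   Introduction forms (lambdas, unit) are checked against a type;
   elimination forms (application) synthesize one; annotations let
   checking flip back into synthesis. *)
type ty = Unit | Arrow of ty * ty

type tm =
  | Var of string
  | Lam of string * tm
  | App of tm * tm
  | UnitE
  | Annot of tm * ty   (* explicit annotation: (e : t) *)

(* synth: the term tells us its type. *)
let rec synth (ctx : (string * ty) list) (e : tm) : ty =
  match e with
  | Var x -> List.assoc x ctx          (* raises Not_found if unbound *)
  | Annot (e', t) -> check ctx e' t; t
  | App (f, a) ->
      (match synth ctx f with
       | Arrow (t1, t2) -> check ctx a t1; t2
       | _ -> failwith "applied a non-function")
  | _ -> failwith "cannot synthesize; add an annotation"

(* check: we know the type and push it into the term. *)
and check ctx e t =
  match e, t with
  | Lam (x, body), Arrow (t1, t2) -> check ((x, t1) :: ctx) body t2
  | Lam _, _ -> failwith "lambda checked against a non-arrow type"
  | UnitE, Unit -> ()
  | _ -> if synth ctx e <> t then failwith "type mismatch"
```

For example, synth [] (App (Annot (Lam ("x", Var "x"), Arrow (Unit, Unit)), UnitE)) returns Unit: the annotation lets the identity function synthesize an arrow type, and the application then checks its argument against Unit.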

Effective support for custom proof automation is essential
for large-scale interactive proof development. However,
existing languages for automation via tactics either (a)
provide no way to specify the behavior of tactics within the
base logic of the accompanying theorem prover, or (b) rely on
advanced type-theoretic machinery that is not easily
integrated into established theorem provers.

We present Mtac, a lightweight but powerful extension to
Coq that supports dependently-typed tactic
programming. Mtac tactics have access to all the features
of ordinary Coq programming, as well as a new set of typed
tactical primitives. We avoid the need to touch the trusted
kernel typechecker of Coq by encapsulating uses of these
new tactical primitives in a monad, and instrumenting Coq
so that it executes monadic tactics during type inference.

Since I wasn't the lead author, I see no need to be modest: this
is really, really good. The power-to-weight ratio of
this approach to tactics is incredibly high, and I seriously think
that every dependently typed language implementor ought to consider
adding something like Mtac to their system.
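To give a flavour of the architecture (and only a flavour: this is an
OCaml sketch with invented names, not Mtac's actual Coq interface),
the encapsulation idea is that tactic effects live inside a monadic
type, and only a separate interpreter gives them meaning, so the
ordinary typechecker never has to know about the new primitives:

```ocaml
(* A toy tactic monad: failure and sequencing are the only "effects"
   here. In Mtac the primitives are much richer (unification, pattern
   matching on terms, etc.) and the interpreter runs during Coq's type
   inference; this sketch shows only the encapsulation pattern. *)
type _ tactic =
  | Ret  : 'a -> 'a tactic                           (* pure result *)
  | Fail : string -> 'a tactic                       (* tactic failure *)
  | Bind : 'b tactic * ('b -> 'a tactic) -> 'a tactic

let ret x = Ret x
let fail msg = Fail msg
let ( >>= ) m f = Bind (m, f)

(* The interpreter is the only place effects happen. Everything
   outside [run] treats tactic values as ordinary, pure data, which
   mirrors how Mtac leaves Coq's trusted kernel untouched. *)
let rec run : type a. a tactic -> (a, string) result = function
  | Ret x -> Ok x
  | Fail msg -> Error msg
  | Bind (m, f) ->
      (match run m with
       | Ok x -> run (f x)
       | Error e -> Error e)
```

Here run (ret 1 >>= fun x -> ret (x + 1)) evaluates to Ok 2, while anything containing fail collapses to an Error.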

Functional reactive programming (FRP) is an elegant approach to
declaratively specifying reactive systems. However, the powerful
abstractions of FRP have historically made it difficult to predict
and control the resource usage of programs written in this style.

In this paper, we give a new language for higher-order reactive
programming. This language generalizes and simplifies prior type
systems for reactive programming: it supports first-class streams
(including streams of streams), first-class functions, and
higher-order operations, and it permits encoding many temporal
operations beyond streams, such as terminatable streams, events,
and even resumptions with first-class schedulers. Furthermore, our
language supports an efficient implementation strategy permitting
us to eagerly deallocate old values and statically rule out
spacetime leaks, a notorious source of inefficiency in reactive
programs. Moreover, these memory guarantees are achieved without
the use of a complex substructural type discipline.

We also show that our implementation strategy of eager
deallocation is safe, by proving the soundness of our type system
with a novel step-indexed Kripke logical relation.
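As a rough illustration of the style of programming the abstract
describes (plain OCaml, with thunks standing in for the paper's
temporal typing, so none of the memory guarantees actually hold
here), first-class streams and streams of streams might look like
this:

```ocaml
(* A stream yields one element now and a tail that is only available
   at the next tick; an ordinary thunk stands in for the "later"
   modality. Unlike the paper's language, nothing here stops us from
   hoarding old values, so this sketch rules out no spacetime leaks. *)
type 'a later = unit -> 'a
type 'a stream = Cons of 'a * 'a stream later

(* Streams are first-class, so higher-order operations are ordinary
   recursive functions. *)
let rec map (f : 'a -> 'b) (Cons (x, xs) : 'a stream) : 'b stream =
  Cons (f x, fun () -> map f (xs ()))

let rec nats_from (n : int) : int stream =
  Cons (n, fun () -> nats_from (n + 1))

(* A stream of streams: at each tick, the current stream's tail. *)
let rec tails (Cons (_, xs) as s : int stream) : int stream stream =
  Cons (s, fun () -> tails (xs ()))
```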

You can download an almost completely undocumented implementation
if you want to play with this kind of FRP language. It's also good
if you want to see what a language with linear types looks like.
Also, I use the type inference algorithm described in the first
draft above. This language doesn't have dependent types (yet?), so
no Mtac, though. :)