Nice, thanks! I wonder if you could say anything about how Andromeda compares to a system like Twelf, which also has both a meta-level and an object-level? I guess Andromeda’s meta-level is general-recursive and simply typed whereas Twelf’s is terminating and dependently typed, and Andromeda’s object-level has certain primitives built-in whereas Twelf has the flexibility to represent essentially any object theory. Are there other similarities or differences?

@Mike: Peter Aczel asked a similar question, and I think it’s a very good one. As far as I can tell, once we remove the “Type in Type” hackery, we’ll get something like “LF with equality”. Your comparison is not quite right: Andromeda’s object-level is strictly more expressive than LF’s. You can represent essentially any object theory in it, together with the desired judgmental equalities, which is not doable in LF. The meta-level of Twelf is very rudimentary (it’s the stuff you write with the % sign in front), whereas Andromeda’s is a general-purpose programming language. So perhaps one way to characterize Andromeda would be “LF with object-level judgmental equalities + general-purpose ML”.

Sometimes people draw a distinction between logical frameworks and metalogical frameworks. I don’t get the terminology, but the idea is that logical frameworks are for specifying then *using* logics, whereas metalogical frameworks are for specifying then *studying* logics. Twelf is primarily intended as a metalogical framework. Isabelle seems to be the main logical framework. Andromeda seems more like a logical framework than a metalogical framework, so it might make more sense to compare it to Isabelle. (I need to learn Isabelle.)

Ok, I’m confused now. At what level do Andromeda’s dependent Pi types live? I thought you were saying they were part of the object theory; if the meta-theory is ML (which is simply typed) then they can’t be there, can they? But there are plenty of theories one might want to represent that don’t have Pi types. Or did you switch at the point of introducing Pi types to describing just one *particular* object theory that could be represented in Andromeda?

Also, I would not say that the metatheory of Twelf is only the % commands; there is a whole dependently typed language with Pi types and one universe. I feel like its % commands are almost like a “meta-meta-language” for automatically proving things about representations of some object language in the metalanguage.

Hi Mike, it looks like you’re right. The Twelf tutorial calls LF the meta-language. I find that weird. LF gets extended to correspond to the object language, so I wouldn’t have thought it counts as “meta” personally.

Regardless of terminology, I think Andrej is comparing Twelf’s LF to Andromeda’s extensional dependent type system, and Twelf’s inductive metatheorem system to AML. (The metatheorem system and AML are completely different, and they have very different purposes.)

Isabelle has (a restriction of?) HOL at the LF level, and an ML flavor above that.

I’m not sure what you mean by “LF gets extended to correspond to the object language”. The types of LF generally represent judgments in the object language, so I can’t think of any sense in which LF itself “corresponds” to the object language.

Are you saying that the way one actually codes in Andromeda is to use the extensional dependent type system as a meta-theory to describe some other object theory, the way one uses LF in Twelf?

(By the way, as I’m sure you know, the relative weakness of LF vis-a-vis something like MLTT is intentional; in particular it enables the use of higher-order abstract syntax to describe variable binding in object theories. If Andromeda’s extensional dependent type system is too powerful, then that trick wouldn’t be available therein, making encoding object theories inside it more cumbersome.)
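The HOAS trick mentioned above can be illustrated in any functional metalanguage; here is a hedged OCaml sketch (all names mine) of higher-order abstract syntax for the untyped lambda calculus, where object-level binding is represented by metalanguage functions:

```ocaml
(* Hedged sketch: HOAS for the untyped lambda calculus in OCaml.
   Object-level binders are metalanguage functions, so the metalanguage
   handles substitution for us. *)
type tm =
  | App of tm * tm
  | Lam of (tm -> tm)

(* Reduce a head beta-redex, if there is one: substitution is just
   application of the metalanguage function. *)
let beta (t : tm) : tm =
  match t with
  | App (Lam f, a) -> f a
  | _ -> t

(* (\x. x) applied to (\y. y) reduces to (\y. y). *)
let id = Lam (fun x -> x)
let result = beta (App (id, id))
```

Note that in a strong metalanguage like OCaml this encoding is not adequate: there are “exotic” inhabitants such as `Lam (fun x -> beta x)`, built from functions that do not correspond to object-level syntax. That is exactly why LF’s weakness matters for the trick.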

Mike, it looks like you already knew what I mean, and the way I expressed it just threw you off. Sorry. By “extended”, I mean by declaring constants in a signature. By “correspond” I just mean the judgments-as-types representation.

> Are you saying that the way one actually codes in Andromeda is to use the extensional dependent type system as a meta-theory to describe some other object theory, the way one uses LF in Twelf?

(I’m not sure this was addressed to me, but…) That is what the Andromeda guys have been doing to get type constructors other than Pi and equals. I wrote them an email pointing out the similarity to logical frameworks, and saying I think it makes a lot of sense, and that it will make even more sense once Type:Type is dropped. They were already aware of the similarity.

Your point about a danger with Andromeda’s type theory being too strong is right. I’ve given it some thought, and I’m pretty sure that there are at least two weak, extensional type theories that make sense for HOAS and judgments-as-types. One is basically just LF + extensional equality. (There was much more detail about this in my email to them.)

I am curious how it was decided to use rules based on context joining. It seems to be the right choice *provided* you’re computing contexts “on the way up” starting at the leaves of the expression tree. But it’s typical to compute contexts “on the way down”, passing *the same* context (but with new bindings when appropriate) to each subexpression when it’s checked. Tactic systems are also usually made to work this way, or so it seems, with the context already known for any given hole that is to be filled.

The standard typing rules need to be adapted for this top-down context checking, but I had assumed there would be no problem making this adaptation, since it seems standard. Does it somehow go wrong for ETT? (Works fine for Computational TT.) Or is there some other reason to go bottom-up? Please explain. Thanks!

The discussion suggests that there is confusion about what is Twelf and what is a logical framework. In Twelf you can both build derivations in your object logic and also reason about the properties of that object logic. We did a complete verification of the soundness of the SML type system in Twelf. This means encoding the statics and dynamics of SML as logics, and proving meta-theorems about them using the facilities in Twelf for reasoning about logics. Twelf makes it particularly easy to prove Pi2 properties of logics, which is quite enough for the purposes of that project. It is not at all correct to say that Isabelle is a logical framework and that Twelf is something else.

Instead of proving Pi2 sentences you can work with realizers of them. This would be functional programming over the object logic. That is what Delphin and Beluga are all about. It’s partly a matter of taste whether to prefer a relational formulation or a functional formulation of the metatheory. We’ve found Twelf quite adequate for many large-scale purposes (30K lines of code to do the verification mentioned).

I don’t see how judgments-as-types could be anything but using LF as a metatheory to describe an object theory. Judgments are not types inside the object theory, so if they are LF types then LF must be the metatheory.

> I don’t see how judgments-as-types could be anything but using LF as a metatheory to describe an object theory. Judgments are not types inside the object theory, so if they are LF types then LF must be the metatheory.

For me, a metalanguage must be used to work with the syntax of the object language. In Twelf, LF doesn’t do this; the totality checker (the thing for proving metatheorems) essentially ends up doing this, in a logic-programmy sort of way. The syntax of LF itself, extended with some signature, is manipulated. There are only two levels: LF is the lower level, and assuming a signature in LF is no more meta than assuming axioms in Coq. The original object language (as opposed to the LF representation) is not formally involved.

LF totally does work with the syntax of the object language. When we define STLC inside Twelf like here, we are describing the syntax of STLC, with things like application and abstraction, using LF as the metalanguage. We could do the same thing in Coq. It’s true that we don’t state theorems about the object language inside LF, and that the meta-meta-language of % commands proves things about the object language by proving things about the metalanguage LF, but LF is still the metalanguage used to define the object language; the point is just that when we write “arrow : tp -> tp -> tp”, the arrow “->” of LF is one level higher than (and, in particular, different from) the “arrow” of STLC.
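The same level distinction can be made concrete in any metalanguage; here is a hedged OCaml analogue of `arrow : tp -> tp -> tp` (identifiers mine):

```ocaml
(* Hedged sketch: the object language's types as an OCaml datatype.
   OCaml's own "->" lives one level above the Arrow constructor, which
   is just a piece of object-language syntax. *)
type tp =
  | Base
  | Arrow of tp * tp   (* the STLC arrow: mere syntax *)

(* A metalanguage *function* on object syntax, using OCaml's arrow: *)
let rec order : tp -> int = function
  | Base -> 0
  | Arrow (a, b) -> max (order a + 1) (order b)

let t = Arrow (Arrow (Base, Base), Base)   (* (Base -> Base) -> Base *)
```

Here `order : tp -> int` uses the metalanguage arrow, while `Arrow` merely represents the object-language one; conflating the two is exactly the confusion at issue.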

I think Andromeda cannot be used to prove meta-theorems about SML the way Twelf does. Or to put it differently, perhaps it can if someone writes AML code which essentially does what Twelf does, but correctness of such a prover would depend on the user not assuming certain things. I need to think about this.

We really are going back to LCF-style here. There’s a base theory which the user may extend with a signature (a bunch of constants) and then can write programs that compute judgments. That’s all there is, really. When I said that the system would be like LF, I meant to say that the base theory (Pi’s and equality types) would look a bit like LF’s kinds and families.
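For readers unfamiliar with LCF, here is a hedged OCaml caricature of the LCF discipline (all names are mine, not Andromeda’s actual API): an abstract type of judgments whose only constructors are the inference rules, so that anything of type `thm` must have been built by the rules.

```ocaml
(* Hedged LCF-style kernel sketch. The signature hides the representation
   of `thm`, so clients can only build judgments via the rules below. *)
type formula = Atom of string | Imp of formula * formula

module Kernel : sig
  type thm                            (* abstract: Gamma |- phi *)
  val assume : formula -> thm         (* phi |- phi *)
  val intro : formula -> thm -> thm   (* discharge a hypothesis *)
  val elim : thm -> thm -> thm        (* modus ponens *)
  val concl : thm -> formula
  val hyps : thm -> formula list
end = struct
  type thm = formula list * formula
  let assume p = ([p], p)
  let intro p (hs, q) = (List.filter (fun h -> h <> p) hs, Imp (p, q))
  let elim (hs1, pq) (hs2, p') =
    match pq with
    | Imp (p, q) when p = p' -> (hs1 @ hs2, q)
    | _ -> failwith "elim: not a matching implication"
  let concl (_, q) = q
  let hyps (hs, _) = hs
end

(* |- a -> a, built only from the rules: *)
let a = Atom "a"
let thm_id = Kernel.(intro a (assume a))
```

Everything above the kernel, tactics included, is then ordinary programming; soundness rests only on the kernel’s rules.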

What you are taking for granted, and not defending, is that the relationship of LF types to object language types is the same as, or at least analogous to, the relationship of metalanguage types to object language types.

It simply doesn’t seem that way to me. (And apparently not to Bob either, from his above comment.)

With judgments-as-types, the whole business is syntactic. The syntax of the object language is represented using the syntax of LF. With metatheory, you have a traditional logic, with syntax and semantics, and you represent the syntax of an object language using (metalanguage) objects whose meaning is just syntax. LF has nothing at all to say about meaning. For its use as a logical framework, LF terms may as well be meaningless. The whole damn language is just a fancy syntax notation.

(It turns out you can try to ascribe a meaning to judgments-as-types representations. But it’s probably not a good idea in the case of LF. Many adequate representations would not yield the intended semantics of the object logic. With extensional LF, OTOH, analytic-style representations seem to assign the right meaning to ordinary (not modal or substructural) dependent type systems with unique typing. This can be seen as a generalization of universe coding. Meanwhile, in Isabelle, a synthetic-style judgments-as-types representation into HOL can be seen as assigning a refinement-style semantics to pretty much any object language.)

> LF totally does work with the syntax of the object language. When we define STLC inside Twelf like here, we are describing the syntax of STLC, with things like application and abstraction, using LF as the metalanguage.

and

> …but LF is still the metalanguage used to define the object language; the point is just when we write “arrow : tp -> tp -> tp”, the arrow “->” of LF is one level higher than (and, in particular, different from) the “arrow” of STLC.

> I think Andromeda cannot be used to prove meta-theorems about SML the way Twelf does. Or to put it differently, perhaps it can if someone writes AML code which essentially does what Twelf does, but correctness of such a prover would depend on the user not assuming certain things.

It seems that way to me too.

> I need to think about this.

Here’s an idea I had at some point, if you’re going that way. Twelf-style metatheorem proving seems like it needs a dramatic overhaul to do more impressive metatheory, like normalization, logical relations, and semantics.

But, step 1, you can assume another signature to be used as a richer metalanguage. Step 2, you represent the object language in (the representation of) the metalanguage, in some appropriate way. Step 3, you prove the metatheorem in the new metalanguage.

But how do you know the object language representation in the new metalanguage agrees with the original representation you want to use? Step 4: use a Twelf-style metatheorem to formally establish the adequacy of one representation w.r.t. the other!

Step 2 becomes much easier if the logical framework has support for quoting. (In regular LF it wouldn’t be needed, but in extensional LF, we need a way to suppress the equations, to get back to syntactic equality.)

Hmm, I think I see your point, but I still regard it as some kind of metalanguage.

I don’t think there is anything to “defend” about why the relationship of LF types to object language types is analogous to that in other metalanguages: it’s literally the same, you have an LF-type that represents each judgment, just as when describing the syntax of type theory as an initial algebra for a many-sorted algebraic theory in ZF you would have a ZF-set of derivation trees for each judgment.

I think that your point is not about the relationship between types; it’s about the fact that when we describe the object language in LF using HOAS, the reason that that description is correct is a meta-theorem involving the weakness of LF (e.g. no inductive types); whereas in the case of ZF we can prove the correctness of that representation internally to ZF.

So I agree, there is something different about it, but I also think it’s reasonable to consider it as a (different sort of) metatheory.

> I don’t think there is anything to “defend” about why the relationship of LF types to object language types is analogous to that in other metalanguages: it’s literally the same, you have an LF-type that represents each judgment, just as when describing the syntax of type theory as an initial algebra for a many-sorted algebraic theory in ZF you would have a ZF-set of derivation trees for each judgment.

I don’t see it that way. A type of derivation trees is an inductive type. A judgment-as-type is an abstract type. Actually, I’m not sure a weak system like LF is a true requirement for judgments-as-types. What goes wrong if you assumed the same signature in Coq? It seems nothing goes wrong, because of parametricity. It makes adequacy less direct because you need to invoke parametricity. (OTOH, with LF, you’re still implicitly using strong normalization.) Clearly, these abstract types are useless for carrying out metatheory internally to Gallina. After all, judgments-as-types is not metatheory.
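The contrast being drawn here can be sketched in OCaml’s module language (names mine): derivations as an inductive type support case analysis, whereas a judgment-as-abstract-type only postulates the rules.

```ocaml
(* Derivations of the judgment "T type" as an inductive type:
   clients can pattern-match, so metatheory by induction is possible. *)
type deriv =
  | DBase                    (* Base is a type *)
  | DArr of deriv * deriv    (* if A and B are types, so is A -> B *)

let rec size = function
  | DBase -> 1
  | DArr (a, b) -> 1 + size a + size b

(* The judgment as an abstract type: the signature only postulates the
   rules. Clients cannot inspect inhabitants, only build them, so closed
   inhabitants are rule-generated (a parametricity-style fact), but no
   internal induction principle is available. *)
module type JUDG = sig
  type is_tp
  val base : is_tp
  val arrow : is_tp -> is_tp -> is_tp
end

(* Any implementation interprets the rules however it likes: *)
module Count : JUDG with type is_tp = int = struct
  type is_tp = int
  let base = 0
  let arrow a b = 1 + max a b
end
```

The `JUDG` half illustrates why assuming the same signature in a stronger system like Coq plausibly stays adequate via parametricity, while being useless for internal metatheory.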

> I think that your point is not about the relationship between types; it’s about the fact that when we describe the object language in LF using HOAS, the reason that that description is correct is a meta-theorem involving the weakness of LF (e.g. no inductive types); whereas in the case of ZF we can prove the correctness of that representation internally to ZF.

I’m not sure what correctness property you’re talking about. It doesn’t make sense to prove adequacy internal to ZF unless you have multiple formal definitions of the language internal to ZF.

I think we are just arguing about the meanings of words. The fact that a particular metatheory is not very useful for proving substantive theorems about the object language without passing to a meta-meta-language does not, in my idiolect, make it itself any less meta. The fact that LF-types represent judgments of the object theory, which are not internal statements in that theory, means to me that LF must be a metatheory, even if it is a very different sort of metatheory than ZF.

I also think there’s not much point to continuing the discussion about LF. What I would like to understand is exactly how Andromeda works. I think I have a better sense for that now, but it would help if Andrej could clarify a couple of things: when you said “essentially any object theory”, did you mean “essentially any object theory with Pi and equality types”? And what do you mean by “write programs that compute judgments”? (I don’t know anything about LCF.)

Yes, I meant “essentially any object theory with Pi and equality”. If you want a weaker theory you’d have to prove conservativity of Pi and equality over it.

In an LCF-style prover you get a general-purpose programming language. You can implement Tetris in it if you wish. The language allows you to implement all sorts of tactics and procedures etc. You can (presumably) even implement an interactive mode.
Once basic facilities are implemented, you can use the system like a traditional proof assistant. But at every point you can go back and tweak things or add another low-level facility.
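As a hedged illustration (not Andromeda’s actual interface) of what “tactics as ordinary programs” buys you, here are OCaml-style tactic combinators over a toy goal type:

```ocaml
(* Hedged sketch: tactics as ordinary functions in an LCF-style prover.
   A goal is just a string here; a tactic transforms the goal list or
   raises Failure. Combinators are plain higher-order functions. *)
type goal = string
type tactic = goal list -> goal list

let then_ (t1 : tactic) (t2 : tactic) : tactic =
  fun gs -> t2 (t1 gs)

let orelse (t1 : tactic) (t2 : tactic) : tactic =
  fun gs -> try t1 gs with Failure _ -> t2 gs

let rec repeat (t : tactic) : tactic =
  fun gs -> try repeat t (t gs) with Failure _ -> gs

(* A toy tactic that discharges a leading goal named "trivial". *)
let trivial : tactic = function
  | "trivial" :: rest -> rest
  | _ -> failwith "trivial: no match"

let remaining = repeat trivial ["trivial"; "trivial"; "hard"]
```

Because tactics are just values of the metalanguage, one can build arbitrarily elaborate search procedures (or, indeed, Tetris) without touching the trusted kernel.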

Perhaps you can look at some examples and ask questions about them. Here’s a small example that can start a discussion (to discuss the example specifically, please use the Gist interface).

> I am curious how it was decided to use rules based on context joining. It seems to be the right choice *provided* you’re computing contexts “on the way up” starting at the leaves of the expression tree. But it’s typical to compute contexts “on the way down”, passing *the same* context (but with new bindings when appropriate) to each subexpression when it’s checked. Tactic systems are also usually made to work this way, or so it seems, with the context already known for any given hole that is to be filled.

> The standard typing rules need to be adapted for this top-down context checking, but I had assumed there would be no problem making this adaptation, since it seems standard. Does it somehow go wrong for ETT? (Works fine for Computational TT.) Or is there some other reason to go bottom-up? Please explain. Thanks!

It’s because of handlers. They let us write something like this:

    try lambda x : A, raise (Error x)
    with Error e => e
    end

which evaluates to a fresh free variable of type A. If you compute the context on the way down you get the judgement |- x : A, which is really screwed up if x is a de Bruijn index and still screwed up if it’s some sort of name.
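For concreteness, here is a hedged sketch of the bottom-up alternative (representation mine, not Andromeda’s): each judgment carries its own context, and combining judgments joins their contexts, failing on a conflict.

```ocaml
(* Hedged sketch: bottom-up contexts. Each judgment records the free
   variables it depends on; joining merges two contexts and fails if
   the same variable is recorded with different types. *)
type ctx = (string * string) list    (* variable name, type name *)

let join (c1 : ctx) (c2 : ctx) : ctx option =
  List.fold_left
    (fun acc (x, t) ->
       match acc with
       | None -> None
       | Some c ->
         (match List.assoc_opt x c with
          | None -> Some ((x, t) :: c)
          | Some t' -> if t = t' then Some c else None))
    (Some c1) c2

(* Joining compatible contexts succeeds; a clash fails. *)
let ok = join [("x", "A")] [("y", "B"); ("x", "A")]
let clash = join [("x", "A")] [("x", "B")]
```

Because each judgment owns its context, a variable smuggled out of its binder by a handler still carries the assumptions it depends on, rather than dangling in a context computed on the way down.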

> I think we are just arguing about the meanings of words. The fact that a particular metatheory is not very useful for proving substantive theorems about the object language without passing to a meta-meta-language does not, in my idiolect, make it itself any less meta. The fact that LF-types represent judgments of the object theory, which are not internal statements in that theory, means to me that LF must be a metatheory, even if it is a very different sort of metatheory than ZF.

But it is not. You are confusing the ML with the object theory. If we did that with ML functions then it would be crazy. With lambda expressions we’re just fiddling with syntax trees. Lambda expressions are *not* functions or closures.

Here’s what I’m thinking. When building object-language contexts on the way down, it’s like the metalanguage has an extra/enriched stack for bound variables in the object-language term currently under construction. (Think of Ltac.) When the bound variables are metalanguage values, you have to worry about them escaping the extent (stack frame) in which their binder is still around.

One kind of solution would be to somehow prevent escape. Your solution was instead more like giving up the stack discipline for binders, and using a more heap-like discipline so that the validity of a bound variable is not tied to a particular metalanguage extent.