In the theory of programming languages and structural proof theory, one of the handiest techniques available is a method called "logical relations", in which you prove properties of a language by defining families of predicates or relations by induction on its type structure.

For example, if you want to prove that all definable functions in a language halt, you start by giving a predicate $P_X$ picking out the terms of base type $X$ which halt. Then, at higher type (say $A \to B$) you define $P_{A \to B}$ to pick out those terms which halt, and which additionally take elements of $P_A$ to elements of $P_B$. (You need this extra strength because otherwise a term $t$ of type $A \to B$ may evaluate to a function value, which diverges when given an input.) Then you prove a theorem showing that every definable term is in the predicate, and there you go.
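Schematically, writing $t \Downarrow$ for "$t$ halts", the family of predicates reads:

```latex
\begin{aligned}
P_X &= \{\, t : X \mid t \Downarrow \,\} \\
P_{A \to B} &= \{\, t : A \to B \mid t \Downarrow \ \text{and}\ \forall s \in P_A.\ t\,s \in P_B \,\}
\end{aligned}
```

The theorem that every well-typed term satisfies the predicate at its type (the "fundamental theorem" of logical relations) then goes by induction on typing derivations.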

This is a very natural technique, and I've often wondered where it shows up in the rest of mathematics. Recently, I learned that category theorists know it too, and call it by various names -- I listed the ones I was able to find in the question. Since (a) categories seem to serve as an interlingua for mathematics, and (b) lots of mathematicians hang out here, I figured I could ask: what other branches of mathematics has this technique been used in, and for what?

Dear Neel, I am a little bit confused: is there a difference between "logical relations" and structural induction over the types of your term system?
–
alexod, May 15 '10 at 18:45

In a logical relation, you define a family of predicates/sets by recursion over the structure of types -- i.e., you give a recursive function $\mathrm{SyntacticType} \to \mathrm{Set}$ (here, sets of terms). Such a definition becomes "logical" when you define this function to ensure that the terms you pick out appropriately respect the desired categorical structure of each type constructor. Categorically, we're "gluing along the hom-functor" of the syntactic category of types and terms. The explicit recursion goes away here, which gives me hope for a broader set of analogies.
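To make this concrete (a sketch, with notation of my choosing): write $\mathcal{T}$ for the syntactic category and $\Gamma = \mathcal{T}(1,-) : \mathcal{T} \to \mathbf{Set}$ for the hom-functor out of the terminal object, so that $\Gamma A$ is the set of closed terms of type $A$ (up to the ambient equivalence). The glued category is the comma category $\mathbf{Set} \downarrow \Gamma$:

```latex
\text{objects:}\quad (S,\ A,\ f : S \to \Gamma A) \\
\text{maps } (S,A,f) \to (T,B,g)\text{:}\quad
(h : S \to T,\ k : A \to B)\ \text{such that}\ g \circ h = \Gamma k \circ f
```

A logical predicate is the special case where $f$ is a subset inclusion $S \subseteq \Gamma A$; the closure conditions imposed at each type constructor say exactly that these subsets carry the corresponding structure in the comma category.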
–
Neel Krishnaswami, May 15 '10 at 19:50

Neel, out of curiosity, did you find out about this connection in Coquand and Dybjer's "Intuitionistic Model Constructions and Normalization Proofs"? I also recently stumbled upon that paper.
–
Noam Zeilberger, May 19 '10 at 7:34

I found out about it in Hyland and Schalk's "Glueing and Orthogonality for Models of Linear Logic". This led me to Mitchell and Scedrov's "Notes on Sconing and Relators", which I found to be a good exposition of how logical relations are an instance of sconing.
–
Neel Krishnaswami, May 19 '10 at 7:52

2 Answers

This is a powerful technique that can prove consistency, conservativity (that a statement in the language of a smaller system which is a theorem of a more expressive one was already a theorem of the smaller one), etc. Applied to programming languages, it can show that if the result of a program in its denotational semantics is a number (as opposed to undefined), then when you run the program it is guaranteed to terminate (maybe after the Sun has gone supernova) and return that number.
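Schematically, such an adequacy result has the shape (with $\llbracket - \rrbracket$ the denotational semantics valued in, say, $\mathbb{N}_\bot$, and $\Downarrow$ the operational evaluation relation):

```latex
\llbracket M \rrbracket = n \quad (n \in \mathbb{N})
\quad\Longrightarrow\quad
M \Downarrow \underline{n}
```

and the standard proof relates denotations to terms by a logical relation defined by induction on types.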

It works by tying the syntax and the semantics together in lock-step, so that (maybe easy) observations about the semantic structure have direct consequences for the existence of a proof.

See Section 7.7 in my book "Practical Foundations of Mathematics" for one categorical treatment, although there is a vast literature in theoretical computer science about this.

Of course, the construction uses structural recursion over the syntax. Amongst its consequences are consistency results. For anyone aware of Gödel's incompleteness theorem, this should set some alarm bells ringing.

The solution is that the semantic structure (often a Grothendieck topos) is logically much stronger than the syntactic one. If, for example, the latter is the logic of an elementary topos then the former must enjoy (some fragment of) the axiom-scheme of replacement.

PS The actual categorical construction is extremely simple. The "lock-step" property has to be proved as a theorem for each type constructor (e.g. function-spaces), and it is valid in many cases, although not for higher-order logic.
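For instance, in the scone over $\Gamma = \mathcal{C}(1,-)$, the candidate exponential of glued objects $(S, A, f)$ and $(T, B, g)$ can be described (modulo details) by a pullback of sets:

```latex
\begin{array}{ccc}
E & \longrightarrow & T^S \\
\downarrow & & \downarrow{\scriptstyle\, g \circ -} \\
\Gamma(B^A) & \longrightarrow & (\Gamma B)^S
\end{array}
```

where the bottom map sends a global point of $B^A$ to the function it induces on the $f$-images; the "lock-step" theorem at this constructor is the verification that $\big(E,\ B^A,\ E \to \Gamma(B^A)\big)$ really is an exponential in the glued category.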

Hi Paul! I'm a computer scientist, so CS & logic applications are familiar to me. However, I'm mostly curious whether there are applications outside these areas -- after all, the very name "glueing" suggests some kind of topological connection. Unfortunately, the TCS literature is so vast that it dominates all Google searches.
–
Neel Krishnaswami, May 24 '10 at 7:59


Yes, Neel, there are other applications, but the thrust of your original question seemed to be logical, so I answered that. I was also leaving it to other, better qualified, people to give the older applications in algebraic geometry, topos theory, etc. The original categorical construction appears in SGA 4, exposé IV, section 9.5, and is attributed to Michael Artin. The application to logic is due to Peter Freyd. I expect that there is a lot about it in Peter Johnstone's "Sketches of an Elephant", and you would probably find out more by asking on "categories".
–
Paul Taylor, May 24 '10 at 10:43

Freyd invented the word "scone", as a corruption of "Sierpinski cone", so it is pronounced "scown" and not "scon".
–
Paul Taylor, May 24 '10 at 12:25

I think the terminology "gluing" comes from the following example. Let $X$ be a topological space, let $U$ be an open subset of $X$, and let $K = X \setminus U$ be its complementary closed subset. Then the inclusions of $U$ and $K$ into $X$ induce geometric morphisms $p : Sh(U) \to Sh(X)$ and $q : Sh(K) \to Sh(X)$. The composite $q^* p_* : Sh(U) \to Sh(K)$ is then left exact, and the "gluing construction" applied to it (i.e. the comma category $(Sh(K) \downarrow q^* p_*)$) recovers $Sh(X)$. In other words, the functor $q^* p_*$ tells you how to glue together $U$ and $K$ to get back $X$, and the comma category is what does the gluing. This generalizes to complementary open and closed subtoposes of any topos, not just of sheaves on a topological space.
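Spelled out, an object of this comma category is a triple, and the equivalence with $Sh(X)$ restricts a sheaf to the two pieces and records the canonical comparison map obtained by applying $q^*$ to the unit $\eta$ of the adjunction $(p^*, p_*)$:

```latex
\big(B \in Sh(K),\ A \in Sh(U),\ \beta : B \to q^* p_* A\big),
\qquad
F \ \longmapsto\ \big(q^* F,\ p^* F,\ q^*(\eta_F) : q^* F \to q^* p_* p^* F\big)
```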