In our last installment in this series on Stone duality, we introduced the notion of Heyting algebra, which captures the basic relationships between the logical connectives “and”, “or”, and “implies”. Our discussion disclosed a fundamental relationship between distributive laws and the algebra of implication, which we put to work to discover the structure of the “internal Heyting algebra logic” of a topology.

I’d like to pause and reflect on the general technique we used to establish this relationship; like the Yoneda principle and the Principle of Duality, it comes up with striking frequency, and so it will be useful for us to give it a name. As it turns out, this particular proof technique is analogous to the way adjoints are used in linear algebra. Such analogies go all the way back to work of C. S. Peirce, who like Boole was a great pioneer in the discovery of relationships between algebra and logic. At a deeper level, similar analogies were later rediscovered in category theory, and are connected with some of the most potent ideas category theory has to offer.

Our proof that meets distribute over sups in the presence of an implication operator is an example of this technique. Here is another example of similar flavor.

Theorem: In a Heyting algebra $H$, the operator $a \Rightarrow (-): H \to H$ preserves any infs which happen to exist in $H$, for any element $a \in H$. [In particular, this operator is a morphism of meet-semilattices, i.e., $a \Rightarrow (x \wedge y) = (a \Rightarrow x) \wedge (a \Rightarrow y)$, and $a \Rightarrow 1 = 1$.]

Proof: Suppose that $S \subseteq H$ has an inf, which here will be denoted $\bigwedge S$. Then for all $b \in H$, we have

$b \leq a \Rightarrow \bigwedge S$

if and only if

$a \wedge b \leq \bigwedge S$

if and only if

(for all $s \in S$, $a \wedge b \leq s$) if and only if

for all $s \in S$, $b \leq a \Rightarrow s$.

By the defining property of inf, these logical equivalences show that $a \Rightarrow \bigwedge S$ is indeed the inf of the subset $\{a \Rightarrow s : s \in S\}$, or in other words that $a \Rightarrow \bigwedge S = \bigwedge_{s \in S} (a \Rightarrow s)$, as desired.

In summary, what we did in this proof is “slide” the operator $a \Rightarrow (-)$ on the right of the inequality over to the operator $a \wedge (-)$ on the left, then invoke the defining property of infs, and then slide back to $a \Rightarrow (-)$ on the right. This sliding trick is analogous to how adjoint mappings work in linear algebra.
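The theorem above can be checked concretely in a small example. Here is a quick sanity check (my own illustration, not from the original discussion) in the powerset of a three-element set, which is a Heyting algebra under inclusion with $a \Rightarrow b = (U \setminus a) \cup b$:

```python
from itertools import combinations

U = frozenset({1, 2, 3})  # the powerset of U, ordered by inclusion, is a Heyting algebra

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def implies(a, b):
    # Heyting implication in a powerset: a => b = (U \ a) ∪ b
    return (U - a) | b

H = powerset(U)
for a in H:
    # a => 1 = 1, where the top element 1 is U itself
    assert implies(a, U) == U
    for x in H:
        for y in H:
            # a => (x ∧ y) = (a => x) ∧ (a => y), with meet = intersection
            assert implies(a, x & y) == implies(a, x) & implies(a, y)
print("a => (-) preserves finite meets")
```

The exhaustive loop is only feasible because the lattice is finite, but it makes the meet-preservation statement tangible.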

In fact, everything we have done so far with posets can be translated in terms of matrix algebra, provided that our matrix entries, instead of being real or complex numbers, are truth values ($1$ for “true”, $0$ for “false”). These truth values are added and multiplied in the way familiar from truth tables, with join playing the role of addition and meet playing the role of multiplication. In fact the lattice $\mathbf{2} = \{0 \leq 1\}$ is a very simple distributive lattice, and so most of the familiar arithmetic properties of addition and multiplication (associativity, commutativity, distributivity) do carry over, which is all we need to carry out the most basic aspects of matrix algebra. However, observe that $1$ has no additive inverse (for here $1 + 1 = 1$) — the type of structure we are dealing with is often called a “rig” (like a ring, but without assuming negatives). On the other hand, this lattice is, conveniently, a sup-lattice, thinking of sups as arbitrary sums, whether finitary or infinitary.
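As a small illustration of this rig, here is a sketch (the function name `mat_mult` is my own) of matrix multiplication with entries in $\mathbf{2}$, where join plays the role of addition and meet the role of multiplication:

```python
def mat_mult(A, B):
    """Product of truth-valued matrices over the rig 2:
    (AB)[i][k] = join over j of (A[i][j] meet B[j][k])."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[int(any(A[i][j] and B[j][k] for j in range(inner)))
             for k in range(cols)] for i in range(rows)]

# Note the rig has no subtraction: "1 + 1 = 1", since the join of 1 with 1 is 1.
assert mat_mult([[1, 1]], [[1], [1]]) == [[1]]
```

Here `any` plays the (finitary) join and `and` the meet, exactly as in a truth table.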

Peirce recognized that a relation can be classified by a truth-valued matrix. Take for example a binary relation on a set $X$, i.e., a subset $R \subseteq X \times X$. We can imagine each point $(x, y) \in X \times X$ as a pixel in the plane, and highlight $R$ by lighting up just those pixels which belong to $R$. This is the same as giving an $X \times X$-matrix, with rows indexed by elements $x$ and columns by elements $y$, where the $(x, y)$-entry is $1$ (on) if $(x, y)$ is in $R$, and $0$ if not. In a similar way, any relation $R \subseteq X \times Y$ is classified by an $X \times Y$-matrix whose entries are truth values.

As an example, the identity matrix has a $1$ at the $(x, y)$-entry if and only if $x = y$. Thus the identity matrix classifies the equality relation.

A poset is a set $X$ equipped with a binary relation $\leq$ satisfying the reflexive, transitive, and antisymmetry properties. Let us translate these into matrix algebra terms. First reflexivity: it says that $x = y$ implies $x \leq y$. In matrix algebra terms, it says that each entry of the identity matrix $1$ is bounded above by the corresponding entry of the classifying matrix $R$ of $\leq$, which we abbreviate in the customary way:

(Reflexivity) $1 \leq R$.

Now let’s look at transitivity. It says

(there exists $y$ such that $x \leq y$ and $y \leq z$) implies $x \leq z$.

The “and” here refers to the meet or multiplication in the rig of truth values $\mathbf{2}$, and the existential quantifier “there exists $y$” can be thought of as a (possibly infinitary) join or sum indexed over elements $y$. Thus, for each pair $(x, z)$, the hypothesis of the implication has truth value

$\bigvee_y R(x, y) \wedge R(y, z)$

which is just the $(x, z)$-entry of the square $R^2$ of the matrix $R$. Therefore, transitivity can be very succinctly expressed in matrix algebra terms as the condition

(Transitivity) $R^2 \leq R$.

Remark: More generally, given a relation $R$ from $X$ to $Y$ (i.e., $R \subseteq X \times Y$), and another relation $S$ from $Y$ to $Z$, the relational composite $S \circ R$ is defined to be the set of pairs $(x, z)$ for which there exists $y$ with $(x, y) \in R$ and $(y, z) \in S$. But this just means that its classifying matrix is the ordinary matrix product $RS$, whose $(x, z)$-entry is $\bigvee_y R(x, y) \wedge S(y, z)$!
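To illustrate the remark, here is a sketch with two small made-up relations (the particular relations are my own choice), checking that the set-theoretic composite has the Boolean matrix product as its classifying matrix:

```python
X, Y, Z = [0, 1], [0, 1, 2], [0, 1]
R = {(0, 0), (0, 2), (1, 1)}   # R ⊆ X × Y
S = {(0, 1), (1, 1), (2, 0)}   # S ⊆ Y × Z

# Classifying matrices: rows indexed by the source, columns by the target.
MR = [[int((x, y) in R) for y in Y] for x in X]
MS = [[int((y, z) in S) for z in Z] for y in Y]

# Relational composite: pairs (x, z) with some y such that (x,y) ∈ R and (y,z) ∈ S.
comp = {(x, z) for x in X for z in Z
        if any((x, y) in R and (y, z) in S for y in Y)}

# Boolean matrix product: entrywise join (OR) of meets (AND).
prod = [[int(any(MR[i][j] and MS[j][k] for j in range(len(Y))))
         for k in range(len(Z))] for i in range(len(X))]

assert prod == [[int((x, z) in comp) for z in Z] for x in X]
```

The assertion confirms that composing relations and multiplying their truth-valued matrices give the same answer.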

Let’s now look at the antisymmetry condition: ($x \leq y$ and $y \leq x$) implies $x = y$. The clause $y \leq x$ is the flip of $x \leq y$; at the matrix level, this flip corresponds to taking the transpose. Thus antisymmetry can be expressed in matrix terms as

(Antisymmetry) $R \wedge R^\top \leq 1$

where $R^\top$ denotes the transpose of $R$, and the matrix meet $R \wedge R^\top$ means we take the meet $R(x, y) \wedge R^\top(x, y)$ at each entry.
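All three axioms in their matrix form can be verified mechanically on a concrete poset. A sketch (my own example, using divisibility on $\{1, \dots, 6\}$):

```python
# Divisibility on {1, ..., 6}: R[i][j] = 1 iff els[i] divides els[j].
els = [1, 2, 3, 4, 5, 6]
n = len(els)
R = [[int(els[j] % els[i] == 0) for j in range(n)] for i in range(n)]
I = [[int(i == j) for j in range(n)] for i in range(n)]

def entrywise_leq(A, B):
    return all(A[i][j] <= B[i][j] for i in range(n) for j in range(n))

R2 = [[int(any(R[i][k] and R[k][j] for k in range(n))) for j in range(n)]
      for i in range(n)]                                # Boolean matrix square
Rt = [[R[j][i] for j in range(n)] for i in range(n)]    # transpose
M = [[R[i][j] and Rt[i][j] for j in range(n)] for i in range(n)]  # entrywise meet

assert entrywise_leq(I, R)   # reflexivity:   1 ≤ R
assert entrywise_leq(R2, R)  # transitivity:  R² ≤ R
assert entrywise_leq(M, I)   # antisymmetry:  R ∧ R⊤ ≤ 1
```

The three one-line assertions are exactly the three matrix inequalities displayed above.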

Remark: From the matrix algebra perspective, the antisymmetry axiom is less well motivated than the reflexivity and transitivity axioms. There’s a moral hiding beneath that story: from the category-theoretic perspective, the antisymmetry axiom is relatively insignificant. That is, if we view a poset as a category, then the antisymmetry condition is tantamount to the condition that isomorphic objects are equal (in the parlance, one says the category is “skeletal”) — this extra condition makes no essential difference, because isomorphic objects are essentially the same anyway. So: if we were to simply drop the antisymmetry axiom but keep the reflexivity and transitivity axioms (leading to what are called preordered sets, as opposed to partially ordered sets), then the theory of preordered sets develops exactly as the theory of partially ordered sets does, except that in places where we conclude “$x$ is equal to $y$” in the theory of posets, we would generally conclude “$x$ is isomorphic to $y$” in the theory of preordered sets.

Preordered sets do occur in nature. For example, the set of sentences in a theory is preordered by the entailment relation ($p \leq q$ if $q$ is derivable from $p$ in the theory). (The way one gets a poset out of this is to pass to a quotient set, by identifying sentences which are logically equivalent in the theory.)

Exercises:

1. (For those who know some topology) Suppose $X$ is a topological space. Given $x, y \in X$, define $x \leq y$ if $x$ belongs to the closure of $\{y\}$; show this is a preorder. Show this preorder is a poset precisely when $X$ is a $T_0$-space.

2. If $X$ carries a group structure, define $x \leq y$ for elements $x, y \in X$ if $y = x^n$ for some integer $n$; show this is a preorder. When is it a poset?

Since posets or preorders are fundamental to everything we’re doing, I’m going to reserve a special pairing notation for their classifying matrices: define

$\langle x, y \rangle = 1$ if and only if $x \leq y$.

Many of the concepts we have developed so far for posets can be succinctly expressed in terms of the pairing.

Example: The Yoneda principle (together with its dual) is simply the statement that if $X$ is a poset, then $x = y$ if and only if $\langle -, x \rangle = \langle -, y \rangle$ (as functionals valued in $\mathbf{2}$) if and only if $\langle x, - \rangle = \langle y, - \rangle$.

Example: A mapping from a poset $X$ to a poset $Y$ is a function $f: X \to Y$ such that $\langle x, x' \rangle \leq \langle f(x), f(x') \rangle$.

Example: If $X$ is a poset, its dual or opposite $X^{op}$ has the same elements but the opposite order, i.e., $\langle x, y \rangle_{X^{op}} = \langle y, x \rangle_X$. The principle of duality says that the opposite of a poset is a poset. This can be (re)proved by invoking formal properties of matrix transpose, e.g., if $R^2 \leq R$, then $(R^\top)^2 = (R^2)^\top \leq R^\top$.

By far the most significant concept that can be expressed in terms of these pairings is that of adjoint mappings:

Definition: Let $X, Y$ be posets [or preorders], and $f: X \to Y$, $g: Y \to X$ be poset mappings. We say $(f, g)$ is an adjoint pair (with $f$ the left adjoint of $g$, and $g$ the right adjoint of $f$) if

$\langle f(x), y \rangle = \langle x, g(y) \rangle$ for all $x \in X$, $y \in Y$,

or, in other words, if $f(x) \leq y$ if and only if $x \leq g(y)$. We write $f \dashv g$. Notice that the concept of left adjoint is dual to the concept of right adjoint (N.B.: they are not the same, because clearly the pairing is not generally symmetric in $x$ and $y$).
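A concrete instance may help fix the definition. The following sketch (my own example, not from the text) checks the adjunction $n \cdot (-) \dashv \lfloor - / n \rfloor$ on the poset $(\mathbb{Z}, \leq)$ for a fixed positive $n$:

```python
# On (Z, ≤), multiplication by a positive n is left adjoint to floor division by n:
#   n*x ≤ y   if and only if   x ≤ y // n.
n = 3
for x in range(-10, 11):
    for y in range(-10, 11):
        assert (n * x <= y) == (x <= y // n)
print("n*(-) is left adjoint to (-)//n on a finite window of Z")
```

Note that Python's `//` floors toward negative infinity, which is exactly the right adjoint here; truncating division would break the equivalence for negative `y`.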

Here are some examples which illustrate the ubiquity of this concept:

1. Let $X$ be a poset. Let $X \times X$ be the poset where $(x, y) \leq (x', y')$ iff ($x \leq x'$ and $y \leq y'$). There is an obvious poset mapping $\Delta: X \to X \times X$, the diagonal mapping, which takes $x$ to $(x, x)$. Then a meet operation $\wedge: X \times X \to X$ is precisely a right adjoint to the diagonal mapping. Indeed, it says that $x \leq a \wedge b$ if and only if $(x, x) \leq (a, b)$.

2. Dually, a join operation $\vee: X \times X \to X$ is precisely a left adjoint to the diagonal mapping $\Delta: X \to X \times X$.

3. More generally, for any set $S$, there is a diagonal map $\Delta: X \to X^S$ which maps $x$ to the constant $S$-tuple $(x)_{s \in S}$. Its right adjoint $X^S \to X$, if one exists, sends an $S$-tuple $(x_s)_{s \in S}$ to the inf of the set $\{x_s : s \in S\}$. Its left adjoint would send the tuple to the sup of that set.

4. If $X$ is a Heyting algebra, then for each $a \in X$, the conjunction operator $a \wedge (-): X \to X$ is left adjoint to the implication operator $a \Rightarrow (-): X \to X$.

5. If $X$ is a sup-lattice, then the operator $\sup: PX \to X$ which sends a subset $S \subseteq X$ to $\sup S$ is left adjoint to the Dedekind embedding $X \to PX$, which sends $x$ to $\{a \in X : a \leq x\}$. Indeed, we have $\sup S \leq x$ if and only if (for all $s \in S$, $s \leq x$) if and only if $S \subseteq \{a \in X : a \leq x\}$.
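The adjunctions in items 1, 2, and 4 can all be verified by brute force in a small lattice. The following sketch (an illustrative check of mine, in the powerset of $\{1, 2, 3\}$ under inclusion) tests each condition exhaustively:

```python
from itertools import combinations

U = frozenset({1, 2, 3})
H = [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]

def implies(a, b):
    # Heyting implication in a powerset: a => b = (U \ a) ∪ b
    return (U - a) | b

for a in H:
    for b in H:
        for x in H:
            # meet (intersection) is right adjoint to the diagonal:
            #   Δ(x) = (x, x) ≤ (a, b)  iff  x ≤ a ∧ b
            assert ((x <= a) and (x <= b)) == (x <= (a & b))
            # join (union) is left adjoint to the diagonal:
            #   a ∨ b ≤ x  iff  (a, b) ≤ Δ(x)
            assert ((a | b) <= x) == ((a <= x) and (b <= x))
            # a ∧ (-) is left adjoint to a => (-):
            #   a ∧ x ≤ b  iff  x ≤ a => b
            assert ((a & x) <= b) == (x <= implies(a, b))
print("all three adjunctions verified")
```

Here `<=` on frozensets is the subset relation, so each assertion is literally the “sliding” equivalence from the definition of an adjoint pair.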

As items 1, 2, and 4 indicate, the rules for how the propositional connectives operate are governed by adjoint pairs. This gives some evidence for Lawvere’s great insight that all rules of inference in logic are expressed by interlocking pairs of adjoint mappings.

Proposition: If $f \dashv g$ and $f' \dashv g'$, where $f: X \to Y$ and $f': Y \to Z$ are composable mappings, then $f' \circ f \dashv g \circ g'$.

Proof: $\langle f'(f(x)), z \rangle = \langle f(x), g'(z) \rangle = \langle x, g(g'(z)) \rangle$. Notice that the statement is analogous to the usual rule $(AB)^* = B^* A^*$, where $(-)^*$ refers to taking an adjoint with respect to given inner product forms.

We can use this proposition to give slick proofs of some results we’ve seen. For example, to prove that Heyting algebras are distributive lattices, i.e., that $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$, just take left adjoints on both sides of the tautology $\Delta \circ (x \Rightarrow -) = ((x \Rightarrow -) \times (x \Rightarrow -)) \circ \Delta$, where $\Delta$ is right adjoint to $\vee$ and $x \Rightarrow -$ is right adjoint to $x \wedge -$. The left adjoint of the left side of the tautology is (by the proposition) $(x \wedge -) \circ \vee$, which applied to $(y, z)$ gives $x \wedge (y \vee z)$. The left adjoint of the right side is $\vee \circ ((x \wedge -) \times (x \wedge -))$, which applied to $(y, z)$ gives $(x \wedge y) \vee (x \wedge z)$. The conclusion follows.
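The distributive law itself can be confirmed by brute force in the same spirit. A small sanity check of mine, in a powerset Heyting algebra where meet and join are intersection and union:

```python
from itertools import combinations

U = frozenset({1, 2, 3})
H = [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]
for x in H:
    for y in H:
        for z in H:
            # x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
            assert x & (y | z) == (x & y) | (x & z)
print("distributivity holds")
```

Of course the adjoint argument proves this for every Heyting algebra at once; the loop merely witnesses it in one finite case.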

Much more generally, we have the

Theorem: Right adjoints $g: Y \to X$ preserve any infs which exist in $Y$. Dually, left adjoints $f: X \to Y$ preserve any sups which exist in $X$.

Proof: $\langle x, g(\bigwedge S) \rangle = \langle f(x), \bigwedge S \rangle = \bigwedge_{s \in S} \langle f(x), s \rangle$, where the last inf is interpreted in the inf-lattice $\mathbf{2}$. This equals $\bigwedge_{s \in S} \langle x, g(s) \rangle$. This completes the proof of the first statement (why?). The second follows from duality.

Exercise: If $X$ is a Heyting algebra, then there is a poset mapping $(-) \Rightarrow a: X^{op} \to X$ for any element $a$. Describe the left adjoint of this mapping. Conclude that this mapping takes infs in $X^{op}$ (i.e., sups in $X$) to the corresponding infs in $X$.

Let’s see if we can build this from the ground up. We first define a statement (or sometimes, a proposition) to be a meaningful assertion that is either true or false. Here, “meaningful” means we should be able to say for sure whether a given assertion is true or false. So, something like “Hello, there!” is not counted as a statement, but “the sun is made of butter” is. The latter is evidently false, but the former is neither true nor false. Now, it can get quite cumbersome after a while if we keep using statements such as “the sun is made of butter” every time we need to use them. Thus, it is useful to have variables, or to be precise, propositional variables, to denote statements. We usually prefer to use $p, q, r$ and so on for such variables.

Now, all of this would be rather boring if we had just symbols such as $p, q, r$ etc. to denote statements. Thus, a statement like “Archimedes was a philosopher” is not that interesting in itself. In fact, all the statements (in our formal system) would be “isolated” ones in the sense that we wouldn’t be able to logically “connect” one statement to another. We want to be able to express sentences like “$p$ and $q$”, “$p$ implies $q$” and so on. So, we add something called logical connectives (also called operator symbols) to the picture. There are four basic ones: $\wedge$ (conjunction), $\vee$ (disjunction), and $\to$ (material implication), which are all of arity 2, and $\neg$ (negation), which is of arity 1. Using these logical connectives, we can now form compound statements such as $p \wedge q$ (i.e. $p$ and $q$), $p \vee q$ (i.e. $p$ or $q$), $\neg p$ (i.e. not $p$), and $p \to q$ (i.e. $p$ implies $q$). Note that each of $\wedge$, $\vee$ and $\to$ requires two propositional variables in order for it to make any sense; this is expressed by saying their arity is 2. On the other hand, $\neg$ has arity 1 since it is applied to exactly one propositional variable.
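As a side illustration (not part of the original exposition), the four connectives can be tabulated as truth functions; the helper names below are my own:

```python
from itertools import product

# The four basic connectives as truth functions on {False, True}.
NOT = lambda p: not p
AND = lambda p, q: p and q
OR  = lambda p, q: p or q
IMP = lambda p, q: (not p) or q   # material implication p -> q

print("p      q      p∧q    p∨q    p→q    ¬p")
for p, q in product([False, True], repeat=2):
    print(p, q, AND(p, q), OR(p, q), IMP(p, q), NOT(p))
```

Running this prints the familiar truth tables row by row; note in particular that $p \to q$ is true whenever $p$ is false.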

We also introduce another logical operator called logical equivalence ($\leftrightarrow$), which has arity 2. It is really convenient to have logical equivalence on hand, as we shall see later. We say $p \leftrightarrow q$ if and only if “$(p \to q) \wedge (q \to p)$”. What this basically means is, if $p$ is true then so is $q$, and if $q$ is true then so is $p$. Another equivalent way of saying this is, if $p$ is true then so is $q$, and if $p$ is false then so is $q$.

Before we proceed further, we make a few observations. First, if $p$ and $q$ are propositional variables, then by definition each of those is either true or false. Formally speaking, the truth value of $p$ or $q$ is either true or false. This is equally true of the compound statements $p \wedge q$, $p \vee q$, $\neg p$ and $p \to q$. Of course, the truth values of these four compound statements depend on $p$ and $q$. We will delve into this in the next post.

Second, we don’t really need all four basic operators. Two of those, viz. $\neg$ and $\wedge$, suffice for all logical purposes. This means all statements involving $\vee$ and/or $\to$ can be “converted” to ones that involve only $\neg$ and $\wedge$. However, we can also choose the “minimal” set $\{\neg, \vee\}$ instead, for the same purpose for which we chose the minimal set $\{\neg, \wedge\}$. In fact, there are lots of other possible combinations of operators that can serve our purpose equally well. Which minimal set of operators we choose depends sometimes on personal taste and at other times on practical considerations. So, for example, while designing circuits in the field of computer hardware, the minimal operator set that is used is $\{\text{NAND}\}$. In fact, all that’s really needed is this particular single-operator set. Here $p \text{ NAND } q = \neg(p \wedge q)$.
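To see why the single operator suffices, here is a sketch (the helper names are mine) that rebuilds $\neg$, $\wedge$, $\vee$ and $\to$ from NAND alone and checks them exhaustively:

```python
from itertools import product

def nand(p, q):
    return not (p and q)

# The other connectives rebuilt from NAND alone.
def NOT(p):    return nand(p, p)                       # ¬p = p NAND p
def AND(p, q): return nand(nand(p, q), nand(p, q))     # double NAND undoes the negation
def OR(p, q):  return nand(nand(p, p), nand(q, q))     # De Morgan: p ∨ q = ¬(¬p ∧ ¬q)
def IMP(p, q): return nand(p, nand(q, q))              # p → q = ¬(p ∧ ¬q)

for p, q in product([False, True], repeat=2):
    assert NOT(p) == (not p)
    assert AND(p, q) == (p and q)
    assert OR(p, q) == (p or q)
    assert IMP(p, q) == ((not p) or q)
print("NAND generates all four connectives")
```

Since there are only four input combinations, the loop is a complete proof of each identity by truth table.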

So, what have we got so far? Well, we have a formal notion of a statement (or proposition.) We have access to propositional variables ($p$, $q$, $r$, etc.) that may be used to denote statements. We know how to create the negation of a given statement using the $\neg$ logical connective. We also know how to “connect” any two statements using conjunction, disjunction and material implication, which are symbolically represented by the logical connectives $\wedge$, $\vee$ and $\to$, respectively. And, lastly, given any two statements $p$ and $q$, we have defined what it means for the two to be logically equivalent (which is symbolically represented by $p \leftrightarrow q$) to each other. Indeed, $p \leftrightarrow q$ if and only if $(p \to q) \wedge (q \to p)$.

We shall see in the later posts that the above “small” formal system (for propositional calculus) we have built thus far is, in fact, quite powerful. We can, indeed, already employ quite a bit of it in “ordinary” mathematics. But, more on this, later!