# <math>\lambda_x\phi(x) a =_X \phi(a)</math> if <math>a</math> is substitutable for <math>x</math> in <math>\phi(x)</math> and <math>\phi(a)</math> is what we get by substituting each occurrence of <math>x</math> by <math>a</math> in <math>\phi(x)</math>. ''A term is '''substitutable''' for another if by performing the substitution, no occurrence of any variable in the term becomes bound.''

# <math>\lambda_{x\in A}\phi(x) =_X \lambda_{x'\in A}\phi(x')</math> if <math>x'</math> is substitutable for <math>x</math> in <math>\phi(x)</math> and neither variable occurs free in the other expression.


'''Definition''' A ''cone'' over a diagram <math>D</math> in a category <math>C</math> is some object <math>C</math> equipped with a family <math>c_i:C\to D_i</math> of arrows, one for each object in <math>J</math>, such that for each arrow <math>\alpha:i\to j</math> in <math>J</math>, the following diagram

[[Image:ConeDefDiagram.png]]

commutes, or in equations, <math>D_\alpha c_i = c_j</math>.

A ''morphism'' <math>f:(C,c_i)\to(C',c'_i)</math> of cones is an arrow <math>f:C\to C'</math> such that each triangle

[[Image:ConeMorphismDiagram.png]]

commutes, or in equations, such that <math>c_j = c'_j f</math>.


'''Definition''' A ''cocone'' over a diagram <math>D:J\to C</math> is an object <math>C</math> with arrows <math>c_j:D_j\to C</math> such that for each arrow <math>\alpha:i\to j</math> in <math>J</math>, the following diagram

[[Image:CoConeDefDiagram.png]]

commutes, or in equations, such that <math>c_jD_\alpha=c_i</math>.

A ''morphism'' <math>f:(C,c_i)\to (C',c'_i)</math> of cocones is an arrow <math>f:C\to C'</math> such that each triangle

[[Image:CoConeMorphismDiagram.png]]

commutes, or in equations, such that <math>c_j=c'_jf</math>.


For both of these, the argument is almost identical to the one in [[User:Michiexile/MATH198/Lecture 5#Limits we've already seen|the limits section]] above.

====Useful limits and colimits====

With the tools of limits and colimits at hand, we can start using these to introduce more category theoretical constructions - and some of these turn out to correspond to things we've seen in other areas.

Possibly among the most important are the equalizers and coequalizers (with kernels (nullspaces) and images as special cases), and the pullbacks and pushouts (with which we can make explicit the idea of inverse images of functions).

One useful theorem to know about is:

'''Theorem''' The following are equivalent for a category <math>C</math>:
* <math>C</math> has all finite limits.
* <math>C</math> has all finite products and all equalizers.
* <math>C</math> has all pullbacks and a terminal object.

Also, the following dual statements are equivalent:
* <math>C</math> has all finite colimits.
* <math>C</math> has all finite coproducts and all coequalizers.
* <math>C</math> has all pushouts and an initial object.

For this theorem, we can replace ''finite'' with any other cardinality in every place it occurs, and we will still get a valid theorem.

=====Equalizer, coequalizer=====

Consider the ''equalizer diagram'':

[[Image:EqualizerDiagram.png]]

A limit over this diagram is an object <math>C</math> with arrows to all diagram objects. The commutativity conditions force <math>fp_A = p_B = gp_A</math>, and thus, keeping this enforced equation in mind, we can summarize the cone diagram as:

[[Image:EqualizerCone.png]]

Now, the limit condition tells us that this is the least restrictive way we can map into <math>A</math> with some map <math>p</math> such that <math>fp = gp</math>, in that every other way we could map in that way will factor through this way.


As usual, it is helpful to consider the situation in Set to make sense of any categorical definition, and the situation there is helped by the generalized element viewpoint: the limit object <math>C</math> is one representative of a subobject of <math>A</math> that, in the case of Set, is the set <math>\{x\in A: f(x) = g(x)\}</math>.
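This Set description can be sketched directly in Haskell, modeling finite sets as lists (the name <code>equalize</code> is our own, chosen for illustration; this is a sketch, not a standard library function):

```haskell
-- A sketch of the equalizer in finite Set: the subset of the domain A
-- on which the two parallel functions f and g agree.
equalize :: Eq b => (a -> b) -> (a -> b) -> [a] -> [a]
equalize f g = filter (\x -> f x == g x)

-- The inclusion of the equalizer into A is the obvious injection of the
-- filtered list; any p with f . p == g . p lands inside it, and so
-- factors through the inclusion.
example :: [Int]
example = equalize (\x -> x + 1) (\x -> 2 * x) [0 .. 5]
```

Here <code>example</code> picks out exactly the elements with <math>x+1 = 2x</math>.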


Hence the word we use for this construction: the limit of the diagram above is the ''equalizer of <math>f, g</math>''. It captures the idea of the largest subobject on which two given functions agree, and it introduces a categorical way to define things by equations we require them to respect.


One important special case of the equalizer is the ''kernel'': in a category with a null object, we have a distinguished member <math>0</math> of every homset, given by the composition of the unique arrows to and from the null object. We define ''the kernel'' <math>Ker(f)</math> of an arrow <math>f</math> to be the equalizer of <math>f, 0</math>. Keeping in mind the arrow-centric view on categories, we tend to denote the arrow from <math>Ker(f)</math> to the source of <math>f</math> by <math>ker(f)</math>.


In the category of vector spaces and linear maps, the map <math>0</math> really is the constant map taking the value <math>0</math> everywhere. The kernel of a linear map <math>f:U\to V</math> is the equalizer of <math>f,0</math>: some vector space <math>W</math> with a map <math>i:W\to U</math> such that <math>fi = 0i = 0</math>, through which every other map fulfilling this condition factors. Certainly the vector space <math>\{u\in U: f(u)=0\}</math> fulfills the requisite condition; nothing larger will do, since then the composition would not be <math>0</math>, and nothing smaller will do, since then the maps that ought to factor through the candidate would fail to do so.

Dually, we get the ''coequalizer'' as the colimit of the equalizer diagram.


A coequalizer


[[Image:CoequalizerCone.png]]


has to fulfill <math>i_Bf = i_A = i_Bg</math>. Thus, writing <math>q=i_B</math>, we get an object (actually, an epimorphism out of <math>B</math>) equipped with a map <math>q</math> such that <math>q(f(x)) = q(g(x))</math> for every <math>x</math>: whatever <math>f</math> and <math>g</math> might tell apart, <math>q</math> identifies.


So, assuming we work with sets, a coequalizer of two functions is a projection onto a quotient set: <math>q(b) = q(b')</math> exactly when <math>b</math> and <math>b'</math> are identified by the smallest equivalence relation containing all pairs <math>(f(x), g(x))</math> for <math>x\in A</math>.
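Continuing the finite-Set sketch (the name <code>coequalize</code> is ours, and the representation of the quotient as a list of classes is one choice among many), the coequalizer can be computed by merging equivalence classes of the codomain:

```haskell
import Data.List (nub)

-- A sketch of the coequalizer in finite Set: quotient the codomain bs
-- by the smallest equivalence relation identifying f x with g x for
-- every x in the domain as. Each class is one element of the quotient.
coequalize :: Eq b => (a -> b) -> (a -> b) -> [a] -> [b] -> [[b]]
coequalize f g as bs = foldl merge [[b] | b <- nub bs] pairs
  where
    pairs = [(f a, g a) | a <- as]
    merge classes (x, y)
      | cx == cy  = classes
      | otherwise = (cx ++ cy) : filter (\c -> c /= cx && c /= cy) classes
      where
        cx = classOf x
        cy = classOf y
        classOf z = head [c | c <- classes, z `elem` c]

-- The projection q sends each b to its class; q (f x) == q (g x) holds
-- by construction, for every x in the domain.
```

For example, coequalizing the identity with reduction mod 2 on <code>[0..3]</code> collapses the four elements into two classes.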


=====Pushout and pullback squares=====


* Computer science applications

===Homework===

Credit will be given for up to 4 of the 6 exercises.

# Prove that currying/uncurrying are isomorphisms in a CCC. Hint: the map <math>f\mapsto\lambda f</math> is a map <math>Hom(C\times A, B)\to Hom(C,[A\to B])</math>.

A category is said to have ''pairwise products'' if for any objects <math>A,B</math>, there is a product object <math>A\times B</math>.

A category is said to have ''pairwise coproducts'' if for any objects <math>A,B</math>, there is a coproduct object <math>A+B</math>.

Recall when we talked about internal homs in Lecture 2. We can now define what we mean, formally, by the concept:

'''Definition''' An object <math>C</math> in a category <math>D</math> is an ''internal hom object'' or an ''exponential object'' <math>[A\to B]</math> or <math>B^A</math> if it comes equipped with an arrow <math>ev:[A\to B]\times A\to B</math>, called the ''evaluation arrow'', such that for any other arrow <math>f:X\times A\to B</math>, there is a unique arrow <math>\lambda f:X\to[A\to B]</math> such that the composite <math>ev\circ(\lambda f\times 1_A)</math> is <math>f</math>.

The idea here is that with something in an exponential object, and something in the source of the arrows we imagine live inside the exponential, we can evaluate to produce something in the target. Using global elements, this reasoning comes through in a more natural manner: given <math>g:1\to[A\to B]</math> and <math>a:1\to A</math>, we can produce the global element <math>ev\circ(g,a):1\to B</math>. Furthermore, we can always produce something in the exponential whenever we have an arrow that looks as if it should be there.
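In Haskell, where the exponential object <math>[A\to B]</math> is the function type <code>a -> b</code>, the evaluation arrow and the mediating arrow can be sketched as follows (the names <code>eval</code> and <code>lambda</code> are ours; the Prelude calls the latter <code>curry</code>):

```haskell
-- The evaluation arrow ev : [A -> B] × A -> B is uncurried application.
eval :: (a -> b, a) -> b
eval (f, x) = f x

-- For any f : (c, a) -> b, the unique arrow λf : c -> [a -> b] with
-- eval . (λf × id) == f is given by currying.
lambda :: ((c, a) -> b) -> c -> (a -> b)
lambda f c a = f (c, a)

-- The defining equation, pointwise: eval (lambda f c, a) == f (c, a)
```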

And with this we can define

'''Definition''' A category <math>C</math> is a ''Cartesian Closed Category'' or a ''CCC'' if:
* <math>C</math> has a terminal object <math>1</math>.
* Each pair of objects <math>A,B</math> has a product <math>A\times B</math> and projections <math>p_1:A\times B\to A</math>, <math>p_2:A\times B\to B</math>.
* For every pair of objects <math>A,B</math>, there is an exponential object <math>[A\to B]</math> with an evaluation map <math>ev:[A\to B]\times A\to B</math>.

Note that the exponential as described here is exactly what we need in order to discuss the Haskell concept of multi-parameter functions. If we consider the type of a binary function in Haskell:

binFunction :: a -> a -> a

This function really lives in the Haskell type

a -> (a -> a)

and thus is an element in the repeated exponential object <math>[a\to[a\to a]]</math>. Evaluating once gives us a single-parameter function, the first parameter consumed by the first evaluation, and we can evaluate a second time, feeding in the second parameter to get an end result from the function.

On the other hand, we can feed in both values at once, and get

binFunction' :: (a,a) -> a

which lives in the exponential object <math>[a\times a\to a]</math>.

These are genuinely different objects, but they seem to do the same thing: consume two distinct values to produce a third value. The resolution of the difference lies, again, in a recognition from set theory: there is an isomorphism <math>Hom(A\times B,C)\cong Hom(A,[B\to C])</math> - in exponential notation, <math>C^{A\times B}\cong (C^B)^A</math> - which we can use as inspiration for an isomorphism valid in Cartesian Closed Categories.

The lambda-calculus, and later the typed lambda-calculus both act as foundational bases for computer science, and computer programming in particular. The idea in both is that everything is a function, and we can reduce the act of programming to function application; which in turn can be analyzed using expression rewriting rules that encapsulate the act of computation in a sequence of formal rewrites.

'''Definition''' A ''typed lambda-calculus'' is a formal theory with types, terms, variables and equations. Each term <math>a</math> has a type <math>A</math> associated to it, and we write <math>a:A</math>. The system is subject to a sequence of rules:

# There is a type <math>1</math>. Hence, the empty lambda calculus is excluded.
# If <math>A,B</math> are types, then so are <math>A\times B</math> and <math>[A\to B]</math>. These are, initially, just additional symbols, not imbued with the associations we usually give the symbols used.
# There is a term <math>*:1</math>. Hence, the lambda calculus without any terms is excluded.
# For each type <math>A</math>, there is an infinite (countable) supply of variables <math>x_1, x_2, x_3, \dots:A</math>.
# If <math>a:A, b:B</math> are terms, then there is a term <math>(a,b):A\times B</math>.
# If <math>c:A\times B</math>, then there are terms <math>proj_1(c):A, proj_2(c):B</math>.
# If <math>a:A</math> and <math>f:[A\to B]</math>, then there is a term <math>fa:B</math>.
# If <math>x:A</math> is a variable and <math>\phi(x):B</math> is a term, then there is a term <math>\lambda_{x\in A}\phi(x):[A\to B]</math>. Note that here, <math>\phi(x)</math> is a meta-expression, meaning we have SOME lambda-calculus expression that may include the variable <math>x</math>.
# There is a relation <math>a =_X a'</math> for each set <math>X</math> of variables containing the variables that occur freely in either <math>a</math> or <math>a'</math>. This relation is reflexive, symmetric and transitive. Recall that a variable is free in a term if it is not in the scope of a <math>\lambda</math>-expression naming that variable.
# If <math>a:1</math> then <math>a =_{\{\}} *</math>. In other words, up to lambda-calculus equality, there is only one value of type <math>1</math>.
# If <math>X\subseteq Y</math>, then <math>a =_X a'</math> implies <math>a =_Y a'</math>. Binding more variables gives less freedom, not more, and thus cannot suddenly make equal expressions differ.
# <math>\lambda_x\phi(x) a =_X \phi(a)</math> if <math>a</math> is substitutable for <math>x</math> in <math>\phi(x)</math> and <math>\phi(a)</math> is what we get by substituting each occurrence of <math>x</math> by <math>a</math> in <math>\phi(x)</math>. ''A term is '''substitutable''' for another if by performing the substitution, no occurrence of any variable in the term becomes bound.''
# <math>\lambda_{x\in A}fx =_X f</math>, provided <math>x</math> is not free in <math>f</math>.
# <math>\lambda_{x\in A}\phi(x) =_X \lambda_{x'\in A}\phi(x')</math> if <math>x'</math> is substitutable for <math>x</math> in <math>\phi(x)</math> and neither variable occurs free in the other expression.

Note that <math>=_X</math> is just a symbol. The axioms above give it properties that work a lot like equality, but two lambda-calculus-equal terms are not equal unless they are identical. However, <math>a =_X b</math> tells us that in any model of this lambda calculus - where terms, types, etc. are replaced with actual things (mathematical objects, say, or a programming language semantics embedding typed lambda calculus) - the things given by translating <math>a</math> and <math>b</math> into the model should end up being equal.

Any actual realization of typed lambda calculus is bound to have more rules and equalities than the ones listed here.
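As an illustration (our own sketch, not part of the formal theory above), the types and term formers of the rules transcribe directly into a Haskell datatype:

```haskell
-- Types of the lambda calculus: 1, products, and internal homs.
data Ty = Unit | Ty :*: Ty | Ty :=>: Ty
  deriving (Eq, Show)

-- Terms, mirroring the formation rules: the element * : 1, typed
-- variables, pairs, projections, application, and lambda abstraction.
data Tm
  = Star                -- * : 1
  | Var String Ty       -- one of the countable supply of variables
  | Pair Tm Tm          -- (a, b) : A × B
  | Proj1 Tm | Proj2 Tm -- projections from a product
  | App Tm Tm           -- f a : B, for f : [A -> B]
  | Lam String Ty Tm    -- λ(x ∈ A). φ(x) : [A -> B]
  deriving (Eq, Show)

-- Example: the identity λ(x ∈ 1). x, of type [1 -> 1].
identityTm :: Tm
identityTm = Lam "x" Unit (Var "x" Unit)
```

A realization would add a typechecker and the equational rules on top of this syntax.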

With these axioms in front of us, however, we can see how lambda calculus and Cartesian Closed Categories fit together: we can go back and forth between the two concepts in a natural manner.

To go in the other direction, starting out with a Cartesian Closed Category and finding a typed lambda calculus corresponding to it, we construct its internal language.

Given a CCC C, we can assume that we have chosen, somehow, one actual product for each finite set of factors. Thus, both all products and all projections are well defined entities, with no remaining choice to determine them.

The types of the internal language L(C) are just the objects of C. The existence of products, exponentials and terminal object covers axioms 1-2. We can assume the existence of variables for each type, and the remaining axioms correspond to definition and behaviour of the terms available.

Using the properties of a CCC, it is at this point possible to prove a resulting equivalence of categories <math>C(L(C))\simeq C</math>, and similarly, with suitable definitions for what it means for formal languages to be equivalent, one can also prove for a typed lambda-calculus <math>L</math> that <math>L(C(L))\simeq L</math>.

More on this subject can be found in:

Lambek & Scott: Aspects of higher order categorical logic and Introduction to higher order categorical logic

More importantly, the advantage of stating λ-calculus in terms of a CCC instead of in terms of terms and rewriting rules is that you can escape worrying about variable clashes, alpha reductions and composability - the categorical translation ignores, at least superficially, the variables, reduces terms with morphisms that have equality built in, and provides associative composition for free.

At this point, I'd recommend reading more on Wikipedia [1] and [2], as well as in Lambek & Scott: Introduction to Higher Order Categorical Logic. The book by Lambek & Scott goes into great depth on these issues, but may be less than friendly to a novice.

One design pattern, as it were, that we have seen occur over and over in the definitions we've seen so far is for there to be some object, such that for every other object around, certain morphisms have unique existence.

We saw it in terminal and initial objects, where there's a unique map from or to every other object. We saw it in products and coproducts, where a well-behaved map capturing any pair of maps exists uniquely. And finally, above, in the CCC characterization of the internal hom, we had a similar uniqueness requirement for the lambda map.

One thing we can notice is that the isomorphisms theorems for all these cases look very similar to each other: in each isomorphism proof, we produce the uniquely existing morphisms, and prove that their uniqueness and their other properties force the maps to really be isomorphisms.

Now, category theory has a philosophy slightly similar to design patterns - if we see something happening over and over, we'll want to generalize it. And there are generalizations available for these!

'''Definition''' A ''diagram'' <math>D</math> of the shape of an index category <math>J</math> (often finite or countable) in a category <math>C</math> is just a functor <math>D:J\to C</math>. Objects in <math>J</math> will be denoted by <math>i,j,k,\dots</math> and their images in <math>C</math> by <math>D_i,D_j,D_k,\dots</math>.

This underlines that when we talk about diagrams, we tend to think of them less as just functors and more as their images - the important part of a diagram <math>D</math> is the objects and their layout in <math>C</math>, and not the process of going from <math>J</math> to <math>C</math>.

'''Definition''' A ''cone'' over a diagram <math>D</math> in a category <math>C</math> is some object <math>C</math> equipped with a family <math>c_i:C\to D_i</math> of arrows, one for each object in <math>J</math>, such that for each arrow <math>\alpha:i\to j</math> in <math>J</math>, the following diagram

[[Image:ConeDefDiagram.png]]

commutes, or in equations, <math>D_\alpha c_i = c_j</math>.

A ''morphism'' <math>f:(C,c_i)\to(C',c'_i)</math> of cones is an arrow <math>f:C\to C'</math> such that each triangle

[[Image:ConeMorphismDiagram.png]]

commutes, or in equations, such that <math>c_j = c'_j f</math>.

This defines a category of cones, which we shall denote by <math>Cone(D)</math>. And we define, hereby:

'''Definition''' The ''limit'' of a diagram <math>D</math> in a category <math>C</math> is a terminal object in <math>Cone(D)</math>. We often denote a limit by <math>\lim_J D</math>, so that the map from the limit object to one of the diagram objects <math>D_i</math> is denoted by <math>p_i</math>.

The limit being terminal in the category of cones nails down once and for all the uniqueness of any map into it, and the isomorphism of any two terminal objects carries over to a proof once and for all for the limit case.

Specifically, since the morphisms of cones are morphisms in <math>C</math>, and composition is carried straight over, proving that a map is an isomorphism in the cone category implies it is one in the target category as well.

'''Definition''' A category <math>C</math> ''has all (finite) limits'' if all diagrams (of finite shape) have limit objects defined for them.

The terminal object of a category is the limit object of an empty diagram. Indeed, it is an object, with no specified maps to other objects, such that every other object that also maps to the same empty set of objects - which is to say every object - has a uniquely determined map to the limit object.

The product of some set of objects is the limit object of the diagram containing all these objects and no arrows: a diagram of the shape of a discrete category. The condition here becomes the requirement of maps to all factors, such that any other cone factors through these maps.
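In Haskell, the product of two types is the pair type, and this limit property is witnessed by the pairing of two maps (written out below; Control.Arrow provides the same combinator as <code>&&&</code>):

```haskell
-- The product A × B is the pair type (a, b), with projections fst, snd.
-- Any cone (f : c -> a, g : c -> b) factors through the product via the
-- mediating map below, and the projections recover f and g.
mediate :: (c -> a) -> (c -> b) -> c -> (a, b)
mediate f g x = (f x, g x)

-- Cone conditions: fst . mediate f g == f and snd . mediate f g == g,
-- both pointwise.
```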

To express the exponential as a limit, we need to go to a different category than the one we started in. Take the category with objects given by morphisms <math>X\times Y\to Z</math> for fixed objects <math>Y,Z</math>, and morphisms given by morphisms commuting with the 'objects' they run between and fixing <math>Y</math>. The exponential <math>[Y\to Z]</math>, with its evaluation map, is a terminal object in this category.

Adding further arrows to diagrams amounts to adding further conditions on the products, as the maps from the product to the diagram objects need to factor through any arrows present in the diagram.

These added relations, however, are exactly what trips things up in Haskell. The idealized Haskell category does not have even all finite limits. At the core of the issue is the lack of dependent types: there is no way for the type system to guarantee equations, and hence only the trivial limits - the products - can be guaranteed by the Haskell type checker.

In order to get that kind of guarantee, the type checker would need an implementation of dependent types, something that can be simulated in several ways but is not (yet) an actual part of Haskell. Other languages, however, cover this - most notably Epigram, Agda and Cayenne - of which the last is even more strongly influenced by constructive type theory and category theory than Haskell is.

The kind of equations that show up in a limit, however, could be thought of as invariants for the type - and thus something that can be tested for. The resulting equations can be plugged into a testing framework - such as QuickCheck - to verify that the invariants hold under the functions applied.
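For instance, an equalizer-style invariant becomes a Boolean property over inputs (a sketch with our own names; importing Test.QuickCheck and running <code>quickCheck prop_agree</code> would test it on random values):

```haskell
-- Two implementations that are claimed to agree: their equalizer should
-- be the whole domain, and that claim is a testable invariant.
f, g :: Int -> Int
f x = abs x
g x = if x >= 0 then x else negate x

-- The invariant f x == g x, phrased as a property suitable for a
-- testing framework such as QuickCheck.
prop_agree :: Int -> Bool
prop_agree x = f x == g x
```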