which is natural in the variables X and Y. The functor F is called a left adjoint functor, while G is called a right adjoint functor. The relationship “F is left adjoint to G” (or equivalently, “G is right adjoint to F”) is sometimes written F ⊣ G.

The long list of examples in this article is only a partial indication of how often an interesting mathematical construction is an adjoint functor. As a result, general theorems about left/right adjoint functors, such as the equivalence of their various definitions or the fact that they respectively preserve colimits/limits (which are also found in every area of mathematics), can encode the details of many useful and otherwise non-trivial results.

It can be said that an adjoint functor is a way of giving the most efficient solution to some problem via a method which is formulaic. For example, an elementary problem in ring theory is how to turn a rng (which is like a ring that might not have a multiplicative identity) into a ring. The most efficient way is to adjoin an element '1' to the rng, adjoin all (and only) the elements which are necessary for satisfying the ring axioms (e.g. r+1 for each r in the ring), and impose no relations in the newly formed ring that are not forced by axioms. Moreover, this construction is formulaic in the sense that it works in essentially the same way for any rng.

This is rather vague, though suggestive, and can be made precise in the language of category theory: a construction is most efficient if it satisfies a universal property, and is formulaic if it defines a functor. Universal properties come in two types: initial properties and terminal properties. Since these are dual (opposite) notions, it is only necessary to discuss one of them.

The idea of using an initial property is to set up the problem in terms of some auxiliary category E, and then identify that what we want is to find an initial object of E. This has the advantage that the optimization — the sense in which we are finding the most efficient solution — means something rigorous and recognisable, rather like the attainment of a supremum. Picking the right category E is something of a knack: for example, take the given rng R, and make a category E whose objects are rng homomorphisms R → S, with S a ring having a multiplicative identity. The morphisms in E between R → S1 and R → S2 are commutative triangles of the form (R → S1, R → S2, S1 → S2) where S1 → S2 is a ring map (which preserves the identity). The existence of a morphism between R → S1 and R → S2 implies that S1 is at least as efficient a solution as S2 to our problem: S2 can have more adjoined elements and/or more relations not imposed by axioms than S1. Therefore, the assertion that an object R → R* is initial in E, that is, that there is a morphism from it to any other element of E, means that the ring R* is a most efficient solution to our problem.

The two facts that this method of turning rngs into rings is most efficient and formulaic can be expressed simultaneously by saying that it defines an adjoint functor.

Continuing this discussion, suppose we started with the functor F, and posed the following (vague) question: is there a problem to which F is the most efficient solution?

The notion that F is the most efficient solution to the problem posed by G is, in a certain rigorous sense, equivalent to the notion that G poses the most difficult problem that F solves.

This has the intuitive meaning that adjoint functors should occur in pairs, and in fact they do, but this is not trivial from the universal morphism definitions. The equivalent symmetric definitions involving adjunctions and the symmetric language of adjoint functors (we can say either F is left adjoint to G or G is right adjoint to F) have the advantage of making this fact explicit.

There are various definitions for adjoint functors. Their equivalence is elementary but not trivial, and is in fact highly useful. This article provides several such definitions:

The definitions via universal morphisms are easy to state, and require minimal verifications when constructing an adjoint functor or proving two functors are adjoint. They are also the most analogous to our intuition involving optimizations.

The definition via counit-unit adjunction is convenient for proofs about functors which are known to be adjoint, because they provide formulas that can be directly manipulated.

The definition via hom-sets makes symmetry the most apparent, and is the reason for using the word adjoint.

Adjoint functors arise everywhere, in all areas of mathematics. Their full usefulness lies in that the structure in any of these definitions gives rise to the structures in the others via a long but trivial series of deductions. Thus, switching between them makes implicit use of a great deal of tedious details that would otherwise have to be repeated separately in every subject area. For example, naturality and terminality of the counit can be used to prove that any right adjoint functor preserves limits.

The theory of adjoints has the terms left and right at its foundation, and there are many components which live in one of two categories C and D which are under consideration. It can therefore be extremely helpful to choose letters in alphabetical order according to whether they live in the "lefthand" category C or the "righthand" category D, and also to write them down in this order whenever possible.

In this article for example, the letters X, F, f, ε will consistently denote things which live in the category C, the letters Y, G, g, η will consistently denote things which live in the category D, and whenever possible such things will be referred to in order from left to right (a functor F:C←D can be thought of as "living" where its outputs are, in C).

A functor F : C ← D is a left adjoint functor if for each object X in C, there exists a terminal morphism from F to X. If, for each object X in C, we choose an object G0X of D for which there is a terminal morphism εX : F(G0X) → X from F to X, then there is a unique functor G : C → D such that GX = G0X and εX′ ∘ FG(f) = f ∘ εX for f : X → X′ a morphism in C; F is then called a left adjoint to G.

A functor G : C → D is a right adjoint functor if for each object Y in D, there exists an initial morphism from Y to G. If, for each object Y in D, we choose an object F0Y of C and an initial morphism ηY : Y → G(F0Y) from Y to G, then there is a unique functor F : C ← D such that FY = F0Y and GF(g) ∘ ηY = ηY′ ∘ g for g : Y → Y′ a morphism in D; G is then called a right adjoint to F.

Remarks:

It is true, as the terminology implies, that F is left adjoint to G if and only if G is right adjoint to F. This is apparent from the symmetric definitions given below. The definitions via universal morphisms are often useful for establishing that a given functor is left or right adjoint, because they are minimalistic in their requirements. They are also intuitively meaningful in that finding a universal morphism is like solving an optimization problem.

respectively called the counit and the unit of the adjunction (terminology from universal algebra), such that the compositions

F → FGF → F (namely εF ∘ Fη) and G → GFG → G (namely Gε ∘ ηG)

are the identity transformations 1F and 1G on F and G respectively.

In this situation we say that F is left adjoint to G and G is right adjoint to F, and may indicate this relationship by writing (ε, η) : F ⊣ G, or simply F ⊣ G.

In equation form, the above conditions on (ε, η) are the counit-unit equations

1F = εF ∘ Fη and 1G = Gε ∘ ηG,

which mean that for each X in C and each Y in D,

1FY = εFY ∘ F(ηY) and 1GX = G(εX) ∘ ηGX.

Note that here 1 denotes identity morphisms, while above the same symbol was used for identity functors.

These equations are useful in reducing proofs about adjoint functors to algebraic manipulations. They are sometimes called the zig-zag equations because of the appearance of the corresponding string diagrams. A way to remember them is to first write down the nonsensical equation 1 = ε ∘ η and then fill in either F or G in one of the two simple ways which make the compositions defined.
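The zig-zag equations can be checked concretely in a toy model. The sketch below is a minimal Python illustration (all names are illustrative, not from any library): it models the free-monoid/forgetful adjunction between sets and monoids, representing free monoids as Python lists, and verifies both triangle identities on sample data.

```python
# Sketch: the zig-zag (counit-unit) equations for the free-monoid/forgetful
# adjunction, modelled concretely. F sends a set Y to the free monoid of
# lists over Y; G takes a monoid to its underlying set.

def eta(y):
    """Unit at Y: include a generator y as the one-letter word [y]."""
    return [y]

def F(h):
    """F on morphisms: apply a function letter by letter to a word."""
    return lambda word: [h(y) for y in word]

def epsilon_free(word_of_words):
    """Counit at a free monoid FY: multiply out, i.e. concatenate a word
    whose letters are themselves words ("dropping parentheses")."""
    return [y for w in word_of_words for y in w]

# First equation: epsilon_{FY} composed with F(eta_Y) is the identity on FY.
word = ['a', 'b', 'b', 'c']
assert epsilon_free(F(eta)(word)) == word

# Second equation: G(epsilon_X) composed with eta_{GX} is the identity on GX;
# at a free monoid X it reads epsilon([w]) == w.
w = ['a', 'b', 'a']
assert epsilon_free(eta(w)) == w
```

The asymmetry of the two checks mirrors the text above: the first equation lives over an arbitrary set Y, the second over an arbitrary monoid X.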

Note: The use of the prefix "co" in counit here is not consistent with the terminology of limits and colimits, because a colimit satisfies an initial property whereas the counit morphisms will satisfy terminal properties, and dually. The term unit here is borrowed from the theory of monads where it looks like the insertion of the identity 1 into a monoid.

In this situation we say that F is left adjoint to G and G is right adjoint to F, and may indicate this relationship by writing Φ : F ⊣ G, or simply F ⊣ G.

This definition is a logical compromise in that it is somewhat more difficult to satisfy than the universal morphism definitions, and has fewer immediate implications than the counit-unit definition. It is useful because of its obvious symmetry, and as a stepping-stone between the other definitions.

In order to interpret Φ as a natural isomorphism, one must recognize homC(F–, –) and homD(–, G–) as functors. In fact, they are both bifunctors from Dop × C to Set (the category of sets). For details, see the article on hom functors. Explicitly, the naturality of Φ means that for all morphisms f : X → X′ in C and all morphisms g : Y′ → Y in D the following diagram commutes:

homC(FY, X)   —Φ(Y, X)→    homD(Y, GX)
     |                          |
 Hom(Fg, f)                 Hom(g, Gf)
     ↓                          ↓
homC(FY′, X′) —Φ(Y′, X′)→  homD(Y′, GX′)

The vertical arrows in this diagram are those induced by composition with f and g. Formally, Hom(Fg, f) : HomC(FY, X) → HomC(FY′, X′) is given by h ↦ f ∘ h ∘ Fg for each h in HomC(FY, X). Hom(g, Gf) is similar.

In particular, the equations above allow one to define Φ, ε, and η in terms of any one of the three. However, the adjoint functors F and G alone are in general not sufficient to determine the adjunction. We will demonstrate the equivalence of these situations below.

A similar argument allows one to construct a hom-set adjunction from the terminal morphisms to a left adjoint functor. (The construction that starts with a right adjoint is slightly more common, since the right adjoint in many adjoint pairs is a trivially defined inclusion or forgetful functor.)

The idea of an adjoint functor was formulated by Daniel Kan in 1958. Like many of the concepts in category theory, it was suggested by the needs of homological algebra, which was at the time devoted to computations. Those faced with giving tidy, systematic presentations of the subject would have noticed relations such as

hom(F(X), Y) = hom(X, G(Y))

in the category of abelian groups, where F was the functor – ⊗ A (i.e. take the tensor product with A), and G was the functor hom(A, –). The use of the equals sign is an abuse of notation; those two groups are not really identical but there is a way of identifying them that is natural. It can be seen to be natural on the basis, firstly, that these are two alternative descriptions of the bilinear mappings from X × A to Y. That is, however, something particular to the case of tensor product. In category theory the 'naturality' of the bijection is subsumed in the concept of a natural isomorphism.
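The same pattern in the category of sets is the familiar currying bijection hom(X × A, Y) ≅ hom(X, Y^A). A minimal Python sketch (the names Phi and Phi_inv are illustrative, not standard API) exhibits the two directions of this bijection and checks on samples that they are mutually inverse:

```python
# Sketch: the Set-level analogue of hom(X x A, Y) ~ hom(X, Y^A),
# i.e. the (- x A) -| (A -> -) adjunction, given by currying.

def Phi(h):
    """Turn h : X x A -> Y into its transpose X -> (A -> Y)."""
    return lambda x: lambda a: h(x, a)

def Phi_inv(k):
    """Turn k : X -> (A -> Y) back into a map X x A -> Y."""
    return lambda x, a: k(x)(a)

h = lambda x, a: x + 2 * a          # a sample map X x A -> Y on integers
assert Phi_inv(Phi(h))(3, 5) == h(3, 5)

k = lambda x: lambda a: x * a       # a sample transpose
assert Phi(Phi_inv(k))(3)(5) == k(3)(5)
```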

The terminology comes from the Hilbert space idea of adjoint operators T, U with ⟨Ty, x⟩ = ⟨y, Ux⟩, which is formally similar to the above relation between hom-sets. We say that F is left adjoint to G, and G is right adjoint to F. Note that G may itself have a right adjoint that is quite different from F (see below for an example). The analogy to adjoint maps of Hilbert spaces can be made precise in certain contexts.[1]

If one starts looking for these adjoint pairs of functors, they turn out to be very common in abstract algebra, and elsewhere as well. The example section below provides evidence of this; furthermore, universal constructions, which may be more familiar to some, give rise to numerous adjoint pairs of functors.

In accordance with the thinking of Saunders Mac Lane, any idea such as adjoint functors that occurs widely enough in mathematics should be studied for its own sake.

Mathematicians do not generally need the full adjoint functor concept. Concepts can be judged according to their use in solving problems, as well as for their use in building theories. The tension between these two motivations was especially great during the 1950s when category theory was initially developed. Enter Alexander Grothendieck, who used category theory to take compass bearings in other work — in functional analysis, homological algebra and finally algebraic geometry.

It is probably wrong to say that he promoted the adjoint functor concept in isolation: but recognition of the role of adjunction was inherent in Grothendieck's approach. For example, one of his major achievements was the formulation of Serre duality in relative form — loosely, in a continuous family of algebraic varieties. The entire proof turned on the existence of a right adjoint to a certain functor. This is something undeniably abstract, and non-constructive, but also powerful in its own way.

Every partially ordered set can be viewed as a category (with a single morphism between x and y if and only if x ≤ y). A pair of adjoint functors between two partially ordered sets is called a Galois connection (or, if it is contravariant, an antitone Galois connection). See that article for a number of examples: the case of Galois theory of course is a leading one. Any Galois connection gives rise to closure operators and to inverse order-preserving bijections between the corresponding closed elements.
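A small, fully concrete Galois connection: the inclusion of the integers into the rationals has the ceiling function as a left adjoint and the floor function as a right adjoint, in the sense that ⌈x⌉ ≤ n iff x ≤ n, and n ≤ ⌊x⌋ iff n ≤ x. The Python sketch below (using exact Fraction arithmetic to avoid floating-point noise) verifies both equivalences over a sample grid:

```python
# Sketch: the ceiling -| inclusion -| floor adjunctions between the poset
# of integers and the poset of rationals, checked pointwise on samples.
import math
from fractions import Fraction

xs = [Fraction(7, 3), Fraction(-5, 2), Fraction(4, 1)]
ns = range(-4, 5)

for x in xs:
    for n in ns:
        # ceiling is left adjoint to the inclusion of Z into Q:
        assert (math.ceil(x) <= n) == (x <= n)
        # floor is right adjoint to the inclusion:
        assert (n <= math.floor(x)) == (n <= x)
```

Note how the adjunction condition here is exactly the hom-set bijection collapsed to a biconditional between order relations, as the partial-order discussion below describes.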

As is the case for Galois groups, the real interest lies often in refining a correspondence to a duality (i.e. antitone order isomorphism). A treatment of Galois theory along these lines by Kaplansky was influential in the recognition of the general structure here.

The partial order case collapses the adjunction definitions quite noticeably, but can provide several themes:

adjunctions may not be dualities or isomorphisms, but are candidates for upgrading to that status

a very general comment of William Lawvere[2] is that syntax and semantics are adjoint: take C to be the set of all logical theories (axiomatizations), and D the power set of the set of all mathematical structures. For a theory T in C, let F(T) be the set of all structures that satisfy the axioms T; for a set of mathematical structures S, let G(S) be the minimal axiomatization of S. We can then say that F(T) is a subset of S if and only if T logically implies G(S): the "semantics functor" F is left adjoint to the "syntax functor" G.

Suppose that F : Grp ← Set is the functor assigning to each set Y the free group generated by the elements of Y, and that G : Grp → Set is the forgetful functor, which assigns to each group X its underlying set. Then F is left adjoint to G:

Terminal morphisms. For each group X, the group FGX is the free group generated freely by GX, the elements of X. Let εX : FGX → X be the group homomorphism which sends the generators of FGX to the elements of X they correspond to, which exists by the universal property of free groups. Then each εX is a terminal morphism from F to X, because any group homomorphism from a free group FZ to X will factor through εX : FGX → X via a unique set map from Z to GX. This means that (F, G) is an adjoint pair.

Initial morphisms. For each set Y, the set GFY is just the underlying set of the free group FY generated by Y. Let ηY : Y → GFY be the set map given by "inclusion of generators". Then each ηY is an initial morphism from Y to G, because any set map from Y to the underlying set GW of a group W will factor through ηY via a unique group homomorphism from FY to W. This also means that (F, G) is an adjoint pair.

Hom-set adjunction. Maps from the free group FY to a group X correspond precisely to maps from the set Y to the set GX: each homomorphism from FY to X is fully determined by its action on generators. One can verify directly that this correspondence is a natural transformation, which means it is a hom-set adjunction for the pair (F,G).
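As a runnable stand-in for the free group, the analogous bijection for the free monoid (lists in Python) can be written out directly; extend and restrict below are illustrative names, not a standard API:

```python
# Sketch: homomorphisms FY -> X out of a free monoid correspond to plain
# functions Y -> GX, by restriction to one-letter words and, inversely,
# by extension generator by generator.

def extend(f, product, unit):
    """Extend f : Y -> X to the monoid homomorphism FY -> X."""
    def hom(word):
        result = unit
        for y in word:
            result = product(result, f(y))
        return result
    return hom

def restrict(hom):
    """Restrict a homomorphism FY -> X to the generators."""
    return lambda y: hom([y])

# Example: X = (int, +, 0), Y = {'a', 'b'}, f sends 'a' to 1 and 'b' to 10.
f = {'a': 1, 'b': 10}.get
hom = extend(f, lambda u, v: u + v, 0)
assert hom(['a', 'b', 'b']) == 21
assert restrict(hom)('a') == f('a')   # round trip on generators
assert extend(restrict(hom), lambda u, v: u + v, 0)(['b', 'a']) == hom(['b', 'a'])
```

The two round-trip assertions are precisely the statement that extension and restriction are mutually inverse, i.e. that the correspondence is a bijection.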

Counit-unit adjunction. One can also verify directly that ε and η are natural. Then, a direct verification that they form a counit-unit adjunction is as follows:

The first counit-unit equation says that for each set Y the composition

FY → FGFY → FY (that is, F(ηY) followed by εFY)

should be the identity. The intermediate group FGFY is the free group generated freely by the words of the free group FY. (Think of these words as placed in parentheses to indicate that they are independent generators.) The arrow F(ηY) is the group homomorphism from FY into FGFY sending each generator y of FY to the corresponding word of length one (y) as a generator of FGFY. The arrow εFY is the group homomorphism from FGFY to FY sending each generator to the word of FY it corresponds to (so this map is "dropping parentheses"). The composition of these maps is indeed the identity on FY.

The second counit-unit equation says that for each group X the composition

GX → GFGX → GX (that is, ηGX followed by G(εX))

should be the identity. The intermediate set GFGX is just the underlying set of FGX. The arrow ηGX is the "inclusion of generators" set map from the set GX to the set GFGX. The arrow G(εX) is the set map from GFGX to GX which underlies the group homomorphism sending each generator of FGX to the element of X it corresponds to ("dropping parentheses"). The composition of these maps is indeed the identity on GX.

Free objects are all examples of a left adjoint to a forgetful functor which assigns to an algebraic object its underlying set. These algebraic free functors have generally the same description as in the detailed description of the free group situation above.

Products, fibred products, equalizers, and kernels are all examples of the categorical notion of a limit. Any limit functor is right adjoint to a corresponding diagonal functor (provided the category has the type of limits in question), and the counit of the adjunction provides the defining maps from the limit object (i.e. from the diagonal functor on the limit, in the functor category). Below are some specific examples.

Products. Let Π : Grp2 → Grp be the functor which assigns to each pair (X1, X2) the product group X1×X2, and let Δ : Grp2 ← Grp be the diagonal functor which assigns to every group X the pair (X, X) in the product category Grp2. The universal property of the product group shows that Π is right adjoint to Δ. The counit of this adjunction is the defining pair of projection maps from X1×X2 to X1 and X2 which define the limit, and the unit is the diagonal inclusion of a group X into X1×X2 (mapping x to (x, x)).
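Stripping the group structure away, the underlying hom-set bijection hom((X, X), (Y1, Y2)) ≅ hom(X, Y1 × Y2) can be sketched in Python (pair and unpair are illustrative names), together with the unit (diagonal) and counit (projections):

```python
# Sketch (in Set rather than Grp): the bijection underlying Delta -| Pi.
# A pair of maps (f1, f2) corresponds to the single map x |-> (f1(x), f2(x)).

def pair(f1, f2):
    """hom((X, X), (Y1, Y2)) -> hom(X, Y1 x Y2)."""
    return lambda x: (f1(x), f2(x))

def unpair(f):
    """hom(X, Y1 x Y2) -> hom((X, X), (Y1, Y2)), by postcomposing projections."""
    return (lambda x: f(x)[0], lambda x: f(x)[1])

f1, f2 = (lambda x: x + 1), (lambda x: 2 * x)
g1, g2 = unpair(pair(f1, f2))
assert (g1(3), g2(3)) == (f1(3), f2(3)) == (4, 6)   # round trip

diagonal = lambda x: (x, x)                           # unit at X
proj1, proj2 = (lambda p: p[0]), (lambda p: p[1])     # counit at (Y1, Y2)
assert pair(proj1, proj2)(diagonal(7)) == (7, 7)
```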

The cartesian product of sets, the product of rings, the product of topological spaces etc. follow the same pattern; it can also be extended in a straightforward manner to more than just two factors. More generally, any type of limit is right adjoint to a diagonal functor.

Kernels. Consider the category D of homomorphisms of abelian groups. If f1 : A1 → B1 and f2 : A2 → B2 are two objects of D, then a morphism from f1 to f2 is a pair (gA, gB) of morphisms such that gBf1 = f2gA. Let G : D → Ab be the functor which assigns to each homomorphism its kernel and let F : D ← Ab be the functor which maps the group A to the homomorphism A → 0. Then G is right adjoint to F, which expresses the universal property of kernels. The counit of this adjunction is the defining embedding of a homomorphism's kernel into the homomorphism's domain, and the unit is the morphism identifying a group A with the kernel of the homomorphism A → 0.

A suitable variation of this example also shows that the kernel functors for vector spaces and for modules are right adjoints. Analogously, one can show that the cokernel functors for abelian groups, vector spaces and modules are left adjoints.

Coproducts, fibred coproducts, coequalizers, and cokernels are all examples of the categorical notion of a colimit. Any colimit functor is left adjoint to a corresponding diagonal functor (provided the category has the type of colimits in question), and the unit of the adjunction provides the defining maps into the colimit object. Below are some specific examples.

Coproducts. If F : Ab ← Ab2 assigns to every pair (X1, X2) of abelian groups their direct sum, and if G : Ab → Ab2 is the functor which assigns to every abelian group Y the pair (Y, Y), then F is left adjoint to G, again a consequence of the universal property of direct sums. The unit of this adjoint pair is the defining pair of inclusion maps from X1 and X2 into the direct sum, and the counit is the additive map from the direct sum of (X, X) back to X (sending an element (a, b) of the direct sum to the element a+b of X).

Adjoining an identity to a rng. This example was discussed in the motivation section above. Given a rng R, a multiplicative identity element can be added by taking R × Z and defining a Z-bilinear product with (r,0)(0,1) = (0,1)(r,0) = (r,0), (r,0)(s,0) = (rs,0), (0,1)(0,1) = (0,1). This constructs a left adjoint to the functor taking a ring to the underlying rng.
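As a sketch of this construction (sometimes attributed to Dorroh), the following Python class implements the product on R × Z forced by the relations above, taking R to be the rng of even integers, which has no multiplicative identity of its own:

```python
# Sketch: the unitalization R x Z of a rng R, with the Z-bilinear product
# forced by (r,0)(s,0) = (rs,0), (r,0)(0,1) = (0,1)(r,0) = (r,0), and
# (0,1)(0,1) = (0,1); expanded out, (r, m)(s, n) = (rs + nr + ms, mn).

class Unitalization:
    def __init__(self, r, n):
        self.r, self.n = r, n            # r in R (even integers here), n in Z

    def __mul__(self, other):
        return Unitalization(
            self.r * other.r + other.n * self.r + self.n * other.r,
            self.n * other.n,
        )

    def __eq__(self, other):
        return (self.r, self.n) == (other.r, other.n)

one = Unitalization(0, 1)    # the adjoined identity element
x = Unitalization(4, 0)      # the even integer 4, embedded as (4, 0)
assert one * x == x and x * one == x
assert x * x == Unitalization(16, 0)
```

The embedded copy of R multiplies as it did before, while (0, 1) now acts as a genuine two-sided identity.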

Ring extensions. Suppose R and S are rings, and ρ : R → S is a ring homomorphism. Then S can be seen as an (S, R)-bimodule, and tensoring with S over R yields a functor F = S ⊗R – : R-Mod → S-Mod. Then F is left adjoint to the forgetful functor G : S-Mod → R-Mod.

Tensor products. If R is a ring and M is a right R-module, then the tensor product with M yields a functor F = M ⊗R – : R-Mod → Ab. The functor G : Ab → R-Mod, defined by G(A) = homZ(M, A) for every abelian group A, is a right adjoint to F.

From monoids and groups to rings. The integral monoid ring construction gives a functor from monoids to rings. This functor is left adjoint to the functor that associates to a given ring its underlying multiplicative monoid. Similarly, the integral group ring construction yields a functor from groups to rings, left adjoint to the functor that assigns to a given ring its group of units. One can also start with a field K and consider the category of K-algebras instead of the category of rings, to get the monoid and group rings over K.

Field of fractions. Consider the category Domm of integral domains with injective morphisms. The forgetful functor Field → Domm from fields has a left adjoint: it assigns to every integral domain its field of fractions.

Polynomial rings. Let Ring* be the category of pointed commutative rings with unity (pairs (A, a) where A is a ring, a ∈ A, and morphisms preserve the distinguished elements). The forgetful functor G : Ring* → Ring has a left adjoint: it assigns to every ring R the pair (R[x], x) where R[x] is the polynomial ring with coefficients from R.

The Grothendieck group. In K-theory, the point of departure is to observe that the category of vector bundles on a topological space has a commutative monoid structure under direct sum. One may make an abelian group out of this monoid, the Grothendieck group, by formally adding an additive inverse for each bundle (or equivalence class). Alternatively one can observe that the functor that for each group takes the underlying monoid (ignoring inverses) has a left adjoint. This is a once-for-all construction, in line with the third section discussion above. That is, one can imitate the construction of negative numbers; but there is the other option of an existence theorem. For the case of finitary algebraic structures, the existence by itself can be referred to universal algebra, or model theory; naturally there is also a proof adapted to category theory, too.

A functor with a left and a right adjoint. Let G be the functor from topological spaces to sets that associates to every topological space its underlying set (forgetting the topology, that is). G has a left adjoint F, creating the discrete space on a set Y, and a right adjoint H creating the trivial topology on Y.

Direct and inverse images of sheaves. Every continuous map f : X → Y between topological spaces induces a functor f∗ from the category of sheaves (of sets, or abelian groups, or rings...) on X to the corresponding category of sheaves on Y, the direct image functor. It also induces a functor f−1 from the category of sheaves of abelian groups on Y to the category of sheaves of abelian groups on X, the inverse image functor. f−1 is left adjoint to f∗. Here a more subtle point is that the left adjoint for coherent sheaves will differ from that for sheaves (of sets).

Soberification. The article on Stone duality describes an adjunction between the category of topological spaces and the category of sober spaces that is known as soberification. Notably, the article also contains a detailed description of another adjunction that prepares the way for the famous duality of sober spaces and spatial locales, exploited in pointless topology.

A series of adjunctions. The functor π0 which assigns to a category its set of connected components is left-adjoint to the functor D which assigns to a set the discrete category on that set. Moreover, D is left-adjoint to the object functor U which assigns to each category its set of objects, and finally U is left-adjoint to A which assigns to each set the indiscrete category on that set.

Quantification. Any morphism f : X → Y in a category with pullbacks induces a monotone map f* : Sub(Y) → Sub(X) acting by pullbacks (a monotone map is a functor if we consider the preorders as categories). If this functor has a left/right adjoint, the adjoint is called ∃f and ∀f, respectively.[3]

In the category of sets, if we choose subsets as the canonical subobjects, then these functions are given, for a subset S of X, by

∃f(S) = { y ∈ Y : some x ∈ S has f(x) = y } = f(S),
∀f(S) = { y ∈ Y : every x with f(x) = y lies in S },

while the pullback functor f* sends a subset T of Y to its preimage f−1(T).
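These quantifier adjunctions can be verified exhaustively on a small example. In the Python sketch below (exists_f, forall_f, and pullback are illustrative names), both adjunction equivalences, ∃f(S) ⊆ T iff S ⊆ f*(T), and f*(T) ⊆ S iff T ⊆ ∀f(S), are checked over all pairs of subsets:

```python
# Sketch: the adjunctions exists_f -| pullback -| forall_f in Set,
# checked exhaustively for a small map f : X -> Y.
from itertools import chain, combinations

X, Y = {0, 1, 2, 3}, {'a', 'b'}
f = lambda x: 'a' if x % 2 == 0 else 'b'

pullback = lambda T: {x for x in X if f(x) in T}              # f*(T) = preimage
exists_f = lambda S: {f(x) for x in S}                        # image of S
forall_f = lambda S: {y for y in Y
                      if all(f(x) != y or x in S for x in X)} # fibre over y in S

def subsets(A):
    return chain.from_iterable(combinations(A, k) for k in range(len(A) + 1))

for S in map(set, subsets(X)):
    for T in map(set, subsets(Y)):
        assert (exists_f(S) <= T) == (S <= pullback(T))
        assert (pullback(T) <= S) == (T <= forall_f(S))
```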

Not every functor G : C → D admits a left adjoint. If C is a complete category, then the functors with left adjoints can be characterized by the adjoint functor theorem of Peter J. Freyd: G has a left adjoint if and only if it is continuous and a certain smallness condition is satisfied: for every object Y of D there exists a family of morphisms

fi : Y → G(Xi)

where the indices i come from a set I, not a proper class, such that every morphism

h : Y → G(X)

can be written as

h = G(t) ∘ fi

for some i in I and some morphism

t : Xi → X in C.

An analogous statement characterizes those functors with a right adjoint.

The most important property of adjoints is their continuity: every functor that has a left adjoint (and therefore is a right adjoint) is continuous (i.e. commutes with limits in the category theoretical sense); every functor that has a right adjoint (and therefore is a left adjoint) is cocontinuous (i.e. commutes with colimits).

Since many common constructions in mathematics are limits or colimits, this provides a wealth of information. For example:

applying a right adjoint functor to a product of objects yields the product of the images;

applying a left adjoint functor to a coproduct of objects yields the coproduct of the images;

As stated earlier, an adjunction between categories C and D gives rise to a family of universal morphisms, one for each object in C and one for each object in D. Conversely, if there exists a universal morphism to a functor G : C → D from every object of D, then G has a left adjoint.

However, universal constructions are more general than adjoint functors: a universal construction is like an optimization problem; it gives rise to an adjoint pair if and only if this problem has a solution for every object of D (equivalently, every object of C).

If a functor F: C→D is one half of an equivalence of categories then it is the left adjoint in an adjoint equivalence of categories, i.e. an adjunction whose unit and counit are isomorphisms.

Every adjunction 〈F, G, ε, η〉 extends an equivalence of certain subcategories. Define C1 as the full subcategory of C consisting of those objects X of C for which εX is an isomorphism, and define D1 as the full subcategory of D consisting of those objects Y of D for which ηY is an isomorphism. Then F and G can be restricted to D1 and C1 and yield inverse equivalences of these subcategories.

In a sense, then, adjoints are "generalized" inverses. Note however that a right inverse of F (i.e. a functor G such that FG is naturally isomorphic to 1D) need not be a right (or left) adjoint of F. Adjoints generalize two-sided inverses.

Every adjunction 〈F, G, ε, η〉 gives rise to a monad 〈GF, η, μ〉 in the category D. The natural transformation

η : 1D → GF

is just the unit η of the adjunction and the multiplication transformation

μ : GFGF → GF

is given by μ = GεF. Dually, the triple 〈FG, ε, FηG〉 defines a comonad in C.
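For the free-monoid/forgetful adjunction sketched earlier, the induced monad GF is the list monad: the unit is the singleton map and the multiplication μ = GεF is concatenation. A minimal Python check of the monad laws (unit, mu, and fmap are illustrative names):

```python
# Sketch: the list monad as the monad GF of the free-monoid/forgetful
# adjunction, with the three monad laws checked on sample data.

unit = lambda y: [y]                                  # eta: singleton
mu = lambda xss: [x for xs in xss for x in xs]        # mu = G(epsilon F): concat
fmap = lambda h: lambda xs: [h(x) for x in xs]        # GF on morphisms

xs = [1, 2, 3]
xsss = [[[1], [2, 3]], [[], [4]]]

assert mu(unit(xs)) == xs                  # left unit law:  mu . eta = id
assert mu(fmap(unit)(xs)) == xs            # right unit law: mu . T(eta) = id
assert mu(mu(xsss)) == mu(fmap(mu)(xsss))  # associativity:  mu . mu = mu . T(mu)
```

The unit laws here are exactly the two counit-unit checks carried out for the free-group example above, transported along G.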

Every monad arises from some adjunction—in fact, typically from many adjunctions—in the above fashion. Two constructions, called the category of Eilenberg–Moore algebras and the Kleisli category are two extremal solutions to the problem of constructing an adjunction that gives rise to a given monad.