Category: group-theory

In this post, we introduce the group algebra as another way to view representations and illustrate the usefulness of this approach by studying representations of cyclic groups via elementary ring theory. This is the second part of a series that started with this post. The numbering is consecutive, i.e. when I refer to some result 1.x, it can be found in that post.

A Review of Modules in Linear Algebra

We will begin by reviewing what modules over the polynomial ring $k[X]$ mean in terms of linear algebra, as this will be helpful for motivating the module-theoretic perspective on representations.

Let $k$ be a field, let $V$ be a (say finite-dimensional) vector space over $k$ and let $f$ be a $k$-linear endomorphism of $V$ (so after choosing a basis, we can think of $f$ as a square matrix). Suppose we wish to understand $f$, e.g. find a basis such that $f$ has a particularly nice matrix representation with respect to that basis.

From the pair $(V, f)$, we can define a $k[X]$-module structure on $V$ by defining $k[X]$-scalar multiplication via $p \cdot v := p(f)(v)$, where $p(f)$ means that we substitute $f$ for $X$ in the polynomial $p$, so that $f^n$ means that we apply $f$ $n$ times.

Conversely, given a $k[X]$-module $M$, we can think of it as a $k$-vector space by restricting the scalar multiplication to $k \subseteq k[X]$. We also get a $k$-linear endomorphism of $M$ given by multiplication with $X$.

These constructions are inverse to each other: going from a pair $(V, f)$ to the associated $k[X]$-module, multiplication with $X$ is precisely the endomorphism $f$ we started with.
For a $k[X]$-module, because every polynomial is a $k$-linear combination of powers of $X$, we only need to know the $k$-scalar multiplication and how $X$ acts to reconstruct the $k[X]$-scalar multiplication.

Thus, we can think of pairs $(V, f)$ of vector spaces equipped with an endomorphism as $k[X]$-modules, and we can translate between notions for endomorphisms and notions for $k[X]$-modules.

Here’s an excerpt of a possible dictionary one might use for translation:

| Pairs $(V, f)$ of vector spaces and endomorphisms | $k[X]$-modules |
| --- | --- |
| Subspaces $U \subseteq V$ that are invariant under $f$, i.e. $f(U) \subseteq U$ | $k[X]$-submodules |
| For pairs $(V, f)$, $(W, g)$, a $k$-linear map $\varphi: V \to W$ such that $\varphi \circ f = g \circ \varphi$ | $k[X]$-linear maps |
| For pairs $(V, f)$, $(W, g)$, a $k$-linear isomorphism $\varphi: V \to W$ such that $\varphi \circ f = g \circ \varphi$ | $k[X]$-linear isomorphisms |
| Eigenspace of $f$ associated to $\lambda$ | The submodule of $V$ consisting of all elements annihilated by $X - \lambda$ |
| The minimal polynomial of $f$ | The unique monic generator of the annihilator ideal associated to $V$, i.e. the unique monic polynomial $p$ of minimal possible degree such that $p \cdot v = 0$ for all $v \in V$ |

One could add many more rows.

The important part for finding e.g. a nice basis for $f$ is the third row: if we can find a module $M$ that is isomorphic to the $k[X]$-module associated to $(V, f)$ such that we can write down a basis of $M$ for which multiplication with $X$ has a nice matrix representation, then the third row tells us that this is also a possible matrix representation for $f$!

Now as $k[X]$ is a PID, finitely generated modules over $k[X]$ (in particular those modules that are finite-dimensional over $k$) are very well understood. There’s a structure theorem that tells us that they are finite direct sums of modules of the form $k[X]/(p^m)$, where $p \in k[X]$ is irreducible or zero. (One can also put some conditions on the $p$ and $m$ to make the decomposition unique.) From there, one can easily deduce existence and uniqueness of canonical forms such as the Jordan normal form and the Frobenius normal form, and also properties of the minimal polynomial such as the Cayley–Hamilton theorem.
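To make the $k[X]$-module structure concrete, here is a small numerical sketch (in Python with numpy; the setup and names are my own illustration): the action of a polynomial $p$ on the module $(V, f)$ is just $p(f)$ applied to a vector.

```python
import numpy as np

def poly_act(coeffs, f, v):
    # The k[X]-module structure on (V, f): act by p = a_0 + a_1 X + ... + a_d X^d
    # via p . v = a_0 v + a_1 f(v) + ... + a_d f^d(v).
    result = np.zeros_like(v, dtype=float)
    power = np.array(v, dtype=float)
    for a in coeffs:
        result += a * power
        power = f @ power   # one more application of f: X^(i+1) . v = f(X^i . v)
    return result

f = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # rotation by 90 degrees, so f^2 = -id
v = np.array([1.0, 0.0])
# X^2 + 1 is the minimal polynomial of f, so it annihilates every vector:
print(poly_act([1.0, 0.0, 1.0], f, v))   # [0. 0.]
```

The annihilator ideal of this module is generated by $X^2 + 1$, in line with the last row of the dictionary above.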

The Group Algebra

Let $G$ be a group and let $\rho$ be a representation of $G$ over a field $k$ on a vector space $V$. By abuse of notation, we denote the action of an element $g \in G$ on a vector $v \in V$ by $g \cdot v$. Let $g \in G$, $\lambda \in k$ and $v \in V$; then there seems to be only one sensible way to define what we mean by $(g + \lambda) \cdot v$: clearly, this has to be $g \cdot v + \lambda v$ if any sensible rules hold.
But what is the expression $g + \lambda$ supposed to mean? After all, we can’t just add an element of a group and an element of a field.

Or can we?

Definition 2.1 Let $G$ be a group (we denote the neutral element by $e$) and let $k$ be a field; then the group algebra $k[G]$ is defined as the vector space over $k$ freely generated by the elements of $G$. We denote the elements of the basis corresponding to elements in $G$ by the same symbols. Group multiplication defines a multiplication of basis elements that we extend linearly in each argument. This defines a multiplication that extends the multiplication of $G$ and makes $k[G]$ into a $k$-algebra with unit $e$.

We leave the verification of the ring axioms to the reader. Intuitively, the group algebra consists of finite formal $k$-linear combinations of elements in $G$, i.e. we can write them as $\sum_{g \in G} \lambda_g g$ where all but finitely many coefficients $\lambda_g$ vanish. To compute a product of two such expressions, we expand using distributivity and then use the group multiplication to multiply the products of the basis vectors.
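For a finite group, this recipe is easy to implement; here is a small sketch (the encoding of elements of $k[G]$ as Python dicts mapping group elements to coefficients is of course just one possible choice):

```python
from collections import defaultdict

def group_algebra_mul(a, b, mul):
    # Product in k[G]: expand by distributivity, multiply basis elements
    # with the group law `mul`, and collect coefficients.
    c = defaultdict(int)
    for g, x in a.items():
        for h, y in b.items():
            c[mul(g, h)] += x * y
    return {g: x for g, x in c.items() if x != 0}

# the cyclic group Z/4, written additively, so k is exponent and value is coefficient
mul = lambda g, h: (g + h) % 4
a = {0: 1, 1: 2}   # e + 2g
b = {1: 3, 3: 1}   # 3g + g^3
print(group_algebra_mul(a, b, mul))   # (e + 2g)(3g + g^3) = 2e + 3g + 6g^2 + g^3
```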

Lemma 2.2 For every representation of $G$ on a vector space $V$ over $k$, there is a unique way to define a $k[G]$-module structure on $V$ that extends the given group action and $k$-scalar multiplication; conversely, every $k[G]$-module gives rise to a representation in a canonical way. Using this identification, a morphism of representations corresponds precisely to a $k[G]$-linear map.

Proof Given a representation of $G$ on $V$, a vector $v \in V$ and an element $\sum_{g \in G} \lambda_g g \in k[G]$ (this means the sum is finite), the only possible way to define the scalar multiplication such that the module axioms hold, $k$ acts via the given scalar multiplication and $G$ acts via the group action, is to set $\left(\sum_{g \in G} \lambda_g g\right) \cdot v := \sum_{g \in G} \lambda_g (g \cdot v)$.
Note that the RHS is defined just in terms of the group action and the vector space structure. One checks that this defines a module structure, using the linearity of the group action.
Conversely, if we have a $k[G]$-module $V$, we can turn $V$ into a $k$-vector space by restricting scalars to the subring $k \subseteq k[G]$.
We can also restrict the scalar multiplication to the subset $G \subseteq k[G]$. Associativity and unitality in the axioms for a module imply that this defines a group action.
Finally, the group action and the vector space operations are compatible due to the equality $g\lambda = \lambda g$ in $k[G]$ and associativity and distributivity of the scalar multiplication.
The correspondence of morphisms of representations and $k[G]$-linear maps follows by similar arguments: the idea is that every element in $k[G]$ is a $k$-linear combination of elements in $G$, so it’s enough that a map commutes with $k$-scalar multiplication and the $G$-action to see that it preserves $k[G]$-scalar multiplication.

The relation between the description of linear-algebraic objects as $k[X]$-modules and representations as $k[G]$-modules is as follows:
in the former, we looked at any endomorphism without any condition, but only at one endomorphism at a time; that’s why the $k$-algebra of choice to describe such an object is the polynomial algebra $k[X]$, which is generated freely by one element $X$, i.e. we don’t impose any relation.
For group representations, we consider many endomorphisms (actually automorphisms) at once, subject to all the relations that hold in the group $G$. That’s why $k[G]$ isn’t necessarily generated by one element, and by inheriting the multiplication from $G$, $k[G]$ also inherits all the relations between elements in $G$.

With lemma 2.2, we have added another characterization of representations to our collection (cf. lemma 1.3).
If one is really careful with the constructions in the lemma, one sees that they define an isomorphism of categories, which is just a formalization of the intuition that representations and $k[G]$-modules are exactly the same, just with a different point of view.

Let’s also mention the universal property of the group algebra, which can be quite useful even if you’re not a category aficionado and which implies lemma 2.2 as a special case.

Lemma 2.3 Let $k$ be a field and $G$ be a group; then for any $k$-algebra $A$ and every group homomorphism $f: G \to A^\times$ to the group of units, there is a unique $k$-algebra homomorphism $k[G] \to A$ that extends $f$.

Proof The same argument as in lemma 2.2 applies: the only way we can define an extension that is $k$-linear is by sending $\sum_{g \in G} \lambda_g g$ to $\sum_{g \in G} \lambda_g f(g)$; this just follows from the fact that $G$ is a basis for $k[G]$. One checks that because $f$ is a group homomorphism and the multiplication in $k[G]$ is inherited from $G$, this also respects the unit element and multiplication, so it is a $k$-algebra homomorphism.

To see how this implies one direction in lemma 2.2, note that for a vector space $V$, the endomorphism ring $\mathrm{End}_k(V)$ is a $k$-algebra (here multiplication is composition) and for the group of units, we get $\mathrm{End}_k(V)^\times = \mathrm{GL}(V)$.
Thus if we have a group homomorphism $G \to \mathrm{GL}(V)$, the universal property tells us that there is a unique $k$-algebra homomorphism $\varphi: k[G] \to \mathrm{End}_k(V)$. Now we can define a module structure by uncurrying:
define $x \cdot v := \varphi(x)(v)$ for $x \in k[G]$ and $v \in V$.
The fact that $\varphi$ is a ring homomorphism translates neatly into the module axioms, and $k$-linearity gives us that the $k$-scalar multiplication on $V$ remains the same.

Having this perspective is quite useful, because there are a lot of constructions for modules that now carry over directly to representations: we can form direct sums and products of representations, quotients etc., and all the properties of those constructions that we know to hold for modules also hold in this case. For example, subrepresentations as defined in 1.18 are the same as $k[G]$-submodules.

Representations of Cyclic Groups

We will use the accessible example of cyclic groups to show how the structure of the group algebra contains information about representations.

Lemma 2.4 If $G$ is cyclic of order $n$, generated by $g$, then $k[G] \cong k[X]/(X^n - 1)$, where the isomorphism sends $g$ to $X$.

Proof One can take the map $k[X] \to k[G]$ that sends $X \to g$ and compute the kernel. As $g$ generates $G$, every element in $k[G]$ is a polynomial in $g$, which implies the surjectivity of that map.
Let’s instead show that both rings satisfy the same universal property:

If $A$ is any $k$-algebra, then a $k$-algebra homomorphism $k[G] \to A$ corresponds to a group homomorphism $G \to A^\times$ by lemma 2.3.
Since $G$ is generated by $g$, a group homomorphism $G \to A^\times$ is uniquely determined by where it sends $g$. As $g$ has order $n$, we can send it precisely to those elements $a \in A$ such that $a^n = 1$ (this condition automatically gives us that $a \in A^\times$). Thus $k[G]$ has the following universal property:
for any $k$-algebra $A$ and every element $a \in A$ such that $a^n = 1$, there’s a unique $k$-algebra homomorphism $k[G] \to A$ that sends $g$ to $a$.

If $A$ is still any $k$-algebra, then by the homomorphism theorem, a $k$-algebra homomorphism $k[X]/(X^n - 1) \to A$ is the same as a $k$-algebra homomorphism $k[X] \to A$ that sends $X^n - 1$ to $0$. A $k$-algebra homomorphism $k[X] \to A$ is uniquely determined by where it sends $X$, and we can send $X$ to every element in $A$; but due to the condition that $X^n - 1$ must be sent to $0$, for homomorphisms from $k[X]/(X^n - 1)$, we can send $X$ to precisely those elements $a \in A$ such that $a^n = 1$.
Thus we have proved:
for any $k$-algebra $A$ and every element $a \in A$ such that $a^n = 1$, there’s a unique $k$-algebra homomorphism $k[X]/(X^n - 1) \to A$ that sends $X$ to $a$.

At this point, to finish the proof, one can either mumble something about the Yoneda lemma with a smug expression, or one can make the usual argument why two objects with the same universal property are isomorphic. (This should be familiar to anyone who has seen e.g. why the tensor product is unique.)
Let’s do the latter: because of the universal property of $k[G]$, we can find a unique $k$-algebra homomorphism $\varphi: k[G] \to k[X]/(X^n - 1)$ that sends $g$ to $X$. This also works in the other direction: we get a unique $k$-algebra homomorphism $\psi: k[X]/(X^n - 1) \to k[G]$ that sends $X$ to $g$. Then $\psi \circ \varphi: k[G] \to k[G]$ is a $k$-algebra homomorphism that sends $g$ to itself.
By the universal property, there can only be one such homomorphism, but we know that the identity is an example. Therefore $\psi \circ \varphi = \mathrm{id}$.
By the same argument, $\varphi \circ \psi = \mathrm{id}$.
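As a quick computational sanity check of the isomorphism (not part of the proof, just an aside of mine), one can verify that the product in $k[\mathbb{Z}/n]$ agrees with polynomial multiplication modulo $X^n - 1$ under the identification $g \mapsto X$:

```python
import random

# random elements of k[G] for G = Z/5, stored as coefficient lists indexed by powers of g
n = 5
random.seed(0)
a = [random.randint(-5, 5) for _ in range(n)]
b = [random.randint(-5, 5) for _ in range(n)]

# product in the group algebra: c_k = sum of a_i * b_j over i + j = k (mod n)
conv = [sum(a[i] * b[(k - i) % n] for i in range(n)) for k in range(n)]

# product in k[X]/(X^n - 1): multiply the polynomials, then reduce using X^n = 1
prod = [0] * (2 * n - 1)
for i in range(n):
    for j in range(n):
        prod[i + j] += a[i] * b[j]
reduced = [prod[k] + (prod[k + n] if k + n < len(prod) else 0) for k in range(n)]

assert conv == reduced   # the isomorphism g -> X identifies the two products
```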

Exercise Do a similar argument to determine the group algebra $k[G]$, where $G$ is a product of two cyclic groups, as a quotient of the polynomial ring in two variables. Why can this approach not work in this form for nonabelian groups?

We can use this to describe the representations of cyclic groups by decomposing $k[X]/(X^n - 1)$ with the Chinese remainder theorem. If we do that, we will end up with a product of rings, which is one of the reasons why it’s useful to think about modules over products of rings. If $R$ and $S$ are rings, then for every pair $(M, N)$ where $M$ is an $R$-module and $N$ is an $S$-module, we can make $M \oplus N$ into an $R \times S$-module by having $R$ act on the left factor and $S$ on the right factor. The following lemma tells us that every $R \times S$-module arises in such a way:

Lemma 2.5 If $R$ and $S$ are rings, then every $R \times S$-module $M$ is isomorphic to a direct sum $M_1 \oplus M_2$ where $M_1$ is an $R$-module and $M_2$ is an $S$-module, such that $R$ just acts on the first factor and $S$ just on the second one. $M_1$ and $M_2$ are canonically determined.

Proof Let $M$ be an $R \times S$-module. Consider the central idempotents $e_1 = (1, 0)$ and $e_2 = (0, 1)$. Then $e_1 + e_2 = 1$ and $e_1 e_2 = 0$. Now set $M_1 = e_1 M$ and $M_2 = e_2 M$; we get that $M = e_1 M + e_2 M$ and $e_1 M \cap e_2 M = 0$, so $M = M_1 \oplus M_2$. It’s clear that $S$ acts trivially on $M_1$ and $R$ acts trivially on $M_2$, which shows the statement. We can use the same $e_1$ and $e_2$ for all modules, which makes this decomposition canonical.

(Note: The above construction can be enhanced into an equivalence between the category of $R \times S$-modules and the product of the categories of $R$-modules and $S$-modules.)
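Here is the proof of lemma 2.5 in miniature (the choice of example is mine): under the CRT isomorphism $\mathbb{Z}/6 \cong \mathbb{Z}/2 \times \mathbb{Z}/3$, the central idempotents $e_1 = (1,0)$ and $e_2 = (0,1)$ correspond to $3$ and $4$ mod $6$.

```python
n = 6
e1, e2 = 3, 4
assert e1 * e1 % n == e1 and e2 * e2 % n == e2   # both are idempotent
assert (e1 + e2) % n == 1                        # e1 + e2 = 1
assert e1 * e2 % n == 0                          # e1 * e2 = 0

# the decomposition M = e1*M + e2*M for the module M = Z/6 itself
M1 = sorted({e1 * m % n for m in range(n)})
M2 = sorted({e2 * m % n for m in range(n)})
print(M1, M2)   # [0, 3] and [0, 2, 4], i.e. a Z/2-part and a Z/3-part
```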

Now let’s finally describe the representations of cyclic groups over $\mathbb{C}$!

If $G$ is cyclic of order $n$, generated by $g$, then by Lemma 2.4, $\mathbb{C}[G] \cong \mathbb{C}[X]/(X^n - 1)$, where the isomorphism sends $g$ to $X$. Let $\zeta = e^{2\pi i/n}$; then we have the factorization $X^n - 1 = \prod_{j=0}^{n-1} (X - \zeta^j)$, so by the Chinese remainder theorem, we get
$$\mathbb{C}[X]/(X^n - 1) \cong \prod_{j=0}^{n-1} \mathbb{C}[X]/(X - \zeta^j).$$
Note that this is an isomorphism both of rings and of $\mathbb{C}[X]$-modules, which means that we send $X$ to $X$ in each component.

Therefore, by lemma 2.5, every $\mathbb{C}[X]/(X^n - 1)$-module is a direct sum of $\mathbb{C}[X]/(X - \zeta^j)$-modules where $j$ varies. But for each $j$, we have $\mathbb{C}[X]/(X - \zeta^j) \cong \mathbb{C}$ via sending $X$ to $\zeta^j$. Modules over a field are easy to understand: they are just a (possibly infinite) direct sum of copies of the field. Thus we get that every $\mathbb{C}[G]$-module is a direct sum of copies of the one-dimensional modules $\mathbb{C}[X]/(X - \zeta^j)$.

Through all the isomorphisms, we have kept track of where the generator $g$ is sent: we send it first to $X$, then to $X$ in each component (modulo something different in the CRT isomorphism), then to $\zeta^j$. This means that $g$ acts on the (one-dimensional) module corresponding to $\zeta^j$ by multiplication with $\zeta^j$. And since in general, all modules are a direct sum of such modules (for different $j$), this means that $g$ really acts as a diagonal matrix where all the diagonal entries are $n$-th roots of unity. (Even for infinite-dimensional representations.)
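One can watch this happen numerically; as an example of my own choosing, let the generator of $\mathbb{Z}/4$ act on $\mathbb{C}^4$ by cyclically shifting coordinates:

```python
import numpy as np

# the generator of Z/4 acting on C^4 as a cyclic shift (a permutation representation)
n = 4
g = np.roll(np.eye(n), 1, axis=0)
assert np.allclose(np.linalg.matrix_power(g, n), np.eye(n))   # g has order 4

# g is diagonalizable and every eigenvalue is an n-th root of unity,
# matching the decomposition of C[G] into n one-dimensional factors
eigvals = np.linalg.eigvals(g)
assert np.allclose(eigvals**n, 1.0)
```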

We can also say something about a more general setting. Suppose that the characteristic of $k$ does not divide $n$. Then $X^n - 1$ has distinct roots (in an algebraic closure of $k$), so we can factorize $X^n - 1 = p_1 \cdots p_r$ where all $p_i$ are irreducible and pairwise distinct. Doing the same Chinese remainder theorem argument, we get that

$$k[G] \cong k[X]/(X^n - 1) \cong \prod_{i=1}^{r} k[X]/(p_i).$$ Now as the $p_i$ are irreducible, $k[X]/(p_i)$ will be a field and the dimension over $k$ will be equal to the degree of $p_i$, so we can again appeal to linear algebra and get the following result:

Lemma 2.6 Let $G$ be a cyclic group of order $n$ and let $k$ be a field such that the characteristic of $k$ does not divide $n$, and let $X^n - 1 = p_1 \cdots p_r$ be the factorization of $X^n - 1$ into irreducibles. Then for every $i$, there is a $k[G]$-module corresponding to $p_i$ such that the dimension of the module is equal to the degree of $p_i$, and in general, every $k[G]$-module is a direct sum of such modules.

Example 2.7 For $k = \mathbb{R}$, the only factors that can occur for $X^n - 1$ are $X - 1$, $X + 1$ and quadratic factors of the form $X^2 - 2\cos(2\pi m/n)X + 1$. By choosing a clever basis for the modules corresponding to the quadratic factors, one obtains rotation representations as in example 1.6 (though the angle of rotation will be $2\pi m/n$ instead of $2\pi/n$ as in the example.)

Example 2.8 For $k = \mathbb{Q}$, the factorization of $X^n - 1$ is well-known: it factors as $X^n - 1 = \prod_{d \mid n} \Phi_d$, where $\Phi_d$ is the $d$-th cyclotomic polynomial, which is irreducible over $\mathbb{Q}$ of degree $\varphi(d)$. Thus the number of irreducible representations of $G$ over $\mathbb{Q}$ is equal to the number of divisors of $n$, and for each divisor $d$, there’s an irreducible representation of degree $\varphi(d)$.
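One can check this count with a computer algebra system; here is a sketch using sympy (assuming its `factor_list` and `divisors` functions, which factor over $\mathbb{Q}$ by default):

```python
import sympy as sp

x = sp.symbols('x')
n = 12
factors = sp.factor_list(x**n - 1)[1]   # irreducible factors over Q with multiplicities

# one irreducible factor per divisor d of n (the d-th cyclotomic polynomial),
# each with multiplicity 1, and the degrees phi(d) sum up to n
assert len(factors) == len(sp.divisors(n))
assert all(mult == 1 for _, mult in factors)
assert sum(sp.degree(p, x) for p, _ in factors) == n
for p, _ in factors:
    print(p)
```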

We can also use this approach to say something about representations of cyclic groups in the case where the characteristic divides the group order. For simplicity, we just treat the case that $k$ has characteristic $2$ and $G$ is cyclic of order $2$. The factorization of $X^2 - 1$ is just $X^2 - 1 = (X - 1)^2$ and we get $k[G] \cong k[X]/(X - 1)^2$. Here the Chinese remainder theorem doesn’t help.
But one can apply the structure theorem for finitely generated modules over a PID mentioned in the first section, noting that every $k[X]/(X - 1)^2$-module is also a $k[X]$-module, to get that every finitely generated $k[G]$-module is a direct sum of copies of $k[X]/(X - 1)$ and $k[X]/(X - 1)^2$. If we look at the action of $X$ (which corresponds to the generator of $G$) on these modules, we see that it acts by multiplication with $1$ on $k[X]/(X - 1)$, i.e. via the identity (we say “trivially”).
The action on $k[X]/(X - 1)^2$ is more interesting: using the basis given by (the residue classes of) $X - 1$ and $1$, we see that the action of $X$ corresponds to the transvection $(x, y) \mapsto (x + y, y)$ (cf. example 1.7).
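The same phenomenon occurs for any prime $p$ in characteristic $p$: the transvection matrix $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ has order exactly $p$ over $\mathbb{F}_p$. A quick computational check (an aside of mine, with $p = 5$ as an example):

```python
p = 5   # any prime

def matmul_mod(a, b, p):
    # 2x2 matrix multiplication with entries in F_p
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % p for j in range(2)]
            for i in range(2)]

t = [[1, 1],
     [0, 1]]                        # the transvection over F_p
m = [[1, 0], [0, 1]]
for _ in range(p):
    m = matmul_mod(m, t, p)
assert m == [[1, 0], [0, 1]]        # t^p = id over F_p
assert matmul_mod(t, t, p) != [[1, 0], [0, 1]]   # but no smaller power works
```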

We have seen how the algebraic structure of the group algebra can help to understand representations, and in our example of cyclic groups, it turned out that when the conditions for Maschke’s theorem are satisfied, the group algebra is a product of fields.
We will investigate the structure of the group algebra in more detail in future posts and see that this was not a coincidence.

This blog post provides mostly some motivation, basic definitions and examples for group representations, up to Maschke’s theorem. Only familiarity with linear algebra and elementary group theory is required for understanding the main part of this post. However, there are some examples for readers with more background, which can be safely ignored. The same is true for all categorical remarks.

This will be the first part of a series.

Introduction

The reason to care about groups is that they act on objects. Group actions arise in many different contexts and can provide insight (into the objects as well as into the groups which act on them).

The basic definition of a group action is an action on a set. A set can be thought of as an object without any structure except size (i.e. cardinality). For finite sets, this is more structure than it might seem: we can use counting and divisibility arguments, which leads to strong results on p-groups such as the Sylow theorems, to the easy-to-prove but ubiquitous orbit-stabilizer theorem, and to Burnside’s lemma, which has nontrivial combinatorial implications.

Group actions are everywhere in pure mathematics and frequently the object which is acted upon has more structure than just being a set, e.g. a topological space. In these situations, the natural thing to investigate (or to require, if you’re writing a definition) is some compatibility between the group action and the structure on the object. In the example of a topological space, one would require the action to be continuous.

A common structure is that of a vector space. The usefulness of vector spaces seems to stem from the fact that they are both very well understood and still give one a lot of tools to work with: we have duals, tensor products, traces, determinants, eigenvalues etc. Thus, a technique that is sometimes used is “linearization”, in which one tries to reduce a problem to linear algebra, or at least to gain insight by using linear-algebraic methods. Examples are the tangent space of a smooth manifold or variety (and, for smooth maps, the derivative) or the linearization of a nonlinear ODE.

Representations of groups can be thought of as a linearization of group theory, or more precisely as a linearization of group actions: one can define them as actions of groups on vector spaces that respect the linear structure. They arise with a ubiquity comparable to (nonlinear, merely “set-theoretic”) group actions. When they do, they are arguably even more useful, since vector spaces have a lot more structure than sets, as described in the previous paragraph.

Group Representations

Definition 1.1 Let $k$ be a field. Then for a group $G$, a ($k$-linear) representation of $G$ on a $k$-vector space $V$ is a map $G \times V \to V$, $(g, v) \mapsto g \cdot v$ that satisfies:

$e \cdot v = v$, $(gh) \cdot v = g \cdot (h \cdot v)$, $g \cdot (v + w) = g \cdot v + g \cdot w$ and $g \cdot (\lambda v) = \lambda (g \cdot v)$ for all $g, h \in G$, $v, w \in V$ and $\lambda \in k$, where $e$ is the neutral element.

Definition 1.2 In the above situation, the degree or dimension of the representation is the dimension of $V$ over $k$.

Note that the first two axioms state that a representation is a group action and the other two axioms state that for any fixed $g \in G$, the map $v \mapsto g \cdot v$ is $k$-linear. This map is also invertible, since the axioms imply that $v \mapsto g^{-1} \cdot v$ is an inverse. By the second axiom, we also get that the map $g \mapsto (v \mapsto g \cdot v)$ is a group homomorphism. So by using a currying argument, we have seen that every group representation gives rise to a group homomorphism $G \to \mathrm{GL}(V)$.

Conversely, given a group homomorphism $\rho: G \to \mathrm{GL}(V)$, we can uncurry to get a representation of $G$ on $V$ by setting $g \cdot v := \rho(g)(v)$.

Thus we have another characterization, which allows us to apply concepts defined for group homomorphisms like kernel, image etc., whereas the first characterization allows us to apply notions defined for group actions.

While we’re at it, we may as well add a rephrasing in categorical language of the last characterization and get the following

Lemma 1.3 Let $k$ be a field; then for a group $G$ and a vector space $V$ over $k$, the following data are equivalent:

A representation of $G$ on $V$ as in definition 1.1

A group homomorphism $G \to \mathrm{GL}(V)$

A covariant functor from $G$, considered as a one-object category, to the category of $k$-vector spaces that sends the single object in $G$ to $V$

This is the direct analog of equivalent characterizations of group actions: we can also view them as homomorphisms to the group of permutations of a set or as functors from a group to the category of sets.

(Viewing a group as a category works like this: let $G$ be a group, then define a category with a single object $*$ and set $\mathrm{Hom}(*, *) = G$. Composition, i.e. $\circ: \mathrm{Hom}(*, *) \times \mathrm{Hom}(*, *) \to \mathrm{Hom}(*, *)$, is just group multiplication. Associativity and having an identity follow from the group axioms.)

The last characterization allows one to apply constructions from category theory, such as composing representations with other functors, but we will not use it in this post except as an alternative description.

Remark 1.4 One can also consider representations of monoids, or the case where $k$ is any ring and $V$ is a $k$-module.

Remark 1.5 We have defined only left representations. Reversing the chirality in the definitions is straightforward and gives rise to the notion of right representations.

Now for some examples.

As a zeroth example, note that representations of the trivial group are just vector spaces.

Example 1.6 Let $G$ be a finite cyclic group of order $n$ with a fixed generator $g$; then a homomorphism from $G$ to any group $H$ is determined by where it sends $g$, and we can send $g$ to precisely those elements $h \in H$ such that $h^n = 1$, so a representation of $G$ is just a linear automorphism $f$ that satisfies this equation.
For $k = \mathbb{R}$, one can take for example the rotation matrix $\begin{pmatrix} \cos(2\pi/n) & -\sin(2\pi/n) \\ \sin(2\pi/n) & \cos(2\pi/n) \end{pmatrix}$ to define a two-dimensional representation of $G$.
For $k = \mathbb{C}$, one can take multiplication by $e^{2\pi i/n}$ to get a one-dimensional representation of $G$. Both representations correspond to having $g$ act by a counterclockwise rotation of $360/n$ degrees. The former can be obtained from the latter by restricting scalars from $\mathbb{C}$ to $\mathbb{R}$.
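Numerically (a check of my own, not part of the example): the rotation matrix for $n = 6$ indeed has order exactly $6$, so it generates a representation of $\mathbb{Z}/6$.

```python
import numpy as np

# rotation by 2*pi/n generates a two-dimensional real representation of Z/n
n = 6
theta = 2 * np.pi / n
r = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(np.linalg.matrix_power(r, n), np.eye(2))   # r^n = id
# and no smaller positive power is the identity, so g -> r has order exactly n
assert not any(np.allclose(np.linalg.matrix_power(r, m), np.eye(2))
               for m in range(1, n))
```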

Example 1.7 Let $G = (k, +)$, the additive group of $k$; then $G$ acts on $k^2$ via transvections: $\lambda \cdot (x, y) := (x + \lambda y, y)$.

Example 1.8 If $X$ is a $G$-set (a set equipped with an action from $G$), then we can consider a vector space $k^{(X)}$ such that the basis elements $(e_x)_{x \in X}$ are indexed by $X$. We can then define the action on the basis elements by setting $g \cdot e_x := e_{g \cdot x}$. This permutation of the basis elements extends uniquely to a linear automorphism of $k^{(X)}$ and we get a representation, called the permutation representation associated to the group action.
From a categorical standpoint, if we view $G$-sets as functors $G \to \mathbf{Set}$ and representations as functors $G \to \mathbf{Vect}_k$, then this construction is just composing a group action with the free module functor $\mathbf{Set} \to \mathbf{Vect}_k$. (Thus this construction is also functorial with respect to the notion of morphisms of representations to be defined later.)

Example 1.9 Let $V$ be any vector space. Let $G = \mathrm{GL}(V)$. Then the identity $\mathrm{GL}(V) \to \mathrm{GL}(V)$ defines a representation. This corresponds to the natural action of $\mathrm{GL}(V)$ on $V$ that comes from the definition of $\mathrm{GL}(V)$ as the group of $k$-linear automorphisms.

Example 1.10 Generalizing the last example, all classical matrix groups such as $\mathrm{SL}_n$, $\mathrm{O}_n$, $\mathrm{U}_n$, $\mathrm{Sp}_{2n}$ etc. are defined as subgroups of some general linear group, so that the subgroup inclusion defines a representation.

Example 1.11 Let $G$ be a finite group and let $p$ be a prime number. Suppose $A \trianglelefteq G$ is a normal abelian subgroup of exponent $p$. Then $A$ is a vector space over $\mathbb{F}_p$ and the conjugation action of $G$ on $A$ is $\mathbb{F}_p$-linear (this is automatic: any group homomorphism between vector spaces over $\mathbb{F}_p$ is $\mathbb{F}_p$-linear), thus we obtain a $\mathbb{F}_p$-linear representation.

Example 1.12 Let $L/K$ be a Galois extension. Then $\mathrm{Gal}(L/K)$ acts by definition on $L$ by $K$-linear field automorphisms. We can just forget the “field automorphism” part and consider $L$ just as a $K$-vector space; then we get a representation of $\mathrm{Gal}(L/K)$ on $L$.
If $L$ and $K$ are number fields with rings of integers $\mathcal{O}_L$ and $\mathcal{O}_K$, and $\mathfrak{p}$ is a non-zero prime ideal in $\mathcal{O}_K$, then $\mathrm{Gal}(L/K)$ acts on $\mathcal{O}_L/\mathfrak{p}\mathcal{O}_L$, giving a representation that is linear over the residue field $\mathcal{O}_K/\mathfrak{p}$.

Example 1.13 Let $G = S_n$ and let $V$ be any vector space over $k$; then $S_n$ acts on $V^{\otimes n}$, where we tensor $n$ copies of $V$. Here $S_n$ acts on $V^{\otimes n}$ by permuting the components of the tensors. Explicitly, if $v_1 \otimes \dots \otimes v_n$ is an elementary tensor, then we can define the result of the action of $\sigma \in S_n$ on that to be $v_{\sigma^{-1}(1)} \otimes \dots \otimes v_{\sigma^{-1}(n)}$.

Example 1.14 Let $A$ be a finite-dimensional $k$-algebra; then the group of units $A^\times$ acts on $A$ by conjugation and this action is $k$-linear, and thus we obtain a representation of the unit group on the underlying vector space of the algebra. (If $k = \mathbb{R}$ and $A = \mathbb{H}$, the quaternions, then a suitable restriction of the domain and codomain of this representation gives a description of the Hopf fibration.)

Example 1.15 In the same spirit as the last example, let $G$ be an algebraic group over $k$. Let $\mathfrak{g}$ be the Lie algebra of $G$. For any $g \in G$, the conjugation map $x \mapsto gxg^{-1}$ is a smooth automorphism of $G$ fixing the identity, so we can take the derivative at the identity and get a linear automorphism $\mathrm{Ad}(g): \mathfrak{g} \to \mathfrak{g}$. The map $g \mapsto \mathrm{Ad}(g)$ is a representation, called the adjoint representation of $G$. (The same construction works verbatim for Lie groups.)

Example 1.16 Let $M$ be a smooth connected manifold and let $E \to M$ be a vector bundle with a flat connection. Let $x \in M$ be a base point and set $G = \pi_1(M, x)$ and $V = E_x$. If we take a smooth loop $\gamma$ based at $x$, parallel transport along that loop defines an automorphism of $V$.
The flatness condition implies that this automorphism depends only on the homotopy class of $\gamma$, and by smooth approximation, every homotopy class of continuous loops may be represented by a smooth loop; thus we obtain the holonomy representation of $\pi_1(M, x)$ on $E_x$. It turns out that this representation uniquely determines the flat bundle.

As is common practice with group actions, if $\rho: G \to \mathrm{GL}(V)$ is a representation, we also write just $g \cdot v$ or $gv$ for $\rho(g)(v)$. By further abuse of notation, we will also just call $V$ a representation of $G$ where the action is clear from the context.

Definition 1.17 If $V$ and $W$ are $k$-linear representations of a group $G$ for some field $k$, then a morphism of representations (also called intertwining operator) from $V$ to $W$ is a $k$-linear map $f: V \to W$ such that $f(g \cdot v) = g \cdot f(v)$ for all $g \in G$ and $v \in V$ (i.e. $f$ is $G$-equivariant.)

Note that if we consider representations as functors, then a morphism of representations is just a natural transformation. Indeed, for any $g \in G$, naturality with respect to $g$ as a morphism is precisely the requirement that $f(g \cdot v) = g \cdot f(v)$ for all $v \in V$.

Example 1.17 In the situation of example 1.13, let $h: V \to V$ be linear; then we can define $h^{\otimes n}: V^{\otimes n} \to V^{\otimes n}$ by acting on each factor: $h^{\otimes n}(v_1 \otimes \dots \otimes v_n) = h(v_1) \otimes \dots \otimes h(v_n)$ for an elementary tensor. Since we act in the same way in each component, this commutes with permutation of the factors; thus $h^{\otimes n}$ defines a morphism of the representation of $S_n$ given by permuting the factors in the tensor product.

Definition 1.18 If $V$ is a representation of $G$, then a subspace $U \subseteq V$ that is $G$-invariant (i.e. $g \cdot u \in U$ for all $g \in G$ and $u \in U$) defines again a representation of $G$. These subspaces are called subrepresentations of $V$.

If $U$ is a subrepresentation of $V$, then the inclusion $U \to V$ is a morphism of representations, which gives a (quite general) family of examples for morphisms.

Example 1.19 In the situation of example 1.7, consider the subspace of $k^2$ spanned by $e_1 = (1, 0)$; this is a subrepresentation because $\lambda \cdot (x, 0) = (x, 0)$.

Example 1.20 Given a morphism of representations, the kernel and the image are subrepresentations of the domain and codomain, respectively.

Example 1.21 In the situation of example 1.8, suppose that $Y \subseteq X$ is a sub-$G$-set, i.e. we have $g \cdot y \in Y$ for all $g \in G$ and $y \in Y$; then $Y$ is itself a $G$-set and if we apply the same construction to $Y$, the resulting vector space $k^{(Y)}$ is a subspace of $k^{(X)}$ in a canonical way, and so also a subrepresentation. (This is a special case of the mentioned functoriality of this construction.)
If $X$ is finite, another subrepresentation is given by the span of $\sum_{x \in X} e_x$.

We now come to the first substantial theorem about representations.

Theorem 1.22 (Maschke) Let $G$ be a finite group and suppose that the order $|G|$ is invertible in $k$. Then if $V$ is a finite-dimensional representation and $U \subseteq V$ is a subrepresentation, then there exists another subrepresentation $W \subseteq V$ such that $V = U \oplus W$.

Proof By linear algebra, we can find a $k$-linear projection $\pi: V \to V$ onto $U$, i.e. we have that $\pi(V) = U$ and $\pi$ is the identity on $U$. We have that $V = U \oplus \ker(\pi)$, but of course, $\pi$ will not be a morphism of representations in general. The idea is to “average” $\pi$ to get another projection onto $U$ that is a morphism of representations.
Set $\pi' := \frac{1}{|G|} \sum_{g \in G} g \circ \pi \circ g^{-1}$ (here we use that $|G|$ is invertible in $k$). This will be $k$-linear again. This is a morphism of representations, as for $h \in G$, we have $h \circ \pi' = \frac{1}{|G|} \sum_{g \in G} hg \circ \pi \circ g^{-1} = \frac{1}{|G|} \sum_{g \in G} (hg) \circ \pi \circ (hg)^{-1} \circ h = \pi' \circ h$. Since $U$ is a subrepresentation and $\pi$ is the identity on $U$, $\pi'$ is also the identity on $U$ (it is crucial that we divided by $|G|$ for this step) and the image of $\pi'$ is also contained in $U$, so $\pi'$ is still a projection onto $U$.
Therefore, the kernel of $\pi'$ is a complement of $U$ and as $\pi'$ is a morphism of representations, the kernel is a subrepresentation.
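The averaging trick can be watched in action on a tiny example (my own illustration): take $G = \mathbb{Z}/2$ acting on $\mathbb{R}^2$ by swapping coordinates, and average a non-equivariant projection onto the invariant line spanned by $(1, 1)$.

```python
import numpy as np

# G = Z/2 acts on R^2 by swapping the two coordinates
s = np.array([[0.0, 1.0],
              [1.0, 0.0]])
group = [np.eye(2), s]

# a k-linear projection onto the invariant line U = span{(1, 1)}: (x, y) -> (x, x)
pi = np.array([[1.0, 0.0],
               [1.0, 0.0]])

# Maschke averaging: pi' = (1/|G|) * sum_g  g o pi o g^{-1}
pi_avg = sum(g @ pi @ np.linalg.inv(g) for g in group) / len(group)

assert np.allclose(pi_avg @ pi_avg, pi_avg)   # still a projection onto U
assert np.allclose(pi_avg @ s, s @ pi_avg)    # now a morphism of representations
print(pi_avg)   # its kernel span{(1, -1)} is the invariant complement of U
```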

Example 1.23 To show that the assumptions in Maschke’s theorem are necessary, consider the transvection representation of the additive group of $k$ on $k^2$ described in examples 1.7 and 1.19. Here $\lambda \in k$ acts via $(x, y) \mapsto (x + \lambda y, y)$. As described in example 1.19, the subspace $U$ of vectors in $k^2$ with second component $0$ is a subrepresentation.
But this subrepresentation doesn’t have a complement that is also a subrepresentation: indeed, if $v = (x, y)$ is any vector in $k^2$ such that $y \neq 0$, then $v$ and $1 \cdot v = (x + y, y)$ are linearly independent, as they are clearly not multiples of each other. Thus any subrepresentation that is not contained in $U$ is the whole of $k^2$, so $U$ doesn’t have a complement.
This serves as a counterexample in two different ways: if we take $k$ to be a finite field, it shows that the assumption that the order is invertible is necessary. If we take $k$ to be an infinite field (say of characteristic $0$), then it shows that even in characteristic $0$, the conclusion doesn’t need to hold when the group is infinite.

In this previous post, tensor products of $G$-sets were introduced and some basic properties were proved. This post is a continuation, so I’ll assume that you’re familiar with the contents of that post.

In this post, unless specified otherwise, $G$ and $H$ will denote groups. Groups will sometimes be freely identified with the corresponding one-object category. Left $G$-sets will sometimes just be referred to as $G$-sets.

After some lemmas, we will begin this post with some easy consequences of the Hom-Tensor adjunction, which is the main result of the previous post.

Lemma The category of left $G$-sets is complete and cocomplete, and the limits and colimits look like in the category of sets (i.e. the forgetful functor is continuous and cocontinuous) with the “obvious” actions from $G$. Similarly for right $G$-sets.

Proof This is not too difficult to prove directly (you can reduce the existence of (co)limits to (co)products and (co)equalizers by general nonsense), but it also follows directly from the fact that the category of left $G$-sets is the functor category $[G, \mathbf{Set}]$. The reason is that if $C$ and $D$ are categories and $D$ is (co)complete (and $C$ is small to avoid any set-theoretic trouble), then the functor category $[C, D]$ is also (co)complete and the (co)limits may be computed “pointwise”. In the case of $[G, \mathbf{Set}]$, $G$ has only one object, so the (co)limits look like they do in $\mathbf{Set}$.

Lemma If $X$ is a right $G$-set, $Y$ is a $(G, H)$-set and $Z$ is a left $H$-set, then we have a natural isomorphism $(X \otimes_G Y) \otimes_H Z \cong X \otimes_G (Y \otimes_H Z)$.

Proof The proof is the same as the proof for modules, mutatis mutandis. Use the universal property of tensor products a lot to get well-defined maps in both directions.

Lemma If $H \leq G$ is a subgroup, and we regard $G$ as a $(G, H)$-set via left and right multiplication, then $\mathrm{Hom}_G(G, -)$ is naturally isomorphic to the restriction functor $\mathrm{Res}^G_H$ (this functor takes any $G$-set, which we may think of as a group homomorphism or a functor, and restricts it to the subgroup/subcategory given by $H$.)

Proof Define a (natural) map $\Phi: \mathrm{Hom}_G(G, X) \to \mathrm{Res}^G_H(X)$ via $\Phi(f) = f(e)$. This is $H$-equivariant, because $(h \cdot f)(e) = f(h) = h \cdot f(e)$. On the other hand, given $x \in \mathrm{Res}^G_H(X)$ (which is just $X$ as a set), we can define $f_x: G \to X$ via $f_x(g) = g \cdot x$. This defines an inverse for $\Phi$.

Via the Hom-Tensor adjunction, this implies:

Corollary The restriction functor $\mathrm{Res}^G_H$ has a left adjoint $\mathrm{Ind}_H^G := G \otimes_H -$.

The notation $\mathrm{Ind}_H^G$ is chosen because we can think of this functor as an analog of the induced representation from linear representation theory, where we think of group actions as non-linear representations. (Similar to the induced representation, one can give an explicit description of $\mathrm{Ind}_H^G(X)$ after choosing coset representatives for $H$ in $G$ etc.)
In linear representation theory, the adjunction between restriction and induction is called Frobenius reciprocity, so if we wish to give our results fancy names (as mathematicians like to do), we can call this corollary “non-linear Frobenius reciprocity”.

If we take $H$ to be the trivial subgroup, we obtain a corollary of the corollary:

Corollary The forgetful functor from $G$-sets to sets has a left adjoint, the “free $G$-set functor”.

Proof If $H$ is the trivial group, then $H$-sets are the same as sets and the restriction functor is the same as the forgetful functor, so the free $G$-set functor is $G \otimes_H -$. Since $G \otimes_H -$ commutes with coproducts and $G \otimes_H \{*\} \cong G$, where $\{*\}$ is a one-point set, we can also describe this more explicitly: for a set $X$, we have $G \otimes_H X \cong \coprod_{x \in X} G$, a disjoint union of copies of $G$ on which $G$ acts by left multiplication.

We can also use the Hom-Tensor adjunction to get a description of some tensor products. Let $*$ denote a one-point set (simultaneously the trivial group), considered as a $G$-set with (necessarily) trivial actions.

Lemma For a $G$-set $X$, $\ast \otimes_G X$ is naturally isomorphic to the set of orbits $X/G$ and both are left adjoint to the functor $\mathbf{Set} \to G\text{-}\mathbf{Set}$ which endows every set with a trivial $G$-action.

Proof Let $S$ be a set and $X$ be a $G$-set. Denote by $S_{\mathrm{triv}}$ the $G$-set with $S$ as its underlying set and a trivial action. If we have any $G$-equivariant map $f : X \to S_{\mathrm{triv}}$, then $f$ must be constant on the orbits, since $f(gx) = gf(x) = f(x)$, so $f$ descends to a map of sets $X/G \to S$. Conversely, if we have any map $\bar{f} : X/G \to S$, then we can define a $G$-equivariant map $X \to S_{\mathrm{triv}}$ by setting $x \mapsto \bar{f}([x])$, where $[x]$ denotes the orbit of $x$. These maps are mutually inverse natural bijections, which shows that the “set of orbits” functor is left adjoint to $S \mapsto S_{\mathrm{triv}}$. On the other hand, we can identify $S_{\mathrm{triv}}$ with $\mathrm{Hom}(\ast, S)$ (where the action is induced from the trivial right $G$-action on $\ast$), so the left adjoint must be given by $\ast \otimes_G -$. Since adjoints are unique (by a Yoneda argument), we have a natural bijection $\ast \otimes_G X \cong X/G$.
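The orbit set in the lemma is easy to compute for finite actions. A sketch with a made-up example (the group $\mathbb{Z}/4$ acting on $\mathbb{Z}/8$ is my choice, not from the post):

```python
# Computing the orbit set X/G, which the lemma identifies with the tensor
# product of a point with X. Hypothetical example: G = Z/4 acts on X = Z/8
# by g . x = x + 2g (this is a well-defined action since 2*4 = 8).
G = range(4)
X = range(8)
act = lambda g, x: (x + 2 * g) % 8

orbits = {frozenset(act(g, x) for g in G) for x in X}
print(sorted(sorted(o) for o in orbits))  # [[0, 2, 4, 6], [1, 3, 5, 7]]
```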

The set of orbits carries some information about the $G$-set, but we can do a more careful construction which also includes $X/G$ in a natural way as part of the information.

Definition If $X$ is a $G$-set, then the action groupoid $X /\!/ G$ is the category with $\mathrm{Ob}(X /\!/ G) = X$ and $\mathrm{Hom}(x, y) = \{ g \in G \mid gx = y \}$. Composition is given by multiplication in $G$.

The fact that this is called a groupoid is not important here; one can think of that as just a name (it means that every morphism in $X /\!/ G$ is an isomorphism).
The isomorphism classes of $X /\!/ G$ correspond to the orbits $X/G$. For $x \in X$, the endomorphism group $\mathrm{Hom}(x, x)$ is the stabilizer group $G_x$. The following lemma shows how to reconstruct a $G$-set from $X /\!/ G$, assuming that we know how all the Hom-sets lie inside $G$.

Lemma (“reconstruction lemma”) If $X$ is a $G$-set, then we define the functor $F : X /\!/ G \to G\text{-}\mathbf{Set}$ with $F(x) = G$ for all $x$ and for $g \in \mathrm{Hom}(x, y)$, we define the map $F(g) : G \to G$ via $k \mapsto kg^{-1}$. Then we have $\mathrm{colim}\, F \cong X$.

Proof For $x \in X$, define a map $p_x : G \to X$ via $k \mapsto kx$. This defines a cocone over $F$, so we get an induced map $\varphi : \mathrm{colim}\, F \to X$. The colimit can be described explicitly as $\bigl( \coprod_{x \in X} G \bigr)/\sim$, where the equivalence relation is generated by $(k, x) \sim (kg^{-1}, gx)$. To see that $\varphi$ is surjective, note that $x$ is the image of $(e, x)$. To see that $\varphi$ is injective, suppose $(k, x)$ and $(k', x')$ are sent to the same element, i.e. $kx = k'x'$; since $(k, x) \sim (e, kx)$, we may assume $k = k' = e$. Then $kx = k'x'$ implies that $x = x'$, so the two elements which map to the same element are already equal in the colimit.

The previous lemma can be thought of as a generalization of the orbit-stabilizer theorem. (The proof has strong similarities as well.) For illustration, let us derive the usual orbit-stabilizer theorem from it.

Lemma Let $X$ be a $G$-set and $x \in X$, then we have an isomorphism of $G$-sets $Gx \cong G/G_x$, where $Gx$ is the orbit of $x$ (with the restricted action) and $G/G_x$ is the coset space of the stabilizer subgroup $G_x$ with left multiplication as the action.

Proof We may replace $X$ with $Gx$ so that we have a transitive action. Then the previous lemma gives us an isomorphism $X \cong \mathrm{colim}\, F$.
Consider the one-object category with object $x$ and morphisms $G_x$. This can be identified with a full subcategory of $X /\!/ G$ corresponding to the object $x$. Because we have a transitive action, all objects in $X /\!/ G$ are isomorphic (isomorphism classes correspond to orbits), so the inclusion functor is also essentially surjective, so it is a category equivalence.
We may thus replace the colimit over $X /\!/ G$ by the colimit over this one-object category. As it has just one object, this colimit is a colimit over a bunch of parallel morphisms $G \to G$, so it is the simultaneous coequalizer of these morphisms. We know how to compute coequalizers in $G\text{-}\mathbf{Set}$: the same way that we compute coequalizers in $\mathbf{Set}$. So we have the family of maps $G \to G$, $k \mapsto kh^{-1}$, where $h$ varies over $G_x$. The coequalizer is the quotient $G/\sim$, where $\sim$ is generated by $k \sim kh^{-1}$ for each $k \in G$ and $h \in G_x$. But this is exactly the equivalence relation that defines $G/G_x$.
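As a numeric sanity check of the orbit-stabilizer theorem (my own illustration, not part of the post), one can count orbit and stabilizer for the natural action of $S_3$ on $\{0, 1, 2\}$:

```python
from itertools import permutations

# Orbit-stabilizer check |Gx| = |G| / |G_x| for S_3 acting on {0, 1, 2}.
G = list(permutations(range(3)))          # S_3, permutations as tuples
act = lambda g, x: g[x]

x = 0
orbit = {act(g, x) for g in G}            # all of {0, 1, 2}
stab = [g for g in G if act(g, x) == x]   # the permutations fixing 0
assert len(orbit) * len(stab) == len(G)
print(len(orbit), len(stab))  # 3 2
```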

There is another case where the colimit takes a simple form after replacing $X /\!/ G$ with an equivalent category.

Lemma A $G$-set $X$ is free in the sense that it is in the essential image of the “free $G$-set functor” (or equivalently, it is a coproduct of copies of $G$ with the standard action) iff the action of $G$ on $X$ is free in the sense that $gx = x$ for some $x \in X$ implies $g = e$.

Proof It’s clear that if we have a disjoint union $X = \coprod_i G$, then no element of $G$ other than $e$ can fix an element in $X$. For the other direction, suppose that $gx = x$ implies $g = e$. This implies that the morphism sets in the action groupoid are really small: suppose $g, g' \in \mathrm{Hom}(x, y)$, so that $gx = y = g'x$, which implies that $g^{-1}g'x = x$, so $g^{-1}g' = e$ by assumption, thus $g = g'$. This means that for any pair of objects in $X /\!/ G$, there is at most one morphism between them. So if we consider the set of orbits $X/G$ as a discrete category (i.e. the only morphisms are the identities), then if we take a representative for each orbit, this defines an inclusion of categories $X/G \to X /\!/ G$. As elements in $X/G$ represent isomorphism classes in $X /\!/ G$, this inclusion is always essentially surjective. By our computations of the Hom-sets, it is also fully faithful if the action of $G$ on $X$ is free. So if we apply the “reconstruction lemma”, we get $X \cong \mathrm{colim}_{X/G}\, F$. But a colimit over a discrete category is just a coproduct, so this is isomorphic to $\coprod_{X/G} G$, which shows that $X$ is free.
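The freeness criterion of the lemma is directly checkable for finite actions. A sketch of my own, with $S_3$ as a made-up example:

```python
from itertools import permutations

# The lemma's criterion: an action is free iff no non-identity element fixes a point.
def is_free(G, X, act, e):
    return all(act(g, x) != x for g in G if g != e for x in X)

S3 = list(permutations(range(3)))
e = (0, 1, 2)
comp = lambda p, q: tuple(p[q[i]] for i in range(3))   # composition p after q

# S_3 acting on itself by left multiplication is free (one copy of G per orbit) ...
assert is_free(S3, S3, comp, e)
# ... but the natural action of S_3 on {0, 1, 2} is not: transpositions fix points.
assert not is_free(S3, list(range(3)), lambda g, x: g[x], e)
print("ok")
```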

After some further lemmas, we will come to the main result of this post, which is also an application of the reconstruction lemma.

In the previous post, I described $G$-sets in different ways, among them as functors $G \to \mathbf{Set}$, but I didn’t do the same for $(G,H)$-sets. The following lemma remedies this deficiency.

Lemma $(G,H)$-sets may be identified with left $(G \times H^{\mathrm{op}})$-sets or with functors $G \times H^{\mathrm{op}} \to \mathbf{Set}$ or with functors $H^{\mathrm{op}} \to G\text{-}\mathbf{Set}$. In other words, we have equivalences of categories $G\text{-}\mathbf{Set}\text{-}H \simeq (G \times H^{\mathrm{op}})\text{-}\mathbf{Set} \simeq \mathrm{Fun}(G \times H^{\mathrm{op}}, \mathbf{Set}) \simeq \mathrm{Fun}(H^{\mathrm{op}}, G\text{-}\mathbf{Set})$.

The proof of this lemma is a lot of rewriting of definitions, not more difficult than proving the corresponding statements for one-sided -sets.

This lemma has a useful consequence, which one could also verify by hand:

Observation If $F : G\text{-}\mathbf{Set} \to H\text{-}\mathbf{Set}$ is a functor and $X$ is a $(G,K)$-set, then $F(X)$ is an $(H,K)$-set in a “natural” way.

Proof Think of $X$ as a functor $K^{\mathrm{op}} \to G\text{-}\mathbf{Set}$; composing with $F$ gives us a functor $K^{\mathrm{op}} \to H\text{-}\mathbf{Set}$, which we may also think of as an $(H,K)$-set.
More explicitly, the action of $K$ on $F(X)$ can be described as follows: for $k \in K$, the right-multiplication map $X \to X$, $x \mapsto xk$, is left $G$-equivariant, so it induces a left $H$-equivariant map $F(X) \to F(X)$; we can define the action of $k$ on $F(X)$ via this map.

The following theorem is an analog of the classical Eilenberg-Watts theorem from homological algebra, which describes colimit-preserving functors between module categories as tensor products with a bimodule.

Theorem (Eilenberg-Watts theorem for group actions) Every colimit-preserving functor $F : G\text{-}\mathbf{Set} \to H\text{-}\mathbf{Set}$ is naturally equivalent to $B \otimes_G -$ for an $(H,G)$-set $B$. One can explicitly choose $B = F(G)$ (with the $(H,G)$-set structure from the previous observation, as $G$ is a $(G,G)$-set.)

Proof Let be a -set and be a -set, then we have a natural bijection
Using the reconstruction lemma, we get
For every , , so via the map . We need to consider how this identification behaves under the morphisms involved in the colimit. For , we have the map , this induces a map given by . If we make the identification described above by evaluating both sides at , we get . Using the definition of the -action on the Hom-set , this left multiplication translates to right multiplication on . Because of the construction of the right -action on , this right multiplication is the map that is induced from right multiplication . We may summarize this computation by stating that
Using the assumption that preserves colimits, we get where we used the reconstruction lemma again in the last step.
We conclude by the Yoneda lemma.

This theorem (like the classical Eilenberg-Watts-theorem) is remarkable not only because it gives a concrete description of every colimit-preserving functor between certain categories, but also because it shows that such functors are completely determined by the image of one object and how it acts on the endomorphisms of that object (which are precisely the right-multiplications.)

It’s natural to ask at this point when two functors of the form $B \otimes_G -$ and $B' \otimes_G -$ for $(H,G)$-sets $B$ and $B'$ are naturally isomorphic. It’s not difficult to see that it is sufficient that $B$ and $B'$ are isomorphic as $(H,G)$-sets. The following lemma shows that this is also necessary, among other things.

Lemma For $(H,G)$-sets $B$ and $B'$, every natural transformation $B \otimes_G - \Rightarrow B' \otimes_G -$ is induced by a unique $(H,G)$-equivariant map $B \to B'$.

Proof Assume we have a natural transformation $\eta : B \otimes_G - \Rightarrow B' \otimes_G -$; then we have in particular a left $H$-equivariant map $\eta_G : B \otimes_G G \to B' \otimes_G G$. We have $B \otimes_G G \cong B$ and $B' \otimes_G G \cong B'$, so this gives us an $H$-equivariant map $B \to B'$ which I call $f$. Clearly $f$ is uniquely determined by this construction. For a fixed $g \in G$, right multiplication by $g$ defines a left $G$-equivariant map $G \to G$. Under the isomorphism $B \otimes_G G \cong B$, these maps describe the right action of $G$ on $B$. Naturality with respect to these maps implies that $f$ is right $G$-equivariant.

This lemma allows a reformulation of the previous theorem.

Theorem (Eilenberg-Watts theorem for group actions, alternative version)
The following bicategories are equivalent:
– The bicategory where the objects are groups, 1-morphisms between two groups $G$ and $H$ are $(H,G)$-sets, where the composition of 1-morphisms is given by taking tensor products and 2-morphisms between two $(H,G)$-sets are given by $(H,G)$-equivariant maps.
– The 2-subcategory of the 2-category of categories where the objects are all the categories $G\text{-}\mathbf{Set}$ for groups $G$, 1-morphisms are colimit-preserving functors and 2-morphisms are natural transformations between such functors.

This concludes my second blog post. If you want, please share or leave comments below.

Suppose $R$ is a ring, $M$ is a right $R$-module and $N$ is a left $R$-module; then we can form the tensor product $M \otimes_R N$ that we all know and love. If we consider that a module is just an abelian group together with an action of a ring, one might ask the question: can we imitate this construction for group actions?

It turns out that we can; this is what I’ll investigate in this post.

Recall that for a fixed group $G$, a left $G$-set is a set $X$ together with a map $G \times X \to X$, $(g, x) \mapsto gx$, that satisfies $ex = x$ and $(gh)x = g(hx)$. An equivalent formulation is that a left $G$-set is a set $X$ together with a group homomorphism $G \to \mathrm{Sym}(X)$, where $\mathrm{Sym}(X)$ denotes the group of bijections $X \to X$. Yet another way to phrase this definition is that a left $G$-set is a functor from $G$ (regarded as a one-object category) to the category of sets.
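The two axioms of a left $G$-set can be checked mechanically for small examples. A sketch of my own, using the regular action of $\mathbb{Z}/4$ on itself as a made-up example:

```python
# The two axioms of a left G-set, verified exhaustively for a small example:
# G = Z/4 acting on X = Z/4 by addition (the regular action).
G = range(4)
X = range(4)
mul = lambda g, h: (g + h) % 4   # group law
e = 0                            # identity element
act = lambda g, x: (g + x) % 4   # the action map G x X -> X

assert all(act(e, x) == x for x in X)                 # e . x = x
assert all(act(mul(g, h), x) == act(g, act(h, x))     # (gh) . x = g . (h . x)
           for g in G for h in G for x in X)
print("axioms hold")
```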

There’s an obvious choice of morphisms for left $G$-sets $X$ and $Y$, namely maps $f : X \to Y$ that satisfy $f(gx) = gf(x)$, traditionally called $G$-equivariant maps. If you take the definition via functors, then these are exactly the natural transformations. It follows that the left $G$-sets form a category, which we will denote by $G\text{-}\mathbf{Set}$.

In a similar way, one defines right $G$-sets as sets $X$ together with a map $X \times G \to X$, $(x, g) \mapsto xg$, that satisfies $xe = x$ and $x(gh) = (xg)h$; these can also be thought of as contravariant functors from $G$ to $\mathbf{Set}$ or equivalently as left $G^{\mathrm{op}}$-sets, where $G^{\mathrm{op}}$ denotes the opposite group of $G$. The category of right $G$-sets will be denoted by $\mathbf{Set}\text{-}G$.

If we have two groups $G$ and $H$, then a $(G,H)$-set is a set $X$ that is simultaneously a left $G$-set and a right $H$-set, such that the actions are compatible in the sense that $(gx)h = g(xh)$. We get the category $G\text{-}\mathbf{Set}\text{-}H$ of $(G,H)$-sets, where the morphisms are maps that are both $G$- and $H$-equivariant. Note that if we take $H$ or $G$ to be the trivial group, then $(G,H)$-sets are equivalent to left $G$-sets or right $H$-sets, respectively; thus we may always assume to deal with $(G,H)$-sets, and this will cover all three cases. Also note that if we take both $G$ and $H$ to be the trivial group, then $(G,H)$-sets are just sets.
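A canonical example of a biset is a group acting on itself by left and right multiplication, where the compatibility condition is just associativity. A quick exhaustive check of my own, with $S_3$ as the example:

```python
from itertools import permutations

# G acting on itself by left and right multiplication is a (G,G)-set:
# the compatibility (g . x) . h = g . (x . h) is associativity of the group law.
S3 = list(permutations(range(3)))
comp = lambda p, q: tuple(p[q[i]] for i in range(3))   # composition p after q

assert all(comp(comp(g, x), h) == comp(g, comp(x, h))
           for g in S3 for x in S3 for h in S3)
print("compatible")
```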

We are now ready to begin our renarration of the story of tensor products, where groups play the role of rings, sets play the role of abelian groups, and left and right $G$-sets play the role of modules.

Since bimodules are important in the theory of tensor products for modules, we will study their analogs, the $(G,H)$-sets, first.

In the following, $G$, $H$ and $K$ will denote groups.

Lemma If $X$ is a left $G$-set and $Y$ is a left $H$-set, then $\mathrm{Hom}(X, Y)$ is an $(H,G)$-set with the actions defined by $(hf)(x) = h(f(x))$ and $(fg)(x) = f(gx)$. If $X$ is a right $G$-set and $Y$ is a right $H$-set, then $\mathrm{Hom}(X, Y)$ is a $(G,H)$-set in the analogous way.

The proof is a routine verification.

As a special case, when $Y$ is just a set and $X$ is a $G$-set, then $\mathrm{Hom}(X, Y)$ is a $G$-set, with the action on the opposite side. If $X$ is a $(G,H)$-set, then the two actions we get this way are compatible, so that $\mathrm{Hom}(X, Y)$ is an $(H,G)$-set.

We can copy the following definition almost verbatim from the case for modules.

Definition If $X$ is a $(G,H)$-set, $Y$ is an $(H,K)$-set and $Z$ is a $(G,K)$-set, then a map $f : X \times Y \to Z$ is called $H$-balanced and $(G,K)$-equivariant if $f(xh, y) = f(x, hy)$ and $f(gx, yk) = gf(x, y)k$. We denote the set of all such maps by $\mathrm{Hom}^{H\text{-bal}}_{G,K}(X \times Y, Z)$. If $G$ and $K$ are the trivial group, we drop them from the notation and just write $\mathrm{Hom}^{H\text{-bal}}(X \times Y, Z)$.

Lemma If $X$ is a $(G,H)$-set and $Z$ is any set, then $\mathrm{Hom}(X, Z)$ is an $(H,G)$-set with the actions defined by $(hf)(x) = f(xh)$ and $(fg)(x) = f(gx)$.

This is again a routine verification, similar to the previous lemma.

We now come to the usual “Currying” argument.

Lemma If $X$ is a $(G,H)$-set and $Y$ is an $(H,K)$-set and $Z$ is any set, then there is a natural isomorphism of $(K,G)$-sets $\mathrm{Hom}^{H\text{-bal}}(X \times Y, Z) \cong \mathrm{Hom}_H(Y, \mathrm{Hom}(X, Z))$.

Proof We define the “Currying map” via , where . Or more compactly
Let us check that for the map is -equivariant, for is defined as . Because is -balanced, this equals , so .
Now we check that is – and -equivariant. Let , then for all , we have , on the other hand we have , this shows that .
Let , then for all , we have , on the other hand , so .
So we have shown that is indeed a well-defined -equivariant map .
To see that is bijective, note that an inverse is given by the “Uncurry”-map , where .
We omit the verification that is natural in , and .
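When all groups are trivial, Currying reduces to the familiar bijection between maps out of a product and maps into a Hom-set. A quick cardinality sanity check (my own, with arbitrarily chosen finite sizes):

```python
# Cardinality sanity check for plain-set Currying:
# maps X x Y -> Z correspond to maps Y -> Hom(X, Z), and indeed
# |Z|^(|X|*|Y|) == (|Z|^|X|)^|Y| for any finite sizes.
nX, nY, nZ = 2, 3, 4
lhs = nZ ** (nX * nY)      # number of maps X x Y -> Z
rhs = (nZ ** nX) ** nY     # number of maps Y -> Hom(X, Z)
assert lhs == rhs
print(lhs)  # 4096
```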

We also give the following variants of the Currying isomorphism:

Lemma If is a -set, is a -set and is a -set, then we have natural bijections
and

Proof The proof is very similar to the last one, so we omit some steps.
For the first bijection, let us just show that the map , where is well-defined.
For a fixed , we have , so is -equivariant.
Let , then .
So we get , thus is -equivariant.
For the second bijection, we use the map , where . The verification that this is a well-defined bijection is very similar to the previous computations.

We now come to the main definition of this post.

Definition Let $X$ be a right $H$-set and $Y$ be a left $H$-set; then a tensor product of $X$ and $Y$ is a set $X \otimes_H Y$ together with an $H$-balanced map $\tau : X \times Y \to X \otimes_H Y$ that satisfies the following universal property: for any set $Z$ and $H$-balanced map $f : X \times Y \to Z$, there is a unique map $\bar{f} : X \otimes_H Y \to Z$ such that $f = \bar{f} \circ \tau$.

Lemma In the situation of the previous definition, the tensor product exists and if $(T', \tau')$ is a second tensor product, then there exists a unique bijection $\varphi : X \otimes_H Y \to T'$ such that $\tau' = \varphi \circ \tau$.

Proof We first prove uniqueness. Suppose $(T, \tau)$ and $(T', \tau')$ are tensor products; then because of the universal property of $T$, we get a unique map $\varphi : T \to T'$ such that $\tau' = \varphi \circ \tau$, and due to the universal property of $T'$ we get a unique map $\psi : T' \to T$ such that $\tau = \psi \circ \tau'$. Then $\psi \circ \varphi$ satisfies $(\psi \circ \varphi) \circ \tau = \tau$, but by the universal property of $T$, there is a unique map that satisfies this. We have just computed that $\psi \circ \varphi$ satisfies this, but $\mathrm{id}_T$ does, too. Thus $\psi \circ \varphi = \mathrm{id}_T$. The proof that $\varphi \circ \psi = \mathrm{id}_{T'}$ follows analogously.

Now we show existence. Consider the equivalence relation $\sim$ on $X \times Y$ generated by $(xh, y) \sim (x, hy)$. We set $X \otimes_H Y := (X \times Y)/\sim$ and let $\tau : X \times Y \to X \otimes_H Y$ be the quotient map. By construction, $\tau$ is $H$-balanced.
We denote the image $\tau(x, y)$ by $x \otimes y$. We have the relation $xh \otimes y = x \otimes hy$. (Note that unlike in the case of modules, $\tau$ is surjective. So every element is an “elementary tensor”.)
Suppose $Z$ is a set and $f : X \times Y \to Z$ is $H$-balanced; then the map $\bar{f} : x \otimes y \mapsto f(x, y)$ is well-defined and it is the unique map that satisfies $f = \bar{f} \circ \tau$. (This follows just from the universal property of a quotient set.)
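For finite sets, this quotient construction can be carried out directly by generating the equivalence relation with union-find. A sketch of my own; the specific sets and actions below are a made-up example, chosen so that the second factor is the group itself with left multiplication, where the tensor product should recover the first factor (6 classes):

```python
# Computing a tensor product over H for finite sets: quotient X x Y by the
# equivalence relation generated by (x . h, y) ~ (x, h . y), via union-find.
H = range(3)                         # H = Z/3
X = range(6)                         # right H-set: x . h = x + 2h mod 6
Y = range(3)                         # Y = H itself, with left multiplication
ract = lambda x, h: (x + 2 * h) % 6
lact = lambda h, y: (h + y) % 3

parent = {(x, y): (x, y) for x in X for y in Y}
def find(p):
    while parent[p] != p:
        p = parent[p]
    return p

for x in X:
    for y in Y:
        for h in H:
            # identify the pair (x . h, y) with the pair (x, h . y)
            parent[find((ract(x, h), y))] = find((x, lact(h, y)))

classes = {find(p) for p in parent}
print(len(classes))  # 6: tensoring with H itself recovers X
```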
In the case of modules, when we have an $(R,S)$-bimodule $M$ and an $(S,T)$-bimodule $N$, then $M \otimes_S N$ is an $(R,T)$-bimodule. We now show the analogous result for $G$-sets.

Lemma If $X$ is a $(G,H)$-set and $Y$ is an $(H,K)$-set, then $X \otimes_H Y$ is a $(G,K)$-set.

Proof Let $g \in G$; then the map $X \times Y \to X \otimes_H Y$, $(x, y) \mapsto gx \otimes y$, is $H$-balanced, because $X$ is a $(G,H)$-set, so this map descends to a well-defined map $X \otimes_H Y \to X \otimes_H Y$, which shows that $g(x \otimes y) := gx \otimes y$ is well-defined. It’s clear that this makes $X \otimes_H Y$ into a left $G$-set. In the same manner, we get a right action of $K$ on $X \otimes_H Y$ given by $(x \otimes y)k := x \otimes yk$. These actions are compatible, since $(g(x \otimes y))k = gx \otimes yk = g((x \otimes y)k)$.

One of the basic properties of the tensor product of modules is that it defines a functor between module categories; this also works for $G$-sets.

Lemma/Definition The tensor product can be made into a bifunctor $- \otimes_H - : G\text{-}\mathbf{Set}\text{-}H \times H\text{-}\mathbf{Set}\text{-}K \to G\text{-}\mathbf{Set}\text{-}K$.

Proof Suppose $X$ and $X'$ are $(G,H)$-sets and $f : X \to X'$ is a $(G,H)$-equivariant map, and $Y$ and $Y'$ are $(H,K)$-sets and $g : Y \to Y'$ is an $(H,K)$-equivariant map. Consider the map $X \times Y \to X' \otimes_H Y'$, $(x, y) \mapsto f(x) \otimes g(y)$. This is $H$-balanced, because $f(xh) \otimes g(y) = f(x)h \otimes g(y) = f(x) \otimes hg(y) = f(x) \otimes g(hy)$.
So we get a well-defined map $X \otimes_H Y \to X' \otimes_H Y'$, which we will denote by $f \otimes g$. The map $f \otimes g$ is $(G,K)$-equivariant because $f$ is left $G$-equivariant and $g$ is right $K$-equivariant and the $(G,K)$-set structure on a tensor product is defined by acting with $G$ on the left factor and with $K$ on the right factor.
If $X''$ is another $(G,H)$-set and $Y''$ is another $(H,K)$-set and $f' : X' \to X''$, $g' : Y' \to Y''$ are morphisms of the respective type, then both $(f' \otimes g') \circ (f \otimes g)$ and $(f' \circ f) \otimes (g' \circ g)$ are maps which send $x \otimes y$ to $f'(f(x)) \otimes g'(g(y))$, thus by the uniqueness part of the universal property, they must be equal. That $\mathrm{id}_X \otimes \mathrm{id}_Y = \mathrm{id}_{X \otimes_H Y}$ is clear from the definition.

Much of the utility of tensor products of modules lies in the Hom-Tensor-adjunction, which also has an analog for group actions. There’s not much left to do to prove it.

Theorem For a $(G,H)$-set $X$, the functor $X \otimes_H - : H\text{-}\mathbf{Set} \to G\text{-}\mathbf{Set}$ is left-adjoint to the $\mathrm{Hom}$-functor $\mathrm{Hom}_G(X, -) : G\text{-}\mathbf{Set} \to H\text{-}\mathbf{Set}$.

Proof The universal property of the tensor product can be reformulated in the form that the map $\mathrm{Hom}(X \otimes_H Y, Z) \to \mathrm{Hom}^{H\text{-bal}}(X \times Y, Z)$, $\varphi \mapsto \varphi \circ \tau$, is a bijection. It’s not difficult to check that this map is natural in $Y$ and $Z$ and that if $Z$ is a $G$-set, then it restricts to a bijection on the $G$-equivariant maps. If we compose this bijection with the Currying bijection, we get a natural bijection $\mathrm{Hom}_G(X \otimes_H Y, Z) \cong \mathrm{Hom}_H(Y, \mathrm{Hom}_G(X, Z))$.

This concludes this first post on tensor products of $G$-sets; I will investigate more properties of this construction in future posts. Feel free to leave comments below.