Consider a finite group. Tannaka-Krein duality allows one to reconstruct the group from the
category of its representations together with additional structure on it (a tensor structure plus a fiber functor). Trying to wrap my mind around the details, I sometimes get the feeling that this is really "abstract nonsense" which hides something under the carpet. So the question is about how to reach a more down-to-earth level of understanding.

A finite group is something very explicit: it is easy to "explain" it to a computer.

On the other hand, the Tannaka-Krein reconstruction requires as its input a "category + fiber functor + monoidal structure". This does not seem like a very explicit thing, and it is "difficult" to explain to a computer; more precisely, this input datum seems to be infinite. That does not seem satisfactory. So my question:

Question: Can one describe the input of the Tannaka-Krein reconstruction theorem (for a finite group) as some FINITE datum?

If yes, we can feed this datum to a computer and reconstruct the group.
If this were impossible, that would be quite strange.

1 Answer

The infinitude of the input data is deceptive. For example, an infinite finitely presented group may appear to be infinite input data but it's fully specified by a finite alphabet and a finite set of words in it. The same sort of thing happens here; everything is "finitely presented" in a suitable sense.

First, let's recall the statement of Tannaka-Krein duality here so we can see exactly how the group arises. Let $G$ be a finite group and let $\text{Rep}(G)$ be its category of representations (say over $\mathbb{C}$). $\text{Rep}(G)$ is equipped with a monoidal structure in the form of the tensor product of representations and with a forgetful functor $F : \text{Rep}(G) \to \text{Vect}$. We are interested in studying natural automorphisms $\eta : F \to F$ compatible with the tensor product in the sense that the obvious diagram (which can be found on page 8 here for completeness) commutes. Every element of $G$ gives an element of this group of natural automorphisms, which we'll denote $\text{Aut}^{\otimes}(F)$.

Theorem: The natural map $G \to \text{Aut}^{\otimes}(F)$ is an isomorphism.

So our reconstruction begins first with the group $\text{Aut}(F)$ of natural automorphisms of $F$, then cuts out a subgroup satisfying some extra compatibility conditions. Let's start with figuring out what data we need to describe $\text{Aut}(F)$.

Explicitly, an element of $\text{Aut}(F)$ is a family $\eta_V : F(V) \to F(V)$ of automorphisms of the underlying vector spaces of all representations of $G$ which is compatible with all morphisms between representations. There is a universal such representation generated by a single element, namely $\mathbb{C}[G]$ regarded as a representation via left multiplication, and compatibility with all morphisms $\mathbb{C}[G] \to V$ implies that any such natural transformation $\eta$ is completely determined by what it does to $\mathbb{C}[G]$. On $\mathbb{C}[G]$ any such natural transformation needs to be compatible with right multiplication, and so must be left multiplication by some invertible element of $\mathbb{C}[G]$. Conversely any such element gives an element of $\text{Aut}(F)$. Hence

$$\text{Aut}(F) \cong \mathbb{C}[G]^{\times} \cong \prod_i \text{GL}(V_i),$$

where the $V_i$ are the irreducible representations of $G$. (Secretly we are using the Yoneda lemma, but I wanted to be completely explicit.) This reflects the fact that the data of the category and the fiber functor is equivalent to the data of the number and the dimensions of the irreducible representations respectively. So we can use these as our initial data:

Data 1, 2: The number $n$ of irreducible representations and their dimensions $d_i$.
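
As a concrete instance, for $G = S_3$ this initial data is tiny: three irreducibles of dimensions $1, 1, 2$. A minimal sketch (the encoding is my own, purely illustrative):

```python
# Data 1, 2 for G = S_3: three irreducibles (trivial, sign, standard 2-dim).
n = 3
d = [1, 1, 2]

# Sanity check: the squares of the dimensions sum to |G|.
sum_sq = sum(di**2 for di in d)
print(sum_sq)   # -> 6
```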

This step of the reconstruction reflects a more general fact, namely that if $R$ is a $k$-algebra, $\text{Mod}(R)$ the category of left $R$-modules, and $F : \text{Mod}(R) \to \text{Vect}$ the forgetful functor, then the natural endomorphisms of $F$ are canonically isomorphic to $R$ (as a $k$-algebra), which again follows from the Yoneda lemma.
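
This fact can even be checked numerically in a small case: the matrices commuting with all right-multiplication operators on $\mathbb{C}[S_3]$ are exactly the left multiplications, a space of dimension $|G| = 6$. A numpy sketch, with $S_3$ encoded as permutations of $\{0,1,2\}$ (encoding and names are my own):

```python
import numpy as np
from itertools import permutations

# Elements of S_3 as permutations of {0, 1, 2}
G = list(permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}

def mul(g, h):
    """Composition (g*h)(x) = g(h(x))."""
    return tuple(g[h[x]] for x in range(3))

n = len(G)  # |G| = 6

# Right-multiplication operators on C[G]: e_h -> e_{h*g}
R = []
for g in G:
    M = np.zeros((n, n))
    for h in G:
        M[idx[mul(h, g)], idx[h]] = 1.0
    R.append(M)

# Solve X R_g = R_g X for all g: stack the commutator maps
# and compute the dimension of their common kernel.
rows = []
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        rows.append(np.concatenate([(E @ M - M @ E).ravel() for M in R]))
A = np.array(rows)  # each row: all commutators of one basis matrix E_ij
commutant_dim = n * n - np.linalg.matrix_rank(A.T)
print(commutant_dim)   # -> 6, the dimension of C[G] acting by left multiplication
```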

The tricky part is how to describe the influence of the tensor product. Explicitly, let $\eta : F \to F$ be an element of $\text{Aut}(F)$ again. Then there are two ways to use $\eta$ to write down an automorphism of the underlying vector space of a tensor product $F(V \otimes W)$:

On the one hand, there are two maps $\eta_V : F(V) \to F(V)$ and $\eta_W : F(W) \to F(W)$, and we can tensor them to get a map $\eta_V \otimes \eta_W : F(V) \otimes F(W) \to F(V) \otimes F(W)$. Since $F$ is a tensor functor this gives a map $F(V \otimes W) \to F(V \otimes W)$.

On the other hand, there is a map $\eta_{V \otimes W} : F(V \otimes W) \to F(V \otimes W)$.

$\eta$ lies in $\text{Aut}^{\otimes}(F)$ if and only if these two maps are equal. More explicitly, if $\eta$ is left multiplication by some element $\sum_g c_g g \in \mathbb{C}[G]$, then the first map is

$$v \otimes w \mapsto \sum_{g, h} c_g c_h \, gv \otimes hw,$$

while the second is

$$v \otimes w \mapsto \sum_g c_g \, gv \otimes gw.$$

(Comparing the two on $V = W = \mathbb{C}[G]$ shows that they agree precisely when $c_g c_h = 0$ for $g \neq h$ and $c_g^2 = c_g$, i.e. precisely when $\eta$ is left multiplication by a single group element; this is the mechanism behind the theorem.)

So how do we describe this restriction to a computer? The key point is that since everything in sight is compatible with direct sums it suffices to restrict our attention to the irreducible representations $V_i$, so we only need to compare the maps $\eta_{V_i} \otimes \eta_{V_j}$ and $\eta_{V_i \otimes V_j}$ for all $i, j$. This means that we need to describe

How knowing $\eta_{V_i}$ for all $i$ determines $\eta_{V_i \otimes V_j}$ for all $i, j$.

This requires that we know first of all the decompositions

$$V_i \otimes V_j \cong \bigoplus_k m_{ijk} V_k$$

of the tensor products of the irreducibles into irreducibles. So this is our third piece of data:

Data 3: The multiplicities $m_{ijk}$.
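
For $G = S_3$ these multiplicities can be computed directly from the character table via $m_{ijk} = \langle \chi_i \chi_j, \chi_k \rangle$. A sketch (the class ordering and labels are my own; the characters of $S_3$ are real, so no conjugation is needed):

```python
import numpy as np

# Character table of S_3; columns are the conjugacy classes
# {e}, {transpositions}, {3-cycles}, of sizes 1, 3, 2.
sizes = np.array([1, 3, 2])
chi = np.array([
    [1,  1,  1],   # trivial
    [1, -1,  1],   # sign
    [2,  0, -1],   # standard 2-dim
])
order = sizes.sum()  # |S_3| = 6

def m(i, j, k):
    """Multiplicity of V_k in V_i (x) V_j, i.e. <chi_i chi_j, chi_k>."""
    return int(round(np.sum(sizes * chi[i] * chi[j] * chi[k]) / order))

# standard (x) standard decomposes as trivial + sign + standard:
print([m(2, 2, k) for k in range(3)])   # -> [1, 1, 1]
```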

These multiplicities are equivalent to the data of the character table of $G$, and in particular Data 1, 2 can be computed from them, so in some sense Data 1, 2 are redundant. But we already know that the character table is not enough to determine the group (for example, $D_4$ and $Q_8$ have the same character table). The multiplicities only tell us about $\eta_{V_i \otimes V_j}$, but not about $\eta_{V_i} \otimes \eta_{V_j}$.

To get our last piece of data, let's think about how a computer would represent $\eta$. Specifying linear transformations $\eta_{V_i} \in \text{GL}(V_i)$ requires writing down a basis of each $V_i$. These bases give rise to two different bases on the tensor products $V_i \otimes V_j$: on the one hand the tensor product basis, and on the other hand the basis coming from the decomposition into irreducibles $\bigoplus_k m_{ijk} V_k$. So the final piece of data we need is the identification between these:

Data 4: For every pair $i, j$, the $(d_i d_j) \times (d_i d_j)$ transition matrix from the tensor product basis to the decomposition basis of $V_i \otimes V_j$.

(This is essentially the data of the identifications $F(V \otimes W) \cong F(V) \otimes F(W)$ making $F$ a tensor functor: unlike in the case of maps between monoids, being a tensor functor is a structure, not a property, because these maps are required to satisfy coherence conditions.)

This is the data we need to write $\eta_{V_i} \otimes \eta_{V_j}$ and $\eta_{V_i \otimes V_j}$ as matrices with respect to the same basis. These transition matrices should be definable over a splitting field of $G$ at worst, so this really is finite data.
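
For $G = S_3$ one can compute such a transition matrix explicitly for $V_{\text{std}} \otimes V_{\text{std}}$ using the isotypic projectors $p_k = \frac{d_k}{|G|} \sum_g \chi_k(g)\, \rho(g) \otimes \rho(g)$ (the characters are real, so no conjugation appears). A numpy sketch, with my own choice of bases:

```python
import numpy as np

# The standard 2-dim representation of S_3 as symmetries of a triangle.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])           # rotation of order 3
f = np.array([[1.0, 0.0], [0.0, -1.0]])   # reflection of order 2
reps = [np.linalg.matrix_power(r, i) @ np.linalg.matrix_power(f, j)
        for i in range(3) for j in range(2)]  # the 6 group elements

# Characters evaluated on a representing matrix (all real for S_3).
chars = {'triv': lambda M: 1.0,
         'sgn':  lambda M: np.linalg.det(M),
         'std':  lambda M: np.trace(M)}
dims = {'triv': 1, 'sgn': 1, 'std': 2}

# Isotypic projectors on V (x) V and a basis of each image.
big = [np.kron(M, M) for M in reps]
blocks = []
for name in ('triv', 'sgn', 'std'):
    p = (dims[name] / 6) * sum(chars[name](M) * B for M, B in zip(reps, big))
    vals, vecs = np.linalg.eigh(p)        # p is a real symmetric projector
    blocks.append(vecs[:, vals > 0.5])    # eigenvalue-1 eigenvectors span the image

# Columns: the decomposition basis expressed in the tensor-product basis.
T = np.hstack(blocks)
print(T.shape, np.allclose(T.T @ T, np.eye(4)))   # -> (4, 4) True
```

The decomposition $V_{\text{std}} \otimes V_{\text{std}} \cong \text{triv} \oplus \text{sgn} \oplus V_{\text{std}}$ is visible in the block sizes $1, 1, 2$.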

Data 4 is a little different from the others because it's the only data that we need to make some choices to specify. The data is acted on by all possible changes of bases, hence by $\prod \text{GL}(V_i)$, and the invariant part of the data is the orbit of what's written above under this action, but I don't know an easy description of these orbits.
– Qiaochu Yuan, Jan 26 '14 at 21:14

I should clarify what the computer actually does from here. Data 4 can be used to write the condition $\eta_{V_i} \otimes \eta_{V_j} = \eta_{V_i \otimes V_j}$ as a collection of polynomial (in fact quadratic) conditions on the entries of an element of $\prod \text{GL}(V_i)$. The computer needs to solve this system of polynomial equations, and the end result describes $G$ as a finite subgroup of $\prod \text{GL}(V_i)$.
– Qiaochu Yuan, Jan 26 '14 at 22:07
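
As a toy instance of this last step, take the Tannakian data of a group with three 1-dimensional irreducibles and fusion rules $V_i \otimes V_j \cong V_{(i+j) \bmod 3}$ (all transition matrices are $1 \times 1$, so Data 4 is trivial). The quadratic conditions $\eta_i \eta_j = \eta_{(i+j) \bmod 3}$ reduce to $\eta_0 = 1$, $\eta_2 = \eta_1^2$, $\eta_1^3 = 1$, and solving them recovers $\mathbb{Z}/3$. A sketch (the setup is my own illustration, not a general solver):

```python
import numpy as np

# Fusion rules V_i (x) V_j = V_{(i+j) % 3} force eta_0 = 1, eta_2 = eta_1**2,
# and eta_1**3 = 1, so the solutions are indexed by the cube roots of unity.
cube_roots = np.roots([1, 0, 0, -1])          # roots of z^3 - 1 = 0
solutions = [np.array([1.0, z, z**2]) for z in cube_roots]

def is_solution(eta, tol=1e-9):
    """Check the quadratic conditions eta_i * eta_j = eta_{(i+j) % 3}."""
    return all(abs(eta[i] * eta[j] - eta[(i + j) % 3]) < tol
               for i in range(3) for j in range(3))

# The solution set is closed under componentwise multiplication,
# so Aut^(x)(F) is a group of order 3: we have recovered Z/3.
print(len(solutions), all(is_solution(a * b) for a in solutions for b in solutions))
# -> 3 True
```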

Thank you very much! I'll need some time to go over it.
– Alexander Chervov, Jan 27 '14 at 4:41

It's still a lot to write down, alas. I believe that any finite simple group is generated by two elements, so giving the group requires two $d_2 \times d_2$ matrices. Actually, let's count: in data 4 the number of matrix entries is $\sum_{i,j} d_i^2 d_j^2 = (\sum_i d_i^2)^2 = |G|^2$. Gulp.
– Allen Knutson, Jan 27 '14 at 9:57

@Allen: well, it's the same as the number of entries in the multiplication table of $G$...
– Qiaochu Yuan, Feb 18 at 20:15