Note the coefficient $M_{\lambda\mu}$ is symmetric, $M_{\lambda\mu} = M_{\mu\lambda}$, by the interpretation in terms of 0/1 matrices: transposing a matrix swaps the row sums and the column sums. As a recap of the last part of last lecture, we have the following two generating function identities:

$$\sum_{k \ge 0} e_k(x)\, t^k \;=\; \prod_{i \ge 1} (1 + x_i t),$$

$$\prod_{i,j \ge 1} (1 + x_i y_j) \;=\; \sum_{\lambda} e_\lambda(x)\, m_\lambda(y) \;=\; \sum_{\lambda,\mu} M_{\lambda\mu}\, m_\lambda(x)\, m_\mu(y),$$

where $M_{\lambda\mu}$ is the number of 0/1 matrices with row sums $\lambda$ and column sums $\mu$. The first one can be seen by expansion of the right hand side. The second one uses the interpretation of the coefficient as the number of 0/1 matrices with prescribed row sum and column sum (check it on one monomial term from the $x$'s and one from the $y$'s, and then symmetrize).
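One can check the symmetry claim by brute force. A quick pure-Python sketch (the function name is mine, not from the lecture):

```python
from itertools import product

def zero_one_matrices(rows, cols):
    """Count 0/1 matrices with the prescribed row and column sums."""
    r, c = len(rows), len(cols)
    count = 0
    for flat in product((0, 1), repeat=r * c):
        A = [flat[i * c:(i + 1) * c] for i in range(r)]
        if [sum(row) for row in A] == list(rows) and \
           [sum(col) for col in zip(*A)] == list(cols):
            count += 1
    return count

# Transposing a 0/1 matrix swaps row and column sums, so the two counts agree:
print(zero_one_matrices((3, 1), (2, 1, 1)),
      zero_one_matrices((2, 1, 1), (3, 1)))  # both equal 1
```

The same exhaustive count also lets one verify the second generating function identity coefficient by coefficient in small degrees.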

Theorem 1. The $e_\lambda$'s form a basis for $\Lambda$ and hence $e_1, e_2, \ldots$ are algebraically independent over $\mathbb{Q}$.

Proof: The second statement follows because every algebraic relation among the $e_k$'s can be written as a multivariate polynomial, with each term a multiple of a basis vector $e_\lambda$. The proof of the first statement starts with the axiom that the $m_\lambda$'s form a linear basis (by construction of $\Lambda$). We need the following Fact due to Gale and Ryser: $M_{\lambda\mu} \neq 0$ iff $\mu \le \lambda'$, and $M_{\lambda\mu} = 1$ if $\mu = \lambda'$. Here the partial order is the dominance order. In fact one could also take the lexicographic order, since it refines dominance, and if the cardinalities of $\lambda$ and $\mu$ are different, $M_{\lambda\mu}$ has to be 0 (exercise). Persi only proved the "only if" direction of the first part and the $\mu = \lambda'$ case, using a push-to-the-left argument. Suppose $M_{\lambda\mu} \neq 0$; then let $A$ be a 0/1 matrix with the prescribed row and column sums. Push all the 1's in $A$ to the left. This operation increases, under dominance, the partition determined by the column sums of $A$, and in fact strictly increases it if the operation is nontrivial; pushing every 1 all the way to the left produces column sums $\lambda'$, so $\mu \le \lambda'$. This last part also gives the $\mu = \lambda'$ case. Thus the transition matrix in $e_\lambda = \sum_\mu M_{\lambda\mu} m_\mu$ is unipotent upper triangular under a total ordering of the partitions of $n$ that refines dominance and is compatible with transposition, i.e., $\lambda \preceq \mu$ iff $\mu' \preceq \lambda'$ (dominance itself has this property, since $\mu \le \lambda$ iff $\lambda' \le \mu'$). Since a unipotent upper triangular matrix transforms one basis to another, the $e_\lambda$'s also form a basis. $\Box$
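The Fact can be tested by computer for small $n$. A brute-force sketch (all function names are mine), checking both parts of Gale–Ryser for every pair of partitions of $n = 4$:

```python
from itertools import product

def partitions(n, cap=None):
    """All partitions of n as weakly decreasing tuples."""
    cap = n if cap is None else cap
    if n == 0:
        return [()]
    return [(k,) + rest for k in range(min(n, cap), 0, -1)
            for rest in partitions(n - k, k)]

def conjugate(lam):
    """The transposed partition lambda'."""
    return tuple(sum(1 for part in lam if part > j) for j in range(lam[0])) if lam else ()

def dominates(lam, mu):
    """lam >= mu in dominance order (for partitions of the same n)."""
    a = b = 0
    for i in range(max(len(lam), len(mu))):
        a += lam[i] if i < len(lam) else 0
        b += mu[i] if i < len(mu) else 0
        if a < b:
            return False
    return True

def M(lam, mu):
    """Brute-force count of 0/1 matrices with row sums lam, column sums mu."""
    r, c = len(lam), len(mu)
    return sum(1 for flat in product((0, 1), repeat=r * c)
               if [sum(flat[i * c:(i + 1) * c]) for i in range(r)] == list(lam)
               and [sum(flat[i * c + j] for i in range(r)) for j in range(c)] == list(mu))

n = 4
for lam in partitions(n):
    for mu in partitions(n):
        assert (M(lam, mu) != 0) == dominates(conjugate(lam), mu)  # Gale-Ryser
    assert M(lam, conjugate(lam)) == 1                             # diagonal entries
print("Gale-Ryser and unitriangularity verified for n =", n)
```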

Next we introduce the homogeneous symmetric functions $h_k = \sum_{i_1 \le i_2 \le \cdots \le i_k} x_{i_1} x_{i_2} \cdots x_{i_k}$, where $h_\lambda = h_{\lambda_1} h_{\lambda_2} \cdots$. Note the only difference with $e_k$ is the nonstrictness of the inequalities among the $i_j$'s. By geometric expansion, we get

$$\sum_{k \ge 0} h_k(x)\, t^k \;=\; \prod_{i \ge 1} \frac{1}{1 - x_i t}.$$

Define $N_{\lambda\mu}$ as the transition coefficients from the $m_\mu$'s to the $h_\lambda$'s, i.e., $h_\lambda = \sum_\mu N_{\lambda\mu} m_\mu$; then it has the interpretation as the number of matrices with entries in $\mathbb{N}$ (nonnegative integers) with row sums $\lambda$ and column sums $\mu$. This makes sense since the inequalities in the sum above are not strict (which gives the Bose–Einstein allocation rather than Fermi–Dirac). The matrix $(N_{\lambda\mu})$ cannot be put into upper triangular form. We have another handy generating function relation:

$$\prod_{i,j \ge 1} \frac{1}{1 - x_i y_j} \;=\; \sum_{\lambda} h_\lambda(x)\, m_\lambda(y) \;=\; \sum_{\lambda,\mu} N_{\lambda\mu}\, m_\lambda(x)\, m_\mu(y),$$
which makes sense since the monomials in $h_\lambda(x)$ have repeated factors, which are provided by the geometric series on the left hand side. The following algebra homomorphism is an involution $\omega : \Lambda \to \Lambda$, $\omega(e_k) = h_k$, hence in particular an isomorphism. I wonder if this has to do with the canonical isomorphism between the Fermion and Boson representations of the Heisenberg algebra (see Wang Dong's paper on the KP tau function of the partition function of GUE with external source). Proof: If we look at the generating functions of the $e_k$'s and $h_k$'s, given by $E(t) = \prod_i (1 + x_i t)$ and $H(t) = \prod_i (1 - x_i t)^{-1}$, we see their product $E(-t) H(t)$ is $1$, which means the coefficients of $t^n$, $n \ge 1$, are all $0$, i.e.,

$$\sum_{r=0}^{n} (-1)^r e_r h_{n-r} = 0, \qquad \text{for all } n \ge 1.$$

This system of equations defines the $h_k$'s in terms of the $e_k$'s, as seen by Gram–Schmidt style elimination. Applying $\omega$ to both sides of all these equations, and noting the system is symmetric under exchanging the $e$'s and $h$'s, we easily get $\omega(h_k) = e_k$, so $\omega^2 = \mathrm{id}$. I would like to know a more intrinsic definition of $\omega$.
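Two claims from this section are easy to test by brute force: that $N_{\lambda\mu} > 0$ for all $\lambda, \mu$ partitions of the same $n$ (northwest-corner filling always produces such a matrix, which is why no ordering makes $(N_{\lambda\mu})$ unitriangular), and that $\sum_{r=0}^n (-1)^r e_r h_{n-r} = 0$. A pure-Python sketch (all function names are mine, not from the lecture):

```python
from fractions import Fraction
from itertools import combinations, combinations_with_replacement
from math import prod

def compositions(total, caps):
    """Ordered sums of `total` with i-th part at most caps[i]."""
    if not caps:
        return [()] if total == 0 else []
    return [(v,) + rest
            for v in range(min(total, caps[0]) + 1)
            for rest in compositions(total - v, caps[1:])]

def N(lam, mu):
    """Number of nonnegative-integer matrices with row sums lam, column sums mu."""
    if not lam:
        return 1 if all(c == 0 for c in mu) else 0
    return sum(N(lam[1:], tuple(c - v for c, v in zip(mu, row)))
               for row in compositions(lam[0], mu))

parts3 = [(3,), (2, 1), (1, 1, 1)]
table = [[N(lam, mu) for mu in parts3] for lam in parts3]
print(table)  # every entry is positive, so (N) is never unitriangular

# Check sum_{r=0}^n (-1)^r e_r h_{n-r} = 0 on an exact rational specialization:
xs = [Fraction(1, 2), Fraction(-1, 3), Fraction(2), Fraction(3, 5)]

def e(r):  # elementary: strictly increasing index tuples
    return sum((prod(c) for c in combinations(xs, r)), Fraction(0))

def h(r):  # homogeneous: weakly increasing index tuples
    return sum((prod(c) for c in combinations_with_replacement(xs, r)), Fraction(0))

for n in range(1, 7):
    assert sum((-1) ** r * e(r) * h(n - r) for r in range(n + 1)) == 0
print("E(-t)H(t) = 1 checked through degree 6")
```

Since the identity holds in the ring $\Lambda$, it must hold under every specialization of the variables; the exact `Fraction` arithmetic avoids any floating-point doubt.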

The last basis we study is the power sum polynomials $p_k = \sum_i x_i^k$, $p_\lambda = p_{\lambda_1} p_{\lambda_2} \cdots$, defined in the first lecture. Their generating function is

$$\prod_{i \ge 1} \frac{1}{1 - x_i t} \;=\; \exp\Big( \sum_{k \ge 1} \frac{p_k}{k}\, t^k \Big) \;=\; \sum_{\lambda} \frac{p_\lambda}{z_\lambda}\, t^{|\lambda|},$$
where $n!/z_\lambda$ is the size of the conjugacy class of cycle type $\lambda$ in $S_n$, i.e., $z_\lambda = \prod_i i^{m_i} m_i!$ is the order of the centralizer. Note the left hand side is $H(t) = \sum_{n \ge 0} h_n t^n$, so matching coefficients of $t^n$ gives $h_n = \sum_{\lambda \vdash n} p_\lambda / z_\lambda$. Thus we get a better looking formula by taking the log of the first two expressions: $\log H(t) = \sum_{k \ge 1} \frac{p_k}{k} t^k$. A similar formula is

$$\prod_{i \ge 1} (1 + x_i t) \;=\; \exp\Big( \sum_{k \ge 1} (-1)^{k-1} \frac{p_k}{k}\, t^k \Big) \;=\; \sum_{\lambda} \varepsilon_\lambda\, \frac{p_\lambda}{z_\lambda}\, t^{|\lambda|},$$

where $\varepsilon_\lambda = (-1)^{|\lambda| - \ell(\lambda)}$ is the signature of a permutation of cycle type $\lambda$; here the left hand side is $E(t) = \sum_{n \ge 0} e_n t^n$, so $e_n = \sum_{\lambda \vdash n} \varepsilon_\lambda p_\lambda / z_\lambda$. Note one can get the first equality of the second formula from the first formula by substituting $t \mapsto -t$ and taking reciprocals, since $E(t) = 1/H(-t)$. We will only prove the first formula: Using Polya's cycle index formula
$$\sum_{n \ge 0} \frac{t^n}{n!} \sum_{\sigma \in S_n} \prod_{k \ge 1} a_k^{c_k(\sigma)} \;=\; \exp\Big( \sum_{k \ge 1} a_k\, \frac{t^k}{k} \Big)$$

with $a_k = p_k$, and grouping the permutations of $S_n$ by cycle type $\lambda$ (the class of type $\lambda$ has $n!/z_\lambda$ elements, each contributing $p_\lambda$), we immediately obtain the second equality. The first equality is another expansion and counting result. Notice the $x$ and $t$ powers must match for each monomial term, because the power sums in each term have the same partition index $\lambda$. It would be nice to get a direct proof of the equality skipping the expression $\exp\big( \sum_{k \ge 1} \frac{p_k}{k} t^k \big)$. One last observation:
$$\omega(p_k) = (-1)^{k-1}\, p_k, \qquad \text{equivalently} \qquad \omega(p_\lambda) = \varepsilon_\lambda\, p_\lambda.$$

This is obtained by applying $\omega$ coefficientwise to $H(t) = \sum_{n \ge 0} h_n t^n$, using $\omega(h_k) = e_k$ proved before, and then comparing the two formulae proved immediately above (exercise).
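All of this can be checked by computer. A pure-Python sketch (the dict-based polynomial representation and all names, `padd`, `pmul`, `omega`, are mine): working in the polynomial ring generated by the algebraically independent $e_k$'s, it solves for the $h_n$'s and $p_n$'s in terms of the $e$'s, implements $\omega$ by substituting $h_k$ for $e_k$, and verifies the two power-sum expansions above, $\omega(h_n) = e_n$, and $\omega(p_n) = (-1)^{n-1} p_n$ up to degree 6:

```python
from fractions import Fraction
from math import factorial, prod

NMAX = 6  # check identities for degrees 1..NMAX

# Polynomials in e_1, ..., e_NMAX, as dicts {exponent tuple: coefficient}.
ZERO = (0,) * NMAX

def padd(f, g, c=1):
    """f + c*g."""
    out = dict(f)
    for mono, v in g.items():
        out[mono] = out.get(mono, Fraction(0)) + c * v
        if out[mono] == 0:
            del out[mono]
    return out

def pmul(f, g):
    """f * g."""
    out = {}
    for m1, v1 in f.items():
        for m2, v2 in g.items():
            m = tuple(a + b for a, b in zip(m1, m2))
            out[m] = out.get(m, Fraction(0)) + v1 * v2
    return {m: v for m, v in out.items() if v != 0}

ONE = {ZERO: Fraction(1)}
E = [ONE] + [{tuple(int(i == k) for i in range(NMAX)): Fraction(1)}
             for k in range(NMAX)]          # E[k] is the polynomial e_k

H = [ONE]                                   # h_n via sum (-1)^r e_r h_{n-r} = 0
for n in range(1, NMAX + 1):
    H.append({})
    for r in range(1, n + 1):
        H[n] = padd(H[n], pmul(E[r], H[n - r]), c=(-1) ** (r - 1))

P = [ONE]                                   # p_n via Newton's identities
for n in range(1, NMAX + 1):
    P.append(padd({}, E[n], c=(-1) ** (n - 1) * n))
    for r in range(1, n):
        P[n] = padd(P[n], pmul(E[r], P[n - r]), c=(-1) ** (r - 1))

def omega(f):
    """The involution omega: substitute h_k for each e_k."""
    out = {}
    for mono, v in f.items():
        term = {ZERO: v}
        for k, a in enumerate(mono, start=1):
            for _ in range(a):
                term = pmul(term, H[k])
        out = padd(out, term)
    return out

def partitions(n, cap=None):
    cap = n if cap is None else cap
    if n == 0:
        return [()]
    return [(k,) + rest for k in range(min(n, cap), 0, -1)
            for rest in partitions(n - k, k)]

def z(lam):
    """Centralizer order of cycle type lam: prod_i i^{m_i} m_i!."""
    return prod(i ** lam.count(i) * factorial(lam.count(i)) for i in set(lam))

for n in range(1, NMAX + 1):
    h_rhs, e_rhs = {}, {}
    for lam in partitions(n):
        p_lam = ONE
        for k in lam:
            p_lam = pmul(p_lam, P[k])
        eps = (-1) ** (n - len(lam))        # signature of cycle type lam
        h_rhs = padd(h_rhs, p_lam, c=Fraction(1, z(lam)))
        e_rhs = padd(e_rhs, p_lam, c=Fraction(eps, z(lam)))
    assert h_rhs == H[n] and e_rhs == E[n]  # power-sum expansions
    assert omega(H[n]) == E[n]              # omega(h_n) = e_n
    assert omega(P[n]) == {m: (-1) ** (n - 1) * v for m, v in P[n].items()}
print("omega(p_n) = (-1)^(n-1) p_n and the power-sum expansions hold up to", NMAX)
```

Everything is exact rational arithmetic, so the assertions are identities of polynomials in the $e$'s, not numerical approximations.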