In this post we will discuss generating functions. Generating functions are a powerful tool which can be used to solve many problems in both combinatorics and analysis.

Let $\mathcal{S}$ denote the set of all sequences of real numbers. For all $i \ge 0$ we let $x^i$ denote the sequence whose $i$th term is $1$ and all of whose other terms are zero. The symbol $1$ stands for the sequence $x^0 = (1, 0, 0, \ldots)$. Also, for any real number $c$ we define the product of $c$ and a sequence $(a_0, a_1, a_2, \ldots)$ as $(ca_0, ca_1, ca_2, \ldots)$. We let two sequences $(a_i)$ and $(b_i)$ be equal if $a_i = b_i$ for all $i$. We define the sum of $(a_i)$ and $(b_i)$ to be the sequence $(a_i + b_i)$ and the product to be the sequence $(c_i)$ where $c_i = \sum_{j=0}^{i} a_j b_{i-j}$. Clearly the sequence $(a_0, a_1, a_2, \ldots)$ is equal to the sequence $\sum_{i \ge 0} a_i x^i$, which we will also denote simply by $\sum a_i x^i$. Note that here $a_0$ stands for the sequence obtained as a product of $a_0$ and $1$, i.e. $a_0 x^0$. Algebraically speaking, $\mathcal{S}$, equipped with these operations, is an $\mathbb{R}$-algebra.

More importantly, there is an analytic viewpoint of $\mathcal{S}$ also. Readers who are familiar with the theory of power series can consider the elements of $\mathcal{S}$ to be power series, i.e. each element $\sum a_i x^i$ is basically a function of $x$ with its domain an open interval around $0$ (or simply $\{0\}$). By standard theorems in analysis, if $\sum a_i x^i$ and $\sum b_i x^i$ both converge for $x \in (-r, r)$ then for all such $x$, $\sum a_i x^i = \sum b_i x^i$ if and only if $a_i = b_i$ for all $i$. Hence the approach of considering $\sum a_i x^i$ as a purely formal object may be considered equivalent to considering it as a power series.

However, we will soon see why convergence issues do not play any role in our context as long as the power series converges for at least one non-zero $x$, and why there is more value in interpreting $\sum a_i x^i$ as simply an element of $\mathcal{S}$.

Definition:
Let $(a_i)$ be a real sequence such that the power series $\sum a_i x^i$ converges for at least one non-zero $x$. Then the function $f$ which sends such an $x$ to the sum $\sum a_i x^i$ is called the generating function of the sequence. We frequently abuse notation and refer to the value $f(x)$ of the generating function at some non-zero point $x$ as the generating function (which is akin to referring to the function $f$ as $f(x)$).

Example:
Let $(1, 1, 1, \ldots)$ be the constant sequence of ones. It is well known that for any real number $x$ with $|x| < 1$, the series $\sum_{i \ge 0} x^i$ converges to $\frac{1}{1-x}$. So the generating function of $(1, 1, 1, \ldots)$ is the function $f$ where $f(x) = \frac{1}{1-x}$ for all $x$ at which the series converges.

The reason for requiring convergence at a non-zero point is as follows. As soon as we have convergence at a non-zero $x_0$, by a theorem of analysis it follows that there is convergence in an open interval around $0$. Now, $f$ has a unique power series expansion in that interval and so we are guaranteed that there is a one-one correspondence between the purely discrete object $\sum a_i x^i$, thought of as an element of $\mathcal{S}$, and the generating function $f$. This can be exploited in the reverse direction, for if we wish to recover our sequence from the function $f$, then since $f$ is defined by $f(x) = \sum a_i x^i$ there is absolutely no ambiguity, and we cannot get back any other sequence. In fact, we may say that our sequence has been encoded within the definition of $f$, as a closed form expression.

If convergence were only given at $x = 0$, then such a one-one correspondence would not be possible, since any closed form analytic function $g$ which equals $a_0$ at $0$ would become a generating function of the sequence $(a_0, a_1, a_2, \ldots)$. So for any sequence $(a_i)$ we will consider its generating function to be defined by $f(x) = \sum a_i x^i$ as long as there is a non-zero $x$ for which there is convergence, and once we have done that we will not bother about any convergence issues at all.

The reader may be wondering what was the point of giving an algebraic approach initially, for a generating function really seems to have more to do with a power series. Furthermore, in the notation of the first paragraph, when we were considering a sequence as an element of $\mathcal{S}$ we found that in our algebra $(a_0, a_1, a_2, \ldots)$ is nothing but $\sum a_i x^i$. This was not a power series but simply our notation for a sequence. It may appear confusing to have the same notation for two different objects, but it has been deliberately adopted for a very good reason. During our computations, we often manipulate our series so that we may no longer be sure whether convergence of a given series at a non-zero point is guaranteed. This poses a mathematical problem of the validity of our operations. However, our ambiguous notation comes to our rescue at that instant, for what we really are doing at that point, without explicitly saying so, is not dealing with the series $\sum a_i x^i$ as a function. Instead we are manipulating the sequence $(a_i)$, to which there are absolutely no concepts of convergence attached. Of course, if we need closed form expressions or some other analytic properties have to be established, we need convergence so that one can use the one-one correspondence between the sequence and the generating function and dive into the world of analysis. In this way, a constant interplay between the discrete world of sequences and the analytic world of power series brings out the real power of generating functions.
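The purely formal viewpoint can be sketched in a few lines of code; here is a minimal illustration (the names `cauchy_product` and `ones` are our own, not from the post) in which the product of sequences is exactly the rule $c_i = \sum_{j} a_j b_{i-j}$, with no convergence anywhere in sight:

```python
# Formal (algebraic) viewpoint: sequences with a Cauchy-product,
# represented by lists of their first few terms.

def cauchy_product(a, b):
    """c_i = sum_{j=0}^{i} a_j * b_{i-j}, computed for as many terms as both inputs allow."""
    n = min(len(a), len(b))
    return [sum(a[j] * b[i - j] for j in range(i + 1)) for i in range(n)]

ones = [1] * 8                       # the sequence (1, 1, 1, ...), i.e. 1/(1-x)
squared = cauchy_product(ones, ones)
print(squared)                       # [1, 2, 3, 4, 5, 6, 7, 8]
```

The output is the start of the sequence $(1, 2, 3, \ldots)$, agreeing with the analytic fact that $\frac{1}{(1-x)^2} = \sum_{i \ge 0} (i+1) x^i$ — the two viewpoints giving the same answer, as the paragraph above describes.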

In this post we discuss automorphic numbers. Note that all numbers involved in this post are non-negative integers.

Definition: Let $n$ have $d$ digits in its base $10$ representation. Then $n$ is said to be automorphic if $n^2 \equiv n \pmod{10^d}$.

In our base $10$ representation this means that the last $d$ digits of $n^2$ are precisely the digits of $n$. In other words, the digits of $n$ are appended to the remaining digits of $n^2$. Note that since the digits of $n$ are appended to $n^2$, they are also appended to $n^k$ for any $k \ge 2$ (multiply the congruence $n^2 \equiv n \pmod{10^d}$ by $n$ repeatedly).

A list of some automorphic numbers can be found here. Our aim here is to characterize all such numbers in base $10$.

Theorem: The automorphic numbers in base $10$ are precisely $0$, $1$, and the numbers $5^{2^d} \bmod 10^d$ and $6^{5^d} \bmod 10^d$ for $d \ge 1$.

Proof: Suppose $n$ is an automorphic number of $d$ digits. Then,

$$n^2 \equiv n \pmod{10^d}$$

which is equivalent to

$$2^d \mid n(n-1) \quad \text{and} \quad 5^d \mid n(n-1)$$

which, since $n$ and $n - 1$ are coprime, is equivalent to the occurrence of exactly one of the following:

Case 1: $n \equiv 0 \pmod{2^d}$ and $n \equiv 0 \pmod{5^d}$

Case 2: $n \equiv 1 \pmod{2^d}$ and $n \equiv 1 \pmod{5^d}$

Case 3: $n \equiv 1 \pmod{2^d}$ and $n \equiv 0 \pmod{5^d}$

Case 4: $n \equiv 0 \pmod{2^d}$ and $n \equiv 1 \pmod{5^d}$

Now we consider all these cases one by one. Case 1 is equivalent to $n \equiv 0 \pmod{10^d}$. Since $n$ had $d$ digits, so $n < 10^d$, and this means $n = 0$. Similarly in case 2 we get $n = 1$.

Now suppose case 3 holds. By the Chinese Remainder theorem there exists a unique solution to these two congruences modulo $10^d$. We show that $5^{2^d}$ is a solution to them. By uniqueness it will then follow that case 3 is equivalent to saying that $n \equiv 5^{2^d} \pmod{10^d}$. Since $n$ has $d$ digits and $5^{2^d} \bmod 10^d$ has at most $d$ digits, we will therefore have $n$ equal to $5^{2^d} \bmod 10^d$. Hence $n = 5^{2^d} \bmod 10^d$.

We first show that $5^{2^d} \equiv 1 \pmod{2^d}$. Proceed by induction on $d$. For $d = 1$ note that $5^2 = 25 \equiv 1 \pmod{2}$.

Now assume the result for $d$, so that $5^{2^d} = 1 + 2^d m$ for some integer $m$. Squaring yields $5^{2^{d+1}} = 1 + 2^{d+1} m + 2^{2d} m^2$ and so $5^{2^{d+1}} \equiv 1 \pmod{2^{d+1}}$, thereby completing the induction step. Next observe that as $2^d \ge d$, so $5^{2^d} \equiv 0 \pmod{5^d}$ follows immediately, thereby completing case 3.

Note that our argument also implies that any number of the form $5^{2^d} \bmod 10^d$ is automorphic. Indeed, if $5^{2^d} \bmod 10^d$ has $d'$ digits (so that $d' \le d$) then we have shown that its square is congruent to it modulo $10^d$. Now as $10^{d'}$ divides $10^d$, the square is congruent to it modulo $10^{d'}$ as well, i.e. $5^{2^d} \bmod 10^d$ is automorphic.

To finish the proof we now consider case 4. We show that $6^{5^d}$ is a solution to the congruences of case 4. Clearly $6^{5^d} \equiv 0 \pmod{2^d}$, as $2^{5^d} \mid 6^{5^d}$ and $5^d \ge d$. To show $6^{5^d} \equiv 1 \pmod{5^d}$ we proceed by induction as before. The base case $6^5 \equiv 1 \pmod{5}$ may be explicitly worked out or be shown to hold by invoking Fermat's theorem: $a^p \equiv a \pmod{p}$ for any prime $p$ and integer $a$. Next assume the result for $d$, so that $6^{5^d} = 1 + 5^d m$ for some integer $m$. Raising both sides to the fifth power yields the result for $d + 1$. The rest of the argument is similar to case 3.
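The characterization can be checked mechanically for small digit counts; here is a sketch (the function name `automorphic_bruteforce` is our own) comparing a brute-force search against the residues $5^{2^d} \bmod 10^d$ and $6^{5^d} \bmod 10^d$ from the theorem:

```python
# Check: the d-digit solutions of n^2 = n (mod 10^d) found by brute force
# are exactly 0, 1 and the residues 5^(2^d) mod 10^d and 6^(5^d) mod 10^d
# that genuinely have d digits.

def automorphic_bruteforce(d):
    lo = 0 if d == 1 else 10 ** (d - 1)
    return sorted(n for n in range(lo, 10 ** d) if n * n % 10 ** d == n)

for d in range(1, 6):
    m = 10 ** d
    candidates = {pow(5, 2 ** d, m), pow(6, 5 ** d, m)}
    if d == 1:
        candidates |= {0, 1}
    # discard residues with fewer than d digits (already counted for a smaller d)
    candidates = {c for c in candidates if d == 1 or c >= m // 10}
    assert sorted(candidates) == automorphic_bruteforce(d)
    print(d, automorphic_bruteforce(d))
```

For $d = 1, \ldots, 5$ this prints $[0, 1, 5, 6]$, $[25, 76]$, $[376, 625]$, $[9376]$, $[90625]$, in agreement with the theorem.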

One of the important results in linear algebra is the rank-nullity theorem. Here I am going to present a proof of it which is slightly less well known. The reason I like this proof is that it ties together many concepts and results quite nicely, and also that I independently thought of it.

The theorem (as is well known) says that if $V, W$ are vector spaces with $\dim V = n$ finite and $T : V \to W$ a linear map, then $\operatorname{rank}(T) + \operatorname{nullity}(T) = n$.

In this proof I will further assume that $W$ is finite dimensional with dimension $m$. A more general proof can be found on Wikipedia.

We start by fixing two bases of $V$ and $W$ and obtain an $m \times n$ matrix $A$ of $T$ relative to these bases, with rows $A_1, \ldots, A_m$. (Each $A_i$ is a row matrix.) Then our theorem basically translates to $\operatorname{rank}(A) + \dim N(A) = n$, where $N(A) = \{x \in \mathbb{R}^n : Ax = 0\}$ is the null space of $A$. We let $U$ be the row space of $A$, i.e. the span of $A_1, \ldots, A_m$, and claim that $N(A) = U^\perp$.

Clearly if $x \in N(A)$ then $Ax = 0$ and so $A_i x = 0$ for each $i$, so that each $A_i$ is orthogonal to $x$. Hence $x \in U^\perp$. Conversely, if $x \in U^\perp$ then $A_i x = 0$ for each $i$, so that $Ax = 0$, i.e. $x \in N(A)$, following which $N(A) = U^\perp$.

Now it only remains to invoke the result $\dim U + \dim U^\perp = n$ for any subspace $U$ of an $n$-dimensional inner product space, together with the fact that row rank equals column rank, to conclude that $\operatorname{rank}(A) + \dim N(A) = \dim U + \dim U^\perp = n$. In other words, $\operatorname{rank}(T) + \operatorname{nullity}(T) = n$.
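The orthogonality picture behind this proof can be illustrated numerically; this is a sketch using NumPy's SVD (the variable names and the rank threshold `1e-10` are our choices), where the trailing right singular vectors span the null space, i.e. the orthogonal complement of the row space:

```python
# Numerical sketch: the null space of A is the orthogonal complement of
# the row space, and their dimensions add up to n (the number of columns).
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 5, 7, 3
# build a random m x n matrix of rank r as a product of thin factors
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]                   # rows spanning the null space of A

assert rank == r
assert np.allclose(A @ null_basis.T, 0)  # each basis vector is killed by A
print(rank + null_basis.shape[0])        # rank + nullity = n = 7
```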

The Inclusion Exclusion Principle is one of the most fundamental results in combinatorics. It deals with counting through a sieve technique: i.e. we first overcount the quantity to be enumerated, then we try to balance out the overcounting by subtracting the extra. We may or may not subtract more than what is needed and so we count the extra bits again. Continuing in this way we “converge” at the correct count.

An easy example is $|A \cup B \cup C| = |A| + |B| + |C| - |A \cap B| - |A \cap C| - |B \cap C| + |A \cap B \cap C|$, where $A, B, C$ are three finite sets.

Here we start off by summing the number of elements in $A$, $B$ and $C$ separately. The overcount is compensated by subtracting the number of elements in $A \cap B$, $A \cap C$, $B \cap C$. We actually compensate for a bit more than needed and so to arrive at the correct count we must add $|A \cap B \cap C|$ back.

Our goal here is to prove the inclusion-exclusion principle and then to look at a few corollaries. We first establish two lemmas.

Lemma 1: If $S$ is any set containing $n$ elements and $F$ is any field, then the set of all functions $f : S \to F$ is an $n$-dimensional vector space over $F$ with the naturally induced operations.

Proof: Let $V = \{f : S \to F\}$. It is easy to see that $V$, together with the operations $f + g$ and $cf$ defined by $(f + g)(s) = f(s) + g(s)$ and $(cf)(s) = c\,f(s)$ for all $s \in S$, is a vector space.

We now exhibit a basis for $V$ consisting of $n$ elements. For all $s \in S$ let $f_s$ be the function defined as $f_s(t) = 1$ if $t = s$ and $f_s(t) = 0$ otherwise. Now, we claim that $\{f_s : s \in S\}$ is a basis. Clearly it is spanning, as for any $f \in V$ we have $f = \sum_{s \in S} f(s) f_s$. It is linearly independent, as $\sum_{s \in S} c_s f_s = 0$ means that for any $t \in S$ we have $0 = \sum_{s \in S} c_s f_s(t) = c_t$, i.e. every coefficient vanishes. This completes the proof.

Lemma 2: If $n \ge 1$ then $\sum_{k=0}^{n} (-1)^k \binom{n}{k} = 0$, while for $n = 0$ the sum equals $1$.

Proof: If $n \ge 1$, put $x = -1$ and $y = 1$ in the binomial theorem $(x + y)^n = \sum_{k=0}^{n} \binom{n}{k} x^k y^{n-k}$. If $n = 0$, then the sum on the left reduces to only one term: $(-1)^0 \binom{0}{0}$. This is clearly $1$.
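Lemma 2 is easy to sanity-check numerically; here is a one-liner sketch (the function name `alternating_sum` is ours):

```python
# Lemma 2: the alternating sum of binomial coefficients vanishes for
# n >= 1 and equals 1 for n = 0.
from math import comb

def alternating_sum(n):
    return sum((-1) ** k * comb(n, k) for k in range(n + 1))

print([alternating_sum(n) for n in range(6)])   # [1, 0, 0, 0, 0, 0]
```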

We now prove the Inclusion Exclusion principle. This theorem, in its purest form, is simply a formula for the inverse of a linear operator. The theorem is as follows:

Theorem 3: Let $S$ be a set with $n$ elements. Let $V$ be the $2^n$-dimensional vector space of all functions $f : \mathcal{P}(S) \to F$, where $F$ is some field and $\mathcal{P}(S)$ is the power set of $S$. Let $T : V \to V$ be the linear operator defined by

$$(Tf)(A) = \sum_{B \supseteq A} f(B).$$

Then $T^{-1}$ exists and is given by:

$$(T^{-1}f)(A) = \sum_{B \supseteq A} (-1)^{|B \setminus A|} f(B).$$

Proof: Denote by $T'$ the operator given by the claimed formula. To show $T'$ is indeed the inverse it suffices to show that $(T'Tf)(A) = f(A)$ for all $f \in V$ and all $A \subseteq S$.

Let $f \in V$ and $A \subseteq S$. Then,

$$(T'Tf)(A) = \sum_{B \supseteq A} (-1)^{|B \setminus A|} (Tf)(B) = \sum_{B \supseteq A} (-1)^{|B \setminus A|} \sum_{C \supseteq B} f(C) = \sum_{C \supseteq A} \Big( \sum_{A \subseteq B \subseteq C} (-1)^{|B \setminus A|} \Big) f(C).$$

Now fix $A$ and let $|A| = a$. Consider any $C \supseteq A$ with $|C| = c$. Any $B$ with $A \subseteq B \subseteq C$ is obtained by choosing some elements out of $C \setminus A$, which has $c - a$ elements, and taking the union of such elements with $A$. So for every $k$ with $0 \le k \le c - a$, there are exactly $\binom{c-a}{k}$ ways of choosing a $B$ which has $a + k$ elements. Any such $B$ also has $k$ elements in $B \setminus A$. So $\sum_{A \subseteq B \subseteq C} (-1)^{|B \setminus A|} = \sum_{k=0}^{c-a} (-1)^k \binom{c-a}{k}$, which by Lemma 2 is $0$ unless $C = A$, in which case it is $1$. This when substituted in the expression for $(T'Tf)(A)$ shows that $(T'Tf)(A) = f(A)$, which proves the theorem.
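Since $T$ is a linear operator on a finite-dimensional space, Theorem 3 can be verified directly as a matrix identity; here is a sketch for $|S| = 3$ (the representation of subsets as bitmasks and all variable names are our choices):

```python
# Theorem 3 for |S| = 3: represent T and its claimed inverse as 8 x 8
# matrices indexed by subsets-as-bitmasks and check they compose to the
# identity.  Containment B >= A is the bitmask test A & B == A.
n = 3
subsets = range(1 << n)

def popcount(x):
    return bin(x).count("1")

# T sums f over supersets; Tinv carries the sign (-1)^{|B \ A|}
T    = [[1 if A & B == A else 0 for B in subsets] for A in subsets]
Tinv = [[(-1) ** popcount(B & ~A) if A & B == A else 0
         for B in subsets] for A in subsets]

prod = [[sum(Tinv[A][C] * T[C][B] for C in subsets)
         for B in subsets] for A in subsets]
assert all(prod[A][B] == (1 if A == B else 0)
           for A in subsets for B in subsets)
print("T' T = I on the 8-dimensional space")
```

The inner sum $\sum_{A \subseteq B \subseteq C} (-1)^{|B \setminus A|}$ collapsing via Lemma 2 is exactly what makes the off-diagonal entries of the product vanish.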

We now discuss some corollaries of the Inclusion Exclusion principle. Let $P = \{P_1, \ldots, P_n\}$ be a set of properties that elements of a given set $S$ may or may not have. For any $A \subseteq P$, let $S_{=A}$ be the set of those elements which have exactly the properties in $A$ and no others. We define a function $N_= : \mathcal{P}(P) \to \mathbb{Q}$ such that $N_=(A) = |S_{=A}|$. Similarly, for any $A \subseteq P$, let $S_{\ge A}$ be the set of those elements which have at least the properties in $A$. We define a function $N_\ge$ such that $N_\ge(A) = |S_{\ge A}|$. It is clear that $N_\ge(A) = \sum_{B \supseteq A} N_=(B)$ for any $A \subseteq P$, i.e. $N_\ge = T N_=$, and so by the Inclusion Exclusion principle we conclude that

Corollary 4: For any $A \subseteq P$, we have $N_=(A) = \sum_{B \supseteq A} (-1)^{|B \setminus A|} N_\ge(B)$.

In particular we have $N_=(\emptyset) = \sum_{B \subseteq P} (-1)^{|B|} N_\ge(B)$, which gives us a formula for the number of elements having none of the properties.

In the above corollary we think of $N_\ge(A)$ as the first approximation to $N_=(A)$. So we “include” that much “quantity” in our initial count. From this we subtract all terms of the type $N_\ge(B)$ where $B$ has just one element more than $A$. Thus we “exclude” that much “quantity” from our count. This gives a better approximation. Next we add all terms of the type $N_\ge(B)$ where $B$ has two elements more than $A$, and so on. This is the reason behind the terminology inclusion-exclusion.

We now discuss another corollary. Let $S$ be a finite set and let $A_1, \ldots, A_n$ be some of its subsets. We define a set of properties $\{P_1, \ldots, P_n\}$ which elements of $S$ may or may not enjoy as follows: for any $x \in S$ and any $i$ with $1 \le i \le n$, $x$ satisfies the property $P_i$ if and only if $x \in A_i$. Also for any $T \subseteq \{1, \ldots, n\}$, let $A_T = \bigcap_{i \in T} A_i$, the set of elements which have at least the properties $P_i$ for $i \in T$. Define $A_\emptyset$ to be $S$. By Corollary 4, $N_=(\emptyset) = \sum_{B \subseteq P} (-1)^{|B|} N_\ge(B) = \sum_{T \subseteq \{1, \ldots, n\}} (-1)^{|T|} |A_T|$, where in the second equality we correspond each subset of properties with a subset of $\{1, \ldots, n\}$. We summarize this as

Corollary 5: Let $S$ be a finite set and let $A_1, \ldots, A_n$ be some of its subsets. For any non-empty $T \subseteq \{1, \ldots, n\}$, let $A_T = \bigcap_{i \in T} A_i$, and let $A_\emptyset = S$. The number of elements in $S \setminus (A_1 \cup \cdots \cup A_n)$ is given by $\sum_{T \subseteq \{1, \ldots, n\}} (-1)^{|T|} |A_T|$.

A further special case is obtained by considering any finite sets $A_1, \ldots, A_n$ and letting $S = A_1 \cup \cdots \cup A_n$. Then the above corollary translates to $0 = \sum_{T} (-1)^{|T|} |A_T|$. Considering the case of $T = \emptyset$ separately, we see that $|A_1 \cup \cdots \cup A_n| = \sum_{\emptyset \ne T} (-1)^{|T|+1} |A_T|$. This easily yields the following corollary.

Corollary 6: Let $A_1, \ldots, A_n$ be any finite sets. For any non-empty $T \subseteq \{1, \ldots, n\}$, let $A_T = \bigcap_{i \in T} A_i$. Then $|A_1 \cup \cdots \cup A_n| = \sum_{\emptyset \ne T \subseteq \{1, \ldots, n\}} (-1)^{|T|+1} |A_T|$.

Now by grouping terms involving the same size of $T$ we can restate both Corollary 5 and 6 as follows.

Corollary 7: Let $S$ be a finite set and let $A_1, \ldots, A_n$ be some of its subsets. The number of elements in $S \setminus (A_1 \cup \cdots \cup A_n)$ is given by $\sum_{k=0}^{n} (-1)^k \sum_{|T| = k} |A_T|$.

Corollary 8: If $A_1, \ldots, A_n$ are any finite sets then

$$|A_1 \cup \cdots \cup A_n| = \sum_{k=1}^{n} (-1)^{k+1} \sum_{|T| = k} |A_T|.$$

Corollaries 5 to 8 are often referred to as the principle of inclusion-exclusion themselves, as in combinatorial settings they are the ones most often used. A further simplified version can also be derived from them when the intersection of any $k$ distinct sets always has the same cardinality $N_k$. In that case we only need to multiply $N_k$ with the number $\binom{n}{k}$ of ways to select the $k$ sets to get the value of the inner sums in Corollaries 7 and 8.

Corollary 9: Let $S$ be a finite set and let $A_1, \ldots, A_n$ be some of its subsets. Suppose that for any $T \subseteq \{1, \ldots, n\}$ with $|T| = k \ge 1$ there exists a natural number $N_k$ so that for any such $T$ we have $|A_T| = N_k$. Then, with $N_0 = |S|$, the number of elements in $S \setminus (A_1 \cup \cdots \cup A_n)$ is given by $\sum_{k=0}^{n} (-1)^k \binom{n}{k} N_k$, and $|A_1 \cup \cdots \cup A_n| = \sum_{k=1}^{n} (-1)^{k+1} \binom{n}{k} N_k$.
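A classical application of Corollary 9 is counting derangements: take $S$ to be all permutations of $\{0, \ldots, n-1\}$ and $A_i$ the permutations fixing $i$; any $k$ of the $A_i$ intersect in exactly $(n-k)!$ permutations, so the fixed-point-free count is $\sum_k (-1)^k \binom{n}{k} (n-k)!$. A sketch checking this against brute force (function names are ours):

```python
# Corollary 9 applied to derangements: permutations with no fixed point.
from math import comb, factorial
from itertools import permutations

def derangements_formula(n):
    return sum((-1) ** k * comb(n, k) * factorial(n - k) for k in range(n + 1))

def derangements_bruteforce(n):
    return sum(1 for p in permutations(range(n))
               if all(p[i] != i for i in range(n)))

for n in range(1, 8):
    assert derangements_formula(n) == derangements_bruteforce(n)
print([derangements_formula(n) for n in range(1, 8)])  # [0, 1, 2, 9, 44, 265, 1854]
```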

This post concerns automorphisms of graphs, which quantify the symmetry existing within the graph structure. Given two graphs $G$ and $H$, a bijection $f : V(G) \to V(H)$ which maintains adjacency, i.e. $uv \in E(G)$ if and only if $f(u)f(v) \in E(H)$, is called an isomorphism, and the graphs $G$ and $H$ are called isomorphic. Clearly isomorphic graphs are essentially the same, with the superficial difference between them on account of different notation used in defining the vertex set. An isomorphism from the graph $G$ to itself is called an automorphism. It is easy to see that the set of all automorphisms of a graph $G$ together with the operation of composition of functions forms a group. This group is called the automorphism group of the graph, and is denoted by $\operatorname{Aut}(G)$.

In the remainder of this post we investigate some well known graphs and find out their automorphism groups.

The first graph we take up is the complete graph $K_n$. Any permutation of its vertices is in fact an automorphism, for adjacency is never lost. Its automorphism group is therefore the symmetric group $S_n$.

The next graph is the complete bipartite graph $K_{m,n}$. First consider the case $m \ne n$. The vertices in the first partite set can be permuted in $m!$ ways, and similarly there are $n!$ ways for the second partite set. Corresponding to each of these limited permutations we get automorphisms because adjacency is never disturbed. On the other hand, no automorphism can result from swapping a vertex from the first partite set with one from the second, because unless such a swap is done in its entirety (i.e. all the vertices from the first partite set swap places with the vertices in the second partite set), adjacency will be lost. A swap can be done in entirety only if $m = n$, which is not the case we are considering. Hence no further automorphisms can result. Moreover, by the multiplication rule it is simple to observe that the automorphism group is isomorphic to $S_m \times S_n$.

In the case of $K_{n,n}$, we first pair off the vertices in the two partite sets against each other; swapping every vertex with its partner is also an automorphism, say $\sigma$. Now for each of the $(n!)^2$ ways of permuting vertices within the partite sets, an additional automorphism arises. It is obtained in this fashion: after permuting the vertices within the partite sets in the particular way, we swap each vertex with its pair in the other partite set. Clearly this yields $2(n!)^2$ automorphisms in all, and furthermore no more are possible. Since every element of $\operatorname{Aut}(K_{n,n})$ can be written as a unique product of an automorphism of the type covered in counting the first $(n!)^2$ ways (which, it is not hard to see, form a normal subgroup, being of index 2) and an element of the subgroup $\{1, \sigma\}$, we see that the automorphism group is $(S_n \times S_n) \rtimes \mathbb{Z}_2$.

The next graph we take up is the cycle graph $C_n$. Firstly note that any automorphism can be obtained in this way: a given vertex $v$ may be mapped to any of the $n$ vertices available (including itself). As soon as that is done, a vertex adjacent to $v$ has only two choices left: it can either be in the counter-clockwise direction to $v$ or in the clockwise direction to $v$. Once that choice is also made, no other choices are required. Hence we get $2n$ automorphisms this way and there can be no others. Also, it is clear that two kinds of automorphisms suffice to generate this group: rotation, and swapping the notion of clockwise and counter-clockwise (assuming we draw the cycle graph as equally spaced points on the unit circle; there is no loss of generality in doing that). But these two automorphisms also generate the dihedral group $D_n$, which also has $2n$ elements. It follows that $\operatorname{Aut}(C_n) \cong D_n$.
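The count $2n$ can be confirmed by brute force for small $n$; here is a sketch (the function name `count_automorphisms` is ours) that counts the vertex permutations of $C_n$ preserving adjacency:

```python
# Brute-force check that C_n has exactly 2n automorphisms (the dihedral group).
from itertools import permutations

def count_automorphisms(n):
    edges = {frozenset((i, (i + 1) % n)) for i in range(n)}
    count = 0
    for p in permutations(range(n)):
        # p is an automorphism iff it maps every edge of the cycle to an edge
        if all(frozenset((p[i], p[(i + 1) % n])) in edges for i in range(n)):
            count += 1
    return count

print([count_automorphisms(n) for n in (3, 4, 5, 6)])   # [6, 8, 10, 12]
```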

The final graph we take up is the well known Petersen graph. Instead of directly considering which functions could lie in its automorphism group (although such an approach is possible) we approach the problem through the concept of line graphs.

Definition: The line graph $L(G)$ of a graph $G$ is the graph whose vertices are in one-to-one correspondence with the edges of $G$, two vertices of $L(G)$ being adjacent if and only if the corresponding edges of $G$ are adjacent.

Lemma 1: $L(K_5)$ is the complement of the Petersen graph.

Proof: It is clear that if the vertices of $K_5$ are labelled $1, \ldots, 5$ then its 10 edges are the 2-subsets of $\{1, \ldots, 5\}$. The line graph $L(K_5)$ thus has 10 vertices, labelled by these 10 2-subsets. Two vertices are adjacent in $L(K_5)$ iff the corresponding two 2-subsets have a nontrivial overlap. The complement of $L(K_5)$ is the graph with the same 10 vertices, and with two vertices being adjacent iff the corresponding two 2-subsets are disjoint. But this is the very definition of the Petersen graph.

Lemma 2: $\operatorname{Aut}(G)$ is equal to $\operatorname{Aut}(\overline{G})$.

Proof: If $f \in \operatorname{Aut}(G)$ then for any two vertices $u, v$ we have $uv \in E(G)$ iff $f(u)f(v) \in E(G)$, i.e. $uv \notin E(G)$ iff $f(u)f(v) \notin E(G)$, i.e. $uv \in E(\overline{G})$ iff $f(u)f(v) \in E(\overline{G})$, so that $f \in \operatorname{Aut}(\overline{G})$. The reverse implication follows by replacing $G$ by $\overline{G}$.

Theorem 3: The automorphism group of the Petersen graph is $S_5$.

Proof: In view of Lemmas 1 and 2 it suffices to find out $\operatorname{Aut}(L(K_5))$, for the automorphism group of the Petersen graph is going to be the same. We let $K_5$ have the vertex set $\{1, \ldots, 5\}$ in the sequel.

Take any automorphism $f$ of $K_5$ (i.e. any permutation of its vertices). If we have two edges $e_1, e_2$ with $e_1 \ne e_2$, then one of two cases arises: either $e_1 \cap e_2 = \emptyset$ or not. If $e_1 \cap e_2 = \emptyset$ then obviously $f(e_1) \cap f(e_2) = \emptyset$ and so by injectivity of $f$ we have $f(e_1) \ne f(e_2)$. If $e_1 \cap e_2 \ne \emptyset$ then it must be that $|e_1 \cap e_2| = 1$. This means that $|f(e_1) \cap f(e_2)| = 1$ and again by injectivity we have $f(e_1) \ne f(e_2)$. What this means is that the function induced by $f$ on the vertices of $L(K_5)$ in the natural way is injective. It is also surjective, as any edge $\{u, v\}$ is the image of $\{f^{-1}(u), f^{-1}(v)\}$. Finally, this function is an automorphism of $L(K_5)$, since clearly $e_1 \cap e_2 \ne \emptyset$ implies and is implied by $f(e_1) \cap f(e_2) \ne \emptyset$, as having a common vertex is preserved both ways. As our definition of the induced function is obtained in a definite way, we have shown that every automorphism of $K_5$ induces a unique automorphism of $L(K_5)$. Moreover, it is easy to see that if $f, g$ are two automorphisms then the automorphism induced by $f \circ g$ is the same as the automorphism induced by $f$ composed with that induced by $g$.

We now show that given an automorphism of $L(K_5)$ we can obtain an automorphism of $K_5$ which induces it in the natural way. Let $\varphi \in \operatorname{Aut}(L(K_5))$. It is easy to see that the 4-cliques of $L(K_5)$ originate from the stars of $K_5$. So $L(K_5)$ has exactly five 4-cliques, say $S_1, \ldots, S_5$, where $S_i$ contains the 4 vertices corresponding to the 4 edges in $K_5$ that are incident to the vertex $i$. Since $\varphi$ is an automorphism it sends 4-cliques to 4-cliques. Also, $\varphi$ must send two different 4-cliques $S_i, S_j$ with $i \ne j$ to different 4-cliques, because if it sends them to the same 4-clique then a collection of at least 5 vertices is mapped to a collection of 4 vertices, a contradiction to the injectivity of $\varphi$. So $\varphi$ induces a permutation of the $S_i$'s.

Now suppose $\varphi$ and $\psi$ are two different automorphisms in $\operatorname{Aut}(L(K_5))$. Then they differ on at least one vertex of $L(K_5)$, say on the vertex $\{i, j\}$. Now given any vertex $\{i, j\}$ of $L(K_5)$, consider the intersection of the 4-cliques $S_i$ and $S_j$. If $e$ is some vertex in $S_i \cap S_j$ then, as an edge in $K_5$, $e$ is part of the stars with centers $i$ and $j$, i.e. $e = \{i, j\}$. Hence the intersection contains only the vertex $\{i, j\}$. Every vertex of $L(K_5)$ arises in this way. So if $\varphi(\{i, j\}) \ne \psi(\{i, j\})$, then either $\varphi(S_i) \ne \psi(S_i)$ or $\varphi(S_j) \ne \psi(S_j)$, for otherwise $\varphi(\{i, j\}) = \varphi(S_i) \cap \varphi(S_j) = \psi(S_i) \cap \psi(S_j) = \psi(\{i, j\})$.

Hence every automorphism of $L(K_5)$ induces a unique permutation of the $S_i$'s. Moreover, distinct automorphisms induce distinct permutations, so the automorphisms and the permutations can be put in one-one correspondence. Given $\varphi$, consider the permutation $f$ of the vertices of $K_5$ where $f(i) = j$ if $\varphi(S_i) = S_j$ in the permutation corresponding to $\varphi$. Now a vertex $\{i, j\}$ of $L(K_5)$ is the intersection of the 4-cliques $S_i$ and $S_j$, and so $\varphi(\{i, j\}) = \varphi(S_i) \cap \varphi(S_j) = S_{f(i)} \cap S_{f(j)} = \{f(i), f(j)\}$. This shows that $f$ induces $\varphi$.

Hence we have shown that $\operatorname{Aut}(L(K_5)) \cong S_5$. So the Petersen graph has the automorphism group $S_5$.
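The first half of the argument (every permutation of $\{1, \ldots, 5\}$ induces a distinct automorphism) can be checked mechanically; this sketch builds the Petersen graph as the disjointness graph on 2-subsets of a 5-set, as in Lemma 1, and verifies that $S_5$ contributes 120 distinct automorphisms (it does not by itself rule out further automorphisms — that is what the clique argument above does):

```python
# Petersen graph as the "disjointness" graph on 2-subsets of {0,...,4}
# (the complement of L(K_5)); every permutation of {0,...,4} induces a
# distinct automorphism, giving the 120 elements of S_5.
from itertools import combinations, permutations

verts = [frozenset(e) for e in combinations(range(5), 2)]
edges = {frozenset((u, v)) for u in verts for v in verts
         if u != v and not (u & v)}
assert len(verts) == 10 and len(edges) == 15   # Petersen: 10 vertices, 15 edges

induced = set()
for p in permutations(range(5)):
    f = {v: frozenset(p[i] for i in v) for v in verts}
    for e in edges:                     # the induced map preserves adjacency
        u, v = tuple(e)
        assert frozenset((f[u], f[v])) in edges
    induced.add(frozenset((v, f[v]) for v in verts))

assert len(induced) == 120
print("120 distinct automorphisms induced by S_5")
```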

As we all know, a (natural) number $p > 1$ is called prime if its only positive divisors are $1$ and $p$. The fact that prime numbers are infinite in number was proved by Euclid (although it is not clear whether he discovered it) and the proof is a little gem of mathematical reasoning. We present his proof below:

Theorem: There are infinitely many prime numbers.

Proof: Consider any list of prime numbers, say $p_1, p_2, \ldots, p_n$. The number $N = p_1 p_2 \cdots p_n + 1$ is clearly not on the list. If $N$ is not prime, then by definition there is a prime number $p$ which divides $N$. Clearly $p \ne p_i$ for any $i$, as otherwise $p$ divides $N - p_1 p_2 \cdots p_n = 1$, a contradiction. So $p$ is a prime number not on the list. If, on the other hand, $N$ is prime, then it is anyway a prime number not on the list. Hence our list is incomplete.
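Euclid's argument is constructive, and the construction can be run as code; here is a sketch (function names are ours) that, given any finite list of primes, produces a prime not on the list by factoring the product plus one:

```python
# Euclid's construction: factor p_1*...*p_n + 1 to get a new prime.
def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n          # n itself is prime

def new_prime(primes):
    N = 1
    for p in primes:
        N *= p
    return smallest_prime_factor(N + 1)

print(new_prime([2, 3, 5, 7]))          # 2*3*5*7 + 1 = 211, itself prime
print(new_prime([2, 3, 5, 7, 11, 13]))  # 30031 = 59 * 509, so 59
```

Note the second call illustrates the case in the proof where $N$ is not itself prime but still yields a prime outside the list.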

We now give another beautiful proof of the same theorem, due to Euler. Although it does not strictly qualify as a valid mathematical proof by modern standards, and is really nothing more than heuristic reasoning, yet its “wow factor” is so high that as an exception it is still referred to as a proof. It can be recast into a more rigorous form, but we do not do so here so that its original taste is not disturbed. The essential argument here is that if the primes were finite then the product $\prod_p \left(1 - \frac{1}{p}\right)^{-1}$ would be finite, which is not the case.

Theorem: There are infinitely many prime numbers.

Proof: Let $P$ denote the set of all primes. Now consider the product $\prod_{p \in P} \left(1 - \frac{1}{p}\right)^{-1}$, i.e.

$$\prod_{p \in P} \left(1 + \frac{1}{p} + \frac{1}{p^2} + \frac{1}{p^3} + \cdots \right)$$

by the formula of the geometric series.

Next we observe that if we multiply out the terms (assuming such a multiplication is valid) then the reciprocal of every positive integer will occur exactly once, by the uniqueness part of the fundamental theorem of arithmetic. If $n = p_1^{a_1} \cdots p_k^{a_k}$ is the unique factorization of $n$, then $\frac{1}{n}$ would result in the multiplication only when $\frac{1}{p_1^{a_1}}, \ldots, \frac{1}{p_k^{a_k}}$ are multiplied together. Hence the above product equals $\sum_{n \ge 1} \frac{1}{n}$, the harmonic series, which diverges. But if $P$ were finite, the product would be a finite real number, a contradiction.
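A numerical illustration (a sketch, not the proof): the partial products of $\left(1 - \frac{1}{p}\right)^{-1}$ over primes $p \le x$ dominate the partial harmonic sums $\sum_{n \le x} \frac{1}{n}$, since expanding the product yields the reciprocal of every $n \le x$ and more, and both grow without bound:

```python
# Partial Euler products versus partial harmonic sums.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

for x in (10, 100, 1000, 10000):
    prod = 1.0
    for p in primes_up_to(x):
        prod *= 1 / (1 - 1 / p)
    harmonic = sum(1 / n for n in range(1, x + 1))
    print(x, round(prod, 3), round(harmonic, 3))   # product exceeds the harmonic sum
```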

By definition, an algebraic number is a complex number which satisfies some polynomial equation $a_n x^n + \cdots + a_1 x + a_0 = 0$ with integer coefficients, not all zero. Every rational number $p/q$ is algebraic, as it satisfies $qx - p = 0$. Further, irrational numbers can also be algebraic: clearly $\sqrt{2}$ satisfies $x^2 - 2 = 0$. Similarly, purely imaginary numbers can be algebraic: obviously $i$ satisfies $x^2 + 1 = 0$.

Just as algebraic numbers are called so because they lie within the range of the “classical algebra” (by which we understand manipulation of the integers) to be “described entirely”, a non-algebraic number is called transcendental because one needs to transcend this “algebra” to describe it. There are only a few mathematically significant numbers which are known to be transcendental. For 2500 years a debate raged as to whether $\pi$ was transcendental or not, before the question was settled in the affirmative by Lindemann in 1882. Similarly $e$ is transcendental. (For some simple combinations of these numbers, such as $\pi + e$, we do not know whether they are transcendental or not.)

Surprisingly, even though the transcendental numbers seem fewer than the algebraic numbers, they actually exist in greater abundance. In fact, almost every real number is transcendental. This is because the set of algebraic numbers is countable and hence has Lebesgue measure zero. Our aim in this post is to formally prove the countable nature of the algebraic numbers.

Theorem: Let $\mathcal{A}$ be the set of all algebraic numbers. Then $\mathcal{A}$ is countable.

Proof: We define the height of a polynomial $a_n x^n + \cdots + a_1 x + a_0$ as $h = n + |a_n| + \cdots + |a_1| + |a_0|$. Clearly for a fixed $h$ there are only finitely many choices for $n$ and the coefficients $a_i$, and so there are only finitely many polynomials of fixed height.

Now we make a list of all the algebraic numbers in the following way: consider any height $h$, and for all the finitely many polynomials of this height, write down all the finitely many roots of these polynomials in the list. Keep repeating for all possible heights $h = 1, 2, 3, \ldots$. It is clear that no algebraic number will be missed out in this list. This proves that $\mathcal{A}$ is countable.
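The finiteness step of the proof can be made concrete by actually enumerating the polynomials of a given height; this sketch (restricted, as a simplifying assumption, to degree $n \ge 1$ with leading coefficient $a_n \ne 0$, and with our own function name) lists the coefficient tuples $(a_n, \ldots, a_0)$ with $n + |a_n| + \cdots + |a_0| = h$:

```python
# Enumerate all integer polynomials of a given height
# h = n + |a_n| + ... + |a_0|, confirming there are finitely many.
from itertools import product

def polynomials_of_height(h):
    polys = []
    for n in range(1, h):                  # the degree n contributes n to h
        budget = h - n                     # remaining |a_n| + ... + |a_0|
        for coeffs in product(range(-budget, budget + 1), repeat=n + 1):
            if coeffs[0] != 0 and sum(abs(c) for c in coeffs) == budget:
                polys.append(coeffs)       # (a_n, ..., a_0)
    return polys

for h in (2, 3, 4):
    print(h, len(polynomials_of_height(h)))
```

Listing the roots of each polynomial in this enumeration, height by height, is exactly the listing of $\mathcal{A}$ described in the proof.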