"A good stock of examples, as large as possible, is indispensable for a thorough understanding of any concept, and when I want to learn something new, I make it my first job to build one." – Paul Halmos

Archive for June, 2009

When someone linked me to Ravi Vakil’s advice for potential graduate students, I was struck by the following passage:

…[M]athematics is so rich and infinite that it is impossible to learn it systematically, and if you wait to master one topic before moving on to the next, you’ll never get anywhere. Instead, you’ll have tendrils of knowledge extending far from your comfort zone [emphasis mine]. Then you can later backfill from these tendrils, and extend your comfort zone; this is much easier to do than learning “forwards”. (Caution: this backfilling is necessary. There can be a temptation to learn lots of fancy words and to use them in fancy sentences without being able to say precisely what you mean. You should feel free to do that, but you should always feel a pang of guilt when you do.)

It’s great to hear this coming from an expert because this is exactly what I’ve been doing for the past year without realizing it. Without formally learning anything, I’ve begun extending tendrils into algebraic topology, category theory, and all sorts of subjects about which I still can’t say anything particularly intelligent. However, from my experience so far I have a tentative list of the benefits of this strategy:

It becomes easier to recognize related concepts or constructions across different subjects, hence to tie them together.

If you have a concept you don’t fully understand sitting in the back of your head, it may come to pass that once you learn the necessary tools to understand it, you can partially re-derive the concept from memory. As Richard Feynman said, “what I cannot create, I do not understand.”

Certain things become better motivated if you can say to yourself something like, “oh, I know why we’re learning about Theorem X; it’s an instance of Phenomenon Y which has lots of other nontrivial instances.” Here I’ll give an example: Pontryagin duality.

You are naturally led to ask lots of questions, and questions are great. “This looks a lot like Theory Z,” you might say to your professor. “What’s the connection?”

The idea that constantly working outside your comfort zone is key to progress appears to me to be a general phenomenon; in two-player games and sports, for example, playing opponents who are better than you is a great way to improve.

What I’m curious about, though, is whether the undergraduate math curriculum explicitly encourages “tendril” behavior. Perhaps it’s just something every math major should be motivated to do independently, but I can’t help but think that Ravi’s advice, which I’ve never seen written down anywhere else, should be more widely acknowledged.

(A more appropriate title for this post would probably be “I hate Bourbaki,” but I like it as is.)

I spend a lot of my free time reading research papers, usually in combinatorics; those tend to require the least background. Today I decided to read everything I could find written by one of the great champions of combinatorics, Gian-Carlo Rota, and in his philosophical writings I found the explicit declaration of an opinion I’ve held for some time now.

The real line has been axiomatized in at least six different ways. Mathematicians are still looking for further axiomatizations of the real line, too many to support the justification of axiomatization by the claim that we axiomatize only in order to secure the validity of the theory.

Whatever the reasons, the variety of axiomatizations confirms beyond a doubt that the mathematician thinks of one real line, that is, the identity of the object is presupposed and in fact undoubted.

The mathematician’s search for further axiomatizations presupposes the certainty of the identity of the object, but recognizes that the properties of the object can never be completely revealed. The mathematician wants to find out what else the real line can be. He wants ever more perspectives on one and the same object, and the perspectives of mathematics are precisely the various axiomatizations, which lead to a variety of syntactic systems always interpreted as presenting the same object, that is, as having the same models.

In the previous post we used the Polya enumeration theorem to give a sneaky, underhanded proof that

$\displaystyle \frac{1}{1 - x} = \exp \left( x + \frac{x^2}{2} + \frac{x^3}{3} + \cdots \right)$.

If you’ve never seen the exponential function used like this, you might be wondering how it can be “explained.”

To explore this question, I’d like to give three other proofs of this result, the last of which will be “direct.” Along the way I’ll be attempting to describe some basic principles of species theory in an informal way. I’ll also give some applications, including to a Putnam problem.
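Before any of that, the identity is easy to check numerically. The following sketch (my own, not from the original post) compares the first ten Taylor coefficients of $\exp \left( x + \frac{x^2}{2} + \frac{x^3}{3} + \cdots \right)$ with those of $\frac{1}{1-x}$, using exact rational arithmetic and the standard recurrence $g' = f'g$ for $g = \exp(f)$:

```python
from fractions import Fraction

N = 10  # number of coefficients to compare

# Coefficients of 1/(1 - x) are all 1.
geom = [Fraction(1)] * N

# f(x) = x + x^2/2 + x^3/3 + ... = log(1/(1 - x))
f = [Fraction(0)] + [Fraction(1, j) for j in range(1, N)]

# g = exp(f) satisfies g' = f' g with g(0) = 1, which gives the
# coefficient recurrence n * g_n = sum_{j=1}^{n} j * f_j * g_{n-j}.
g = [Fraction(1)] + [Fraction(0)] * (N - 1)
for n in range(1, N):
    g[n] = sum(j * f[j] * g[n - j] for j in range(1, n + 1)) / n

print(g == geom)  # the two series agree coefficient by coefficient
```

Of course, agreement of finitely many coefficients proves nothing; the point of the proofs below is to explain *why* the exponential shows up.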

I ended the last post by asking whether the proof of baby Polya extends to the multi-parameter setting where we want to keep track of how many of each color we use. In fact, it does. First, we should specify what exactly we’re trying to compute. Recall the setup: we have $k$ colors (represented by variables $x_1, \dots, x_k$), and we have a set $S$ of slots, with $|S| = n$, acted on by a group $G$, where each slot will be assigned a color. Define $f(a_1, \dots, a_k)$ to be the number of orbits of functions $S \to \{ 1, \dots, k \}$ under the action of $G$ where color $i$ is used $a_i$ times. (Since the action of $G$ only permutes slots, it doesn’t change the multiset of colors used.) What we want to compute is the generating function

$\displaystyle F(x_1, \dots, x_k) = \sum_{a_1 + \cdots + a_k = n} f(a_1, \dots, a_k) \, x_1^{a_1} \cdots x_k^{a_k}$.

Note that setting $x_1 = \cdots = x_k = 1$ we recover the total number of orbits, which doesn’t contain information about particular color combinations. By the orbit-counting lemma, this is equivalent to computing

$\displaystyle F(x_1, \dots, x_k) = \frac{1}{|G|} \sum_{g \in G} |\text{Fix}(g)|$

where we must now count fixed points in a weighted manner, recording the multiset of colors in each fixed point. How do we go about doing this?
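Here is a brute-force sketch of my own (a hypothetical toy case, not from the original post): $k = 2$ colors and $n = 4$ slots acted on by cyclic rotations. It tabulates the orbit counts by color multiset two ways, by listing orbits directly and by averaging weighted fixed-point counts over the group, and checks that the two agree:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

n, k = 4, 2  # hypothetical toy case: 4 slots, 2 colors
G = [tuple((i + r) % n for i in range(n)) for r in range(n)]  # cyclic rotations

def act(g, f):
    """The coloring obtained by permuting the slots of f by g."""
    return tuple(f[g[i]] for i in range(n))

def weight(f):
    """(a_1, ..., a_k): how many slots receive each color."""
    counts = Counter(f)
    return tuple(counts.get(c, 0) for c in range(k))

colorings = list(product(range(k), repeat=n))

# Direct count: one tally per orbit, keyed by the orbit's color multiset.
direct = Counter()
seen = set()
for f in colorings:
    if f not in seen:
        orbit = {act(g, f) for g in G}
        seen |= orbit
        direct[weight(f)] += 1

# The same numbers via the weighted average of fixed-point counts:
# each coloring fixed by g contributes 1/|G|, tagged by its color multiset.
averaged = Counter()
for g in G:
    for f in colorings:
        if act(g, f) == f:
            averaged[weight(f)] += Fraction(1, len(G))

assert dict(direct) == dict(averaged)
print(dict(direct))  # e.g. two slots of each color gives 2 distinct necklaces
```

The six orbits here are the six binary necklaces of length 4, split up by how many beads of each color they use.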

We’ve got everything we need to prove the Polya enumeration theorem. To state the theorem, however, requires the language of generating functions, so I thought I’d take the time to establish some of the important ideas. It isn’t possible to do justice to the subject in one post, so I’ll start with some references.

Many people recommend Wilf’s generatingfunctionology, but the terminology is non-standard and somewhat problematic. Nevertheless, it has valuable insight and examples.

I cannot recommend Flajolet and Sedgewick’s Analytic Combinatorics highly enough. It is readable, includes a wide variety of examples as well as very general techniques, and places a great deal of emphasis on asymptotics, computation, and practical applications.

If you can do the usual computations but want to learn some theory, Bergeron, Labelle, and Leroux’s Combinatorial Species and Tree-like Structures is a fascinating introduction to the theory of species that requires fairly little background, although a fair amount of patience. It also contains my favorite proof of Cayley’s formula.

Doubilet, Rota, and Stanley’s On the idea of generating function is part of a fascinating program for understanding generating functions with posets as the fundamental concept. I may have more to say about this perspective once I learn more about it.

While it is by no means comprehensive, this post over at Topological Musings is a good introduction to the basic ideas of species theory.

And a shameless plug: the article I wrote for the Worldwide Online Olympiad Training program about generating functions is available here. I tried to include a wide variety of examples and exercises taken from the AMC exams while focusing on techniques appropriate for high-school problem solvers. There are at least a few minor errors, for which I apologize. You might also be interested in this previous post of mine.

In any case, this post will attempt to be a relatively self-contained discussion of the concepts necessary for understanding the PET.

I can’t resist mentioning a joke I heard from an episode of American Dad. Stan Smith has this to say about his training as a negotiator:

Hey, you’ve got one of the CIA’s top negotiators on your side. Y’know, I negotiated my way through negotiator training. I should’ve failed the hell out of that class. That’s how good I am.

It reminds me of some of the issues that cropped up in Scott Aaronson’s discussion of side-splitting proofs, especially Theorem 5. I can’t help but chuckle at the fact that Stan’s line gives both a lower and an upper bound on the quality of his negotiation skills!

The orbit-stabilizer theorem implies, very immediately, one of the most important counting results in group theory. The proof is easy enough to give in a paragraph now that we’ve set up the requisite machinery. Remember that we counted fixed points by looking at the size of the stabilizer subgroup $\text{Stab}(x)$. Let’s count them another way. Since a fixed point is really a pair $(g, x)$ such that $gx = x$, and we’ve been counting them indexed by $x$, let’s count them indexed by $g$. We use $\text{Fix}(g)$ to denote the set of fixed points of $g$. (Note that this is a function of the group action, not the group, but again we’re abusing notation.) Counting the total number of fixed points “vertically,” then “horizontally,” gives the following.

Proposition: $\displaystyle \sum_{x \in S} |\text{Stab}(x)| = \sum_{g \in G} |\text{Fix}(g)|$.

On the other hand, by the orbit-stabilizer theorem, it’s true for any orbit $O$ that $\sum_{x \in O} |\text{Stab}(x)| = |G|$, since the cosets of any stabilizer subgroup partition $G$. This immediately gives us the lemma formerly known as Burnside’s, or the Cauchy-Frobenius lemma, which we’ll give a neutral name.

Orbit-counting lemma: The number of orbits in a group action is given by $\displaystyle \frac{1}{|G|} \sum_{g \in G} |\text{Fix}(g)|$, i.e. the average number of fixed points.
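Both the proposition and the lemma are easy to verify by brute force on a small example. This sketch (a hypothetical example of my own, not from the original post) takes $S_3$ permuting three positions and acting on the eight 2-colorings of those positions:

```python
from itertools import permutations, product

# S_3 permutes three positions, acting on the set X of 2-colorings.
G = list(permutations(range(3)))          # all 6 permutations of {0, 1, 2}
X = list(product([0, 1], repeat=3))       # 8 colorings

def act(g, x):
    """The coloring obtained by permuting the positions of x by g."""
    return tuple(x[g[i]] for i in range(3))

# Count the pairs (g, x) with g.x = x "vertically" (stabilizer sizes)
# and "horizontally" (fixed-point counts); the Proposition says they agree.
by_x = sum(sum(1 for g in G if act(g, x) == x) for x in X)
by_g = sum(sum(1 for x in X if act(g, x) == x) for g in G)
assert by_x == by_g

# Orbit-counting lemma: the number of orbits is the average number
# of fixed points over the group.
orbits = {frozenset(act(g, x) for g in G) for x in X}
assert len(orbits) == by_g // len(G)
print(len(orbits))  # 4 orbits: colorings classified by how many 1s they use
```

Since $S_3$ can permute the positions arbitrarily, a coloring’s orbit is determined by how many positions get each color, which is why four orbits come out.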