Spherical harmonics are the angular part of the solution to Laplace’s equation in spherical coordinates: they have the property that

$\nabla^2_{S^2} Y_{lm} = -l(l+1)\, Y_{lm}.$

They form an orthonormal basis of the Hilbert space $L^2(S^2)$ and have many nice properties. In particular, the space of spherical harmonics of each degree $l$ is an irreducible representation of the rotation group SO(3).

Vector spherical harmonics are an extension of the concept for use with vector fields. We define three vector spherical harmonics,

$\mathbf{Y}_{lm} = Y_{lm}\,\hat{\mathbf{r}}, \qquad \boldsymbol{\Psi}_{lm} = r\,\nabla Y_{lm}, \qquad \boldsymbol{\Phi}_{lm} = \mathbf{r} \times \nabla Y_{lm}.$

These are orthogonal just like the usual spherical harmonics, and they allow every vector field to be expanded in vector spherical harmonics.

They also turn out to be useful in the study of magnetostatics (that is, the study of static magnetic fields), as in this paper by Barrera, Estévez, and Giraldo.

The gist: expanding functions on the sphere in spherical harmonics makes it possible to replace the Poisson equation with an ordinary differential equation. This means that, given a charge distribution of bounded extent, we can determine the potential outside the charge distribution. The key is canceling out the angular component to simplify the partial differential equation.
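Concretely, here is a sketch with a standard normalization ($f_{lm}$ and $\rho_{lm}$ are my labels for the radial coefficients of the potential and the charge density): writing $\Phi = \sum_{l,m} f_{lm}(r)\, Y_{lm}$, the eigenvalue property of $Y_{lm}$ turns Poisson's equation $\nabla^2 \Phi = -\rho/\epsilon_0$ into one radial ODE per mode:

```latex
\frac{1}{r^{2}} \frac{d}{dr}\!\left( r^{2} \frac{d f_{lm}}{dr} \right)
\;-\; \frac{l(l+1)}{r^{2}}\, f_{lm}
\;=\; -\,\frac{\rho_{lm}(r)}{\epsilon_0}
```

The angular dependence has dropped out entirely; each mode can be solved as an ordinary differential equation in $r$.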

Extending this notion to vector fields allows us to do the same thing with differential equations involving the gradient (and Laplacian, curl, etc.), expanding in vector spherical harmonics and letting the angular components cancel. This will allow us to calculate magnetic multipoles analogous to electrical multipoles. When there is no current, the magnetic induction field is the (negative) gradient of some function, the magnetic scalar potential.
The potential can be expanded in spherical harmonics; the coefficients are called the magnetostatic multipole moments.
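In formulas (a sketch with a common normalization; the symbols $q_{lm}$ for the moments are my labels), outside the sources we have

```latex
\mathbf{B} = -\nabla \Phi_M, \qquad
\Phi_M(r,\theta,\varphi) \;=\; \sum_{l=0}^{\infty} \sum_{m=-l}^{l} \frac{q_{lm}}{r^{\,l+1}}\, Y_{lm}(\theta,\varphi)
```

and the $q_{lm}$ play the same role for the magnetic field that the electric multipole moments play for the electrostatic potential.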

I spent a 12-hour train ride through all the twilight mountains of upstate New York, reading William Vollmann’s wonderful book about riding freight trains. It is now time for me to read all his books — I believe the man is a kindred spirit.

Every time I surrender, even necessarily, to authority which disregardingly or contemptuously violates me, so I violate myself. Every time I break an unnecessary law, doing so for my own joy and to the detriment of no other human being, so I regain myself, and become strong in the parts of me that the security man can never see.

Even better — it seems Will Wilkinson agrees. And pretty much any friend of Will Wilkinson is a friend of mine.

(N.B. I am fighting a long-standing bad habit here — it is my aim not to mention politics even in the off-topic posts on this blog. Talking politics for me is the way tequila is for other people: it always seems like a good idea at the time and a bad idea in the morning. But this is literature, not politics, and therefore kosher — or so I tell myself.)

I mention this Discover article because it’s really an applied math story being popularized as a biology story. Noam Sobel and his team from the Weizmann Institute made a database of thousands of smell-producing molecules, along with their properties (like size, or how tightly the molecules pack). They then ran Principal Components Analysis on the data to find the critical properties, and observed that a single dimension or score encapsulated most of the variation — and could even predict which smells humans would find more or less appealing.

What I want to harp on is the use of PCA. To those of us who actually do math and science full time, it’s a pretty familiar, standard technique. It’s over a hundred years old. It’s very simple. It doesn’t even work in the difficult cases (a lot of modern random matrix theory is about identifying when PCA doesn’t work). But I don’t think Principal Components Analysis (or multidimensional eigenvector analysis generally) gets the attention it deserves in the popular imagination. Here, given a glut of data, is a tool that can tell you the most useful ways to measure it. We’re used to science giving us measurements; but this is science telling us what measurements are important. Sobel’s research isn’t really a biological discovery at all; it’s a data analysis discovery.
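For the curious, here is a toy sketch of what that looks like in practice. The synthetic data stands in for the molecule database; none of the numbers come from Sobel's study, and a single hidden factor is planted deliberately so PCA has something to find.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical stand-in for the database: 200 "molecules", 5 "properties",
# where one latent factor drives most of the variation
latent = rng.normal(size=(200, 1))
loadings = np.array([[2.0, 1.5, -1.0, 0.5, 0.25]])
X = latent @ loadings + 0.1 * rng.normal(size=(200, 5))

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component

# the first principal component captures nearly all the variation,
# and its loadings (Vt[0]) tell us which properties matter most
print(explained[0] > 0.9)
```

The point is exactly the one in the article: the analysis doesn't just summarize the measurements, it ranks which combinations of measurements carry the signal.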

And when you’re used to thinking in the language of dimensionality reduction, suddenly you see when it’s needed but missing. A gap, like “Oh, this really should be a dimensionality reduction problem.” For instance, I was reading some of Simon Baron-Cohen’s research about autism; for the unfamiliar, his theory is that people lie on a spectrum between “systematizing” and “empathizing” styles of thought, with autistics on the “systematizing” extreme, much better at logical puzzles than interpersonal communication. I was reading his papers and I immediately thought “Something’s missing here.” If I wanted to know whether a single axis distinguished autistics from non-autistics, well, I’d look at all kinds of neurological and psychological properties, get a big cloud of data from autistics and non-autistics, and see if most of the variation was due to this empathizing/systematizing business.

Now, I’m not remotely a psychologist, but it looks like Baron-Cohen doesn’t do that. He gives out a survey, observes that autistics are more systematizing and less empathizing than non-autistics, and essentially says “Ta-Da!” And I go, “Huh?” How on earth does he know that this is the main difference between autistics and non-autistics? Where are all the correlations and variances? Now, maybe that’s standard practice in psychology. I’ve read a bit more in the social sciences, and it’s standard practice there. But to me, this kind of research is just crying out for a little data analysis. I think there are whole areas of research where people aren’t yet thinking in PCA. Maybe that should change.

More notes from Shamgar Gurevich’s lectures bringing representation theory to the applied-math populace.

We start with an example. Let $G = S_4$. Denote the irreducible representations of $G$ by $\rho_1, \dots, \rho_5$. There are 5 of these, one for each conjugacy class (in the case of the symmetric group, a conjugacy class is a partition).

Constructions: $G$ acts on the complex-valued functions on the vertices of the tetrahedron, which is $\mathbb{C}^4 = \mathbb{C} \oplus W$, where $W$ is the three-dimensional hyperplane carrying the tetrahedron. The representation is $(\pi(g)f)(x) = f(g^{-1}x)$.
Why is $W$ irreducible?

General tool: the intertwining number, $\langle \pi_1, \pi_2 \rangle = \dim \operatorname{Hom}_G(\pi_1, \pi_2)$.

Proposition: $\pi$ is irreducible iff $\langle \pi, \pi \rangle = 1$.
Proof: write $\pi = \bigoplus_i m_i \rho_i$ with the $\rho_i$ irreducible; then $\langle \pi, \pi \rangle = \sum_i m_i^2$.
Since these are integers, the equation $\sum_i m_i^2 = 1$ is true iff there is only one irreducible subrepresentation.

Another 3-dimensional representation: $W \otimes \mathrm{sign}$.
Clearly this is also irreducible.
How do we know it’s not equivalent to $W$?

Well, take three functions defined on the triangle (all entries have to add to 1): $f_1$ is 0 on both vertices $a$ and $b$, $f_2$ is 1 on both, and $f_3$ is 1 on $a$ and $-1$ on $b$.
These form a basis. The trace of the transposition $(a\,b)$ in this basis is $1 + 1 + (-1) = 1$, while in the sign-twisted representation the trace is $\mathrm{sign}(a\,b) \cdot 1 = -1$. So the characters differ, and the representations are not equivalent.
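The same trace computation can be done by brute force. This is a toy sketch: vertex labels 0–3 stand in for the tetrahedron's vertices, and the three-dimensional representation is realized as the complement of the trivial summand in $\mathbb{C}^4$, so its character is (number of fixed points) minus 1.

```python
import numpy as np

def perm_matrix(p):
    """Matrix of the permutation p acting on functions on the vertices."""
    n = len(p)
    M = np.zeros((n, n))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

def sign(p):
    """Sign of the permutation p, by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

t = (1, 0, 2, 3)                     # transposition swapping vertices a=0, b=1
chi_perm = np.trace(perm_matrix(t))  # character of C^4: counts fixed points
chi_W = chi_perm - 1                 # subtract the trivial summand
chi_W_sign = sign(t) * chi_W         # twisting by sign flips the trace
print(chi_W, chi_W_sign)
```

The two characters disagree on transpositions, which is exactly the inequivalence argument above.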

(To be continued — I’m posting from Montreal, so I don’t have long blocks of computer time.)

I’m all done!
Thesis defense seemed to go smoothly — I’m very happy with it. And this is the end of all my work in college! Now I pack and clean my room, because I’m going to Montreal tomorrow with some friends. It is literally the cheapest possible vacation, and it’s going to be awesome.
We start with a ten-hour train ride at the crack of dawn, and I have the perfect book for it: William Vollmann’s Riding Toward Everywhere, a book about … riding the rails.

Montreal! There will be sightseeing, and hostel adventures, and poutine. Then I go back to school to see Reunions and graduate; I’ll sing Old Nassau one more time and get all misty, and then it’s back home, where I will do nothing but run, lift, and read math. It’s going to be a good summer.

Here’s a new installment of lecture notes from Shamgar Gurevich’s seminar on representation theory. (This was a small group of applied math people, professors and grad students and me, looking at applications of representation theory to the problem of cryo-electron microscopy. It’s very pretty stuff and well taught.)

The Discrete Fourier Transform (DFT) is defined as

$\hat{f}(w) = \frac{1}{\sqrt{N}} \sum_{t=0}^{N-1} f(t)\, e^{-2\pi i w t / N}.$
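As a sanity check, the unitary DFT matrix can be built directly and compared with numpy's FFT (a toy sketch; $N = 8$ is arbitrary, and the $1/\sqrt{N}$ normalization is chosen to make the matrix unitary):

```python
import numpy as np

N = 8
n = np.arange(N)
# unitary DFT matrix: F[w, t] = e^{-2 pi i w t / N} / sqrt(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

f = np.random.default_rng(1).normal(size=N)
assert np.allclose(F @ f, np.fft.fft(f, norm="ortho"))  # same transform
assert np.allclose(F.conj().T @ F, np.eye(N))           # F is unitary
# F^4 = I, so every eigenvalue of F lies in {1, i, -1, -i}
assert np.allclose(np.linalg.matrix_power(F, 4), np.eye(N))
```

Since $F^4 = I$, the eigenvalues are fourth roots of unity, but the eigenspaces are large and have no preferred basis; that is what makes finding a *natural* basis of eigenvectors an interesting problem.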
We want to find a natural basis of eigenvectors, and explain how to compute the DFT fast. Related is the following theorem:

Fix an additive character $\psi$. There is a unique (up to isomorphism) irreducible representation $\pi$ such that the central character of $\pi$ is $\psi$.

Let’s show first:

In particular,

To show this, let us answer the following question.
How do we compute $\langle \pi_1, \pi_2 \rangle$?

Define the character of a representation $\pi$ by $\chi_\pi(g) = \operatorname{tr} \pi(g)$.

Properties of intertwining numbers $\langle \cdot\,, \cdot \rangle$:

1. $\langle \pi_1 \oplus \pi_2, \pi \rangle = \langle \pi_1, \pi \rangle + \langle \pi_2, \pi \rangle$.

2. $\langle \pi_1, \pi_2 \rangle = \frac{1}{|G|} \sum_{g \in G} \chi_{\pi_1}(g)\, \overline{\chi_{\pi_2}(g)}$.

3. $\langle \rho_i, \rho_j \rangle = \delta_{ij}$ for irreducible representations $\rho_i, \rho_j$.

1 is clear; 3 is proven by Schur’s Lemma.
To prove 2, observe that averaging over $G$ projects $\operatorname{Hom}(\pi_1, \pi_2)$ onto the equivariant maps $\operatorname{Hom}_G(\pi_1, \pi_2)$, and the trace of this projection is exactly the sum on the right hand side.

The result is, if $\pi = \bigoplus_i m_i \rho_i$ and $\pi' = \bigoplus_i n_i \rho_i$,
then $\langle \pi, \pi' \rangle = \sum_i m_i n_i$.
The application to the first theorem is that if and given a representation then

Proof:

We are now ready to prove the next theorem: the number of irreducible representations of G equals the number of conjugacy classes of G. The idea is an isomorphism between the “geometric side” and the “spectral side.”
Example:

This is an isomorphism:
Assume $\sum_i a_i \chi_{\rho_i} = 0$. Then pairing with each $\chi_{\rho_j}$ gives $a_j = 0$. But then the irreducible characters are linearly independent.
The left hand side consists of the functions constant on conjugacy classes of G, while the right hand side is spanned by the irreducible characters.
In particular, the number of conjugacy classes is the number of irreducible representations.
And G is abelian iff every irreducible representation has dimension 1.
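The count is easy to check by brute force for $S_4$ (a toy sketch, matching the five irreducible representations from the earlier example):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

G = list(permutations(range(4)))
# each conjugacy class {h g h^-1 : h in G}, collected as a frozenset
classes = {frozenset(compose(compose(h, g), inverse(h)) for h in G) for g in G}
print(len(classes))
```

This prints 5, the number of partitions of 4, agreeing with the count of irreducible representations.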

Apologies for light blogging — I’ve been in finals and thesis mode. (I defend today!) I have some nice representation theory notes lined up, I promise. Meanwhile I note that two very nice math people also have brand new blogs. Brooke and Jimmy went to school at UChicago, which makes them cool. Jimmy writes a lot of pedagogical stuff, which is pretty damn important — imagine if everyone who was supposed to learn multivariable calculus actually learned it and liked it! Brooke seems to have the same quals prep plan as me (at least for the algebra/topology stuff.)