Network Theory (Part 13)

Unlike some recent posts, this will be very short. I merely want to show you the quantum and stochastic versions of Noether’s theorem, side by side.

Having made my sacrificial offering to the math gods last time by explaining how everything generalizes when we replace our finite set of states by an infinite set or an even more general measure space, I’ll now relax and state Noether’s theorem only for a finite set. If you’re the sort of person who finds that unsatisfactory, you can do the generalization yourself.

Two versions of Noether’s theorem

Let me write the quantum and stochastic Noether’s theorem so they look almost the same:

Theorem. Let $X$ be a finite set. Suppose $H$ is a self-adjoint operator on $L^2(X)$, and let $O$ be an observable. Then

$$[O, H] = 0$$

if and only if for all states $\psi(t)$ obeying Schrödinger's equation

$$\frac{d}{dt} \psi(t) = -i H \psi(t)$$

the expected value of $O$ in the state $\psi(t)$ does not change with $t$.

Theorem. Let $X$ be a finite set. Suppose $H$ is an infinitesimal stochastic operator on $L^1(X)$, and let $O$ be an observable. Then

$$[O, H] = 0$$

if and only if for all states $\psi(t)$ obeying the master equation

$$\frac{d}{dt} \psi(t) = H \psi(t)$$

the expected values of $O$ and $O^2$ in the state $\psi(t)$ do not change with $t$.

This makes the big difference stick out like a sore thumb: in the quantum version we only need the expected value of $O$, while in the stochastic version we need the expected values of $O$ and $O^2$!
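
To see the quantum version in action numerically, here is a small sketch of my own (the specific matrices are illustrative choices, not from the post): along Schrödinger evolution, the expected value of an observable that commutes with the Hamiltonian stays constant, while the expected value of a non-commuting observable drifts.

```python
import numpy as np
from scipy.linalg import expm

# Toy example (my own choice): H is the Pauli X matrix.
H = np.array([[0.0, 1.0], [1.0, 0.0]])
O = 2.0 * H + 3.0 * np.eye(2)   # commutes with H, so [O, H] = 0
Z = np.diag([1.0, -1.0])        # does NOT commute with H

psi0 = np.array([1.0, 0.0], dtype=complex)  # normalized initial state

def expect(A, psi):
    """Expected value <psi, A psi> of a self-adjoint operator A."""
    return np.vdot(psi, A @ psi).real

ts = [0.0, 0.3, 0.7, 1.0]
states = [expm(-1j * H * t) @ psi0 for t in ts]  # psi(t) = exp(-iHt) psi(0)

O_vals = [expect(O, s) for s in states]
Z_vals = [expect(Z, s) for s in states]

assert max(O_vals) - min(O_vals) < 1e-12   # <O> conserved: [O, H] = 0
assert max(Z_vals) - min(Z_vals) > 0.1     # <Z> changes: [Z, H] != 0
```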

Brendan Fong proved the stochastic version of Noether’s theorem in Part 11. Now let’s do the quantum version.

Proof of the quantum version

My statement of the quantum version was silly in a couple of ways. First, I spoke of the Hilbert space $L^2(X)$ for a finite set $X$, but any finite-dimensional Hilbert space will do equally well. Second, I spoke of the “self-adjoint operator” $H$ and the “observable” $O$, but in quantum mechanics an observable is the same thing as a self-adjoint operator!

Why did I talk in such a silly way? Because I was attempting to emphasize the similarity between quantum mechanics and stochastic mechanics. But they’re somewhat different. For example, in stochastic mechanics we have two very different concepts: infinitesimal stochastic operators, which generate symmetries, and functions on our set which are observables. But in quantum mechanics something wonderful happens: self-adjoint operators both generate symmetries and are observables! So, my attempt was a bit strained.

Let me state and prove a less silly quantum version of Noether’s theorem, which implies the one above:

Theorem. Suppose $O$ and $H$ are self-adjoint operators on a finite-dimensional Hilbert space. Then

$$[O, H] = 0$$

if and only if for all states $\psi(t)$ obeying Schrödinger's equation

$$\frac{d}{dt} \psi(t) = -i H \psi(t)$$

the expected value of $O$ in the state $\psi(t)$ does not change with $t$:

$$\frac{d}{dt} \langle \psi(t), O \psi(t) \rangle = 0$$

Proof. The trick is to compute the time derivative I just wrote down. Using Schrödinger's equation, the product rule, and the fact that $H$ is self-adjoint, we get:

$$\begin{aligned} \frac{d}{dt} \langle \psi(t), O \psi(t) \rangle &= \langle -iH\psi(t), O\psi(t)\rangle + \langle \psi(t), O(-iH)\psi(t)\rangle \\ &= i\langle \psi(t), HO\psi(t)\rangle - i\langle \psi(t), OH\psi(t)\rangle \\ &= -i\langle \psi(t), [O,H]\,\psi(t)\rangle \end{aligned}$$

So, if $[O,H] = 0$, clearly the above time derivative vanishes. Conversely, if this time derivative vanishes for all states obeying Schrödinger's equation, we know

$$\langle \psi, [O,H]\,\psi \rangle = 0$$

for all states $\psi$, and thus all vectors in our Hilbert space. Does this imply $[O,H] = 0$? Yes, because $i$ times a commutator of self-adjoint operators is self-adjoint, and for any self-adjoint operator $A$ we have

$$\langle \psi, A\psi \rangle = 0 \textrm{ for all } \psi \quad \implies \quad A = 0$$

This is a well-known fact whose proof goes like this. Assume $\langle \psi, A\psi \rangle = 0$ for all $\psi$. Then to show $A = 0$, it is enough to show $\langle \phi, A\psi \rangle = 0$ for all $\phi$ and $\psi$. But we have a marvelous identity:

$$\langle \phi, A\psi \rangle = \frac{1}{4} \Big( \langle \phi+\psi, A(\phi+\psi)\rangle - \langle \phi-\psi, A(\phi-\psi)\rangle - i\langle \phi+i\psi, A(\phi+i\psi)\rangle + i\langle \phi-i\psi, A(\phi-i\psi)\rangle \Big)$$

and all four terms on the right vanish by our assumption. █

The marvelous identity up there is called the polarization identity. In plain English, it says: if you know the diagonal entries of a self-adjoint matrix in every basis, you can figure out all the entries of that matrix in every basis.
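
As a quick sanity check of my own, the polarization identity can be verified numerically. The sketch below uses the physics convention where the inner product is conjugate-linear in the first slot, which is exactly what `np.vdot` computes:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4

# A random self-adjoint (Hermitian) matrix A
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (B + B.conj().T) / 2

phi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)

def q(v):
    """Diagonal value <v, A v>; np.vdot conjugates its first argument."""
    return np.vdot(v, A @ v)

# Polarization: recover the off-diagonal <phi, A psi> from four diagonal values
lhs = np.vdot(phi, A @ psi)
rhs = (q(phi + psi) - q(phi - psi)
       - 1j * q(phi + 1j * psi) + 1j * q(phi - 1j * psi)) / 4

assert abs(lhs - rhs) < 1e-10
```

In particular, if every diagonal value $\langle \psi, A\psi \rangle$ vanishes, the right side forces every $\langle \phi, A\psi \rangle$ to vanish too, which is how it gets used in the proof above.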

Why is it called the ‘polarization identity’? I think because it shows up in optics, in the study of polarized light.

Comparison

In both the quantum and stochastic cases, the time derivative of the expected value of an observable $O$ is expressed in terms of its commutator with the Hamiltonian. In the quantum case we have

$$\frac{d}{dt} \langle \psi(t), O \psi(t) \rangle = -i \langle \psi(t), [O,H]\,\psi(t) \rangle$$

and for the right side to always vanish, we need $[O,H] = 0$, thanks to the polarization identity. In the stochastic case, a perfectly analogous equation holds:

$$\frac{d}{dt} \int O \psi(t) = \int [O,H]\,\psi(t)$$

but now the right side can always vanish even without $[O,H] = 0$. We saw a counterexample in Part 11. There is nothing like the polarization identity to save us! To get $[O,H] = 0$ we need a supplementary hypothesis, for example the vanishing of

$$\int [O^2,H]\,\psi(t)$$
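
To make the stochastic failure concrete, here is a small example of my own (in the spirit of, though not necessarily identical to, the counterexample from Part 11): a three-state system where the expected value of $O$ is conserved even though $[O,H] \neq 0$, while the expected value of $O^2$ is not conserved.

```python
import numpy as np
from scipy.linalg import expm

# Three states {0, 1, 2}; from state 1 we jump to 0 or 2 at unit rate,
# and states 0 and 2 are absorbing. Each column sums to zero, so H is
# infinitesimal stochastic. (My own illustrative example.)
H = np.array([[0.0,  1.0, 0.0],
              [0.0, -2.0, 0.0],
              [0.0,  1.0, 0.0]])
o = np.array([0.0, 1.0, 2.0])   # the observable O as a function on states
O = np.diag(o)

assert not np.allclose(O @ H, H @ O)   # [O, H] != 0

psi0 = np.array([0.2, 0.5, 0.3])       # a probability distribution

def moments(t):
    """Return (<O>, <O^2>) in the state psi(t) = exp(tH) psi(0)."""
    psi = expm(t * H) @ psi0
    return o @ psi, (o**2) @ psi

e0, s0 = moments(0.0)
e1, s1 = moments(1.0)

assert abs(e1 - e0) < 1e-10   # <O> is conserved anyway...
assert abs(s1 - s0) > 0.1     # ...but <O^2> is not
```

The jump from state 1 is symmetric, so the mean of $O$ never moves, but probability spreads out to the extremes, so the second moment grows: exactly the loophole that the extra hypothesis on $O^2$ is there to close.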

Okay! Starting next time we’ll change gears and look at some more examples of stochastic Petri nets and Markov processes, including some from chemistry. After some more of that, I’ll move on to networks of other sorts. There’s a really big picture here, and I’m afraid I’ve been getting caught up in the details of a tiny corner.


6 Responses to Network Theory (Part 13)

Why is it called the ‘polarization identity’? I think because it shows up in optics, in the study of polarized light.

I always assumed it came from geometry, namely the concept of reciprocal polar: something like, if (Ax, x) = 1 is the equation of a quadric, the polar form (Ax, y) gives the polar of the point x as the hyperplane {y : (Ax, y) = 1}, etc. Could you give more hints about the “optics” thing? (I can well imagine a connection, but what about the historical order? Studies in optics by, say, Fresnel, etc., leading to the concept of polarity in geometry, or the other way round?)

Abstract: A mathematical identity allows us to express the product of two complex light fields as a sum of the intensities of four different superpositions of the two. One common holographic scheme and one proposed by Gabor and Goss turn out to be the experimental realizations of the first term and the sum of the first two terms respectively of this identity. One scheme that we propose, using the identity, bears a strong resemblance to a proposal of Wiener for producing an operational equivalent of a nonlinear transducer.

Of course this doesn’t prove the name polarization identity came from the study of polarized light, but it fits the story I once heard.

(I haven’t bothered to read the article because it’s not freely available, so I’d need to do a little work to access it through my university library. But it could give extra clues.)

Thanks for the very clear current series of posts.

You’re welcome! I’m having fun writing them, but it’s always more fun when I hear that people are still reading them.

This is true if $O$ and $H$ are self-adjoint, but not necessarily otherwise. The argument goes like this:

If this time derivative vanishes for all states $\psi(t)$ obeying Schrödinger's equation, we know

$$\langle \psi, [O,H]\,\psi \rangle = 0$$

for all states $\psi$, and thus all vectors in our Hilbert space. Does this imply $[O,H] = 0$? Yes, if $O$ and $H$ are self-adjoint, because $i$ times a commutator of self-adjoint operators is self-adjoint, and for any self-adjoint operator $A$ we have

$$\langle \psi, A\psi \rangle = 0 \textrm{ for all } \psi \quad \implies \quad A = 0$$
