In the definition of prequantization on a symplectic manifold $(M,\omega)$, we represent a function $f\in C^{\infty}(M)$ (with its Lie algebra structure given by the Poisson bracket) by an operator $\hat{f}$ on the Hilbert space $L^2(M,L,\mu)$ associated to $M$. But we always assume that the Hilbert space should not be too big. Why do we assume this? Also, why do we consider only irreducible representations in the definition of prequantization?

4 Answers

OK, so here are just a few thoughts on this large topic of quantization. First of all, the question of irreducibility can equally well be asked for deformation quantization (as mentioned by other answers) and thus does not refer to any particular feature/problem of geometric quantization. The physical idea behind this requirement, that the observables should act irreducibly on a (pre-)Hilbert space, is, I guess, due to the fact that there are no nontrivial observables which can be measured in a way compatible with all others. A reducible representation would have some non-trivial commutant (that is the most reasonable definition of "reducible" among many, partially inequivalent ones). And the elements in the commutant can be measured simultaneously with all other observables (as they commute with all of them).

It is simply a fact of nature that no such quantities are known.

Or, perhaps one should be more specific here: suppose that you have some non-trivial commutant. Then start measuring these particular observables, which leads, in your physically realized state, to certain values for them. As time evolution is believed to originate from an observable (the Hamiltonian), no physically realizable time evolution can change the values of these special observables in the commutant. Thus, starting from one physically realizable state, we can never reach a state in which the elements of the commutant have different values, whatever we do. They therefore behave as constants, and we can restrict our considerations to the subspace generated from the starting state by the other observables. On this subspace, the observables then act in an irreducible way (by construction).

In other words: the elements in the commutant play the role of super-selection rules. Once physics is found to be in one super-selection sector (the above subspace), we know that it will always stay there.
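To make the commutant argument concrete, here is a minimal linear-algebra sketch (my addition, not part of the original answer). If the representation splits as a direct sum, the projection onto one summand commutes with every observable:

$$\pi = \pi_1 \oplus \pi_2 \ \text{ on } \ \mathcal{H} = \mathcal{H}_1 \oplus \mathcal{H}_2, \qquad P = \begin{pmatrix} \mathbb{1}_{\mathcal{H}_1} & 0 \\ 0 & 0 \end{pmatrix}, \qquad [P,\, \pi(f)] = 0 \ \text{ for all } f.$$

Measuring $P$ asks "which sector are we in?", and since $P$ also commutes with the Hamiltonian, no time evolution $e^{-it\hat{H}/\hbar}$ can change the answer.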

As a last remark: one of the big deficits of geometric quantization is that it tries to construct one irreducible representation from the beginning. This way, one will never see the super-selection structure of the theory and might miss some interesting, perhaps even the physically relevant, super-selection sector. In deformation quantization this cannot happen, as here you first construct the algebra of observables independently of any Hilbert space representation. Only in a second step do you look for representations. But now you are able to look for all of them, thus finding the super-selection structure. Surprisingly enough, it turns out that some established physical effects can be recovered this way: the Aharonov–Bohm effect is my personal favorite. It arises from different representations of the same observable algebra; each representation is irreducible, and two of them are equivalent iff the two vector potentials differ by a vector potential satisfying a certain integrality condition.
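For orientation (my gloss; the precise statement is in the deformation quantization literature, so take the normalization with a grain of salt): for a particle of charge $e$ on the plane with a solenoid removed, the standard flux-quantization form of such an integrality condition reads

$$\frac{e}{\hbar} \oint_{\gamma} \left( A_1 - A_2 \right) \cdot d\ell \ \in\ 2\pi\mathbb{Z}$$

for a loop $\gamma$ around the solenoid, in which case the two representations are unitarily equivalent and no Aharonov–Bohm phase distinguishes the two potentials.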

Well, the question is about motivations, and different people may give different ones, so here is what seems reasonable to me.

First of all: what is quantization? Imho, texts on geometric quantization mainly follow an old-style way of thinking which is not the best one and, moreover, is misleading. For me it is much more transparent to think from the deformation quantization perspective.

So, quantization: you have a Poisson (symplectic) manifold and the algebra of functions on it. You know that the Poisson bracket is the first-order term of a non-commutative product, so the first goal of deformation quantization is to construct this non-commutative algebra from the Poisson algebra.
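Schematically (my addition, following the standard formal-deformation setup): one looks for a star product

$$f \star g = fg + \frac{i\hbar}{2}\{f,g\} + \sum_{k \geq 2} \hbar^k\, C_k(f,g),$$

with bidifferential operators $C_k$, so that the star commutator recovers the Poisson bracket to first order:

$$[f,g]_\star := f \star g - g \star f = i\hbar\,\{f,g\} + O(\hbar^2).$$

On $\mathbb{R}^{2n}$ the Moyal–Weyl product is the basic example; Kontsevich's theorem guarantees existence on any Poisson manifold.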

So I would not say "we always assume that the Hilbert space should not be too big"; rather, I would say that we want to solve a very concrete problem: to construct this unique irreducible representation. Geometric quantization is a recipe for doing it, which sometimes works and many times has problems, some of which can probably be solved by somebody in the future, but maybe some of them are unsolvable in this framework.

In geometric quantization texts, it seems to me, the exposition does not follow this route.

Moreover, the main mistake which some geometric quantizers make is that they want to take the algebra of functions with the Poisson bracket as a Lie algebra and look for its irreps. This is absolutely misleading. It is not natural, and it is not what people do in physics, even in the simplest example of $p,q$ and canonical quantization. It leads to artificial no-gos, as the example below shows.
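To spell out that simplest example (my gloss): canonical quantization does not represent all of $C^\infty(\mathbb{R}^{2n})$ as a Lie algebra; it represents only the Heisenberg subalgebra spanned by $1, q^i, p_i$,

$$\hat q^i \psi = q^i \psi, \qquad \hat p_i \psi = -i\hbar\, \frac{\partial \psi}{\partial q^i}, \qquad [\hat q^i, \hat p_j] = i\hbar\, \delta^i_j,$$

acting irreducibly on $L^2(\mathbb{R}^n)$ (uniquely so, by Stone–von Neumann). The Groenewold–van Hove theorem then says this assignment cannot be extended to all polynomials while keeping $\{\,\cdot\,,\,\cdot\,\} \mapsto \frac{1}{i\hbar}[\,\cdot\,,\,\cdot\,]$ exact, which is precisely the artificial no-go one runs into by insisting on the full Poisson algebra.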

Could you recommend some literature on the relationship between geometric and deformation quantization? I think both theories have a different ansatz to circumvent the Groenewold–van Hove no-go theorem about a proper quantization of all classical observables. Whereas geometric quantization tries to quantize only a subset of observables, deformation quantization relaxes the commutator correspondence to $\widehat{\{f, g\}} = \frac{i}{\hbar} [\hat f, \hat g] + O(\hbar^2)$.
– Tobias Diez, Jul 7 '13 at 23:06

@Tobias Sorry, I do not know a reference. I do not think that it is fair to say that "both theories have a different ansatz to circumvent the Groenewold–van Hove no-go". Deformation quantization is a natural and mature concept, but the requirement that $\{\,,\,\}$ should go to $[\,,\,]$ exactly for all/part of the observables (which is sometimes taken as part of g.q.) is rather artificial.
– Alexander Chervov, Jul 8 '13 at 6:48

@Urs My impression is that the "push-forward story" is more related to the "metaplectic correction", i.e. the Hilbert space is not "half of the functions" but "half-forms". This part of the research does not consider the question of constructing the representation on the Hilbert space (?).
– Alexander Chervov, Jul 8 '13 at 7:19

In fact, on the contrary, this is best known among representation theorists, who use it to study representations on their Hilbert spaces. Given a Hamiltonian G-action, the push-forward is done in G-equivariant K-theory and hence lands in the representation ring. Notably, Mathai–Zhang (following Landsman–Hochs) proved one of the deepest theorems in quantization in this context in full generality: quantization commutes with G-reduction (ncatlab.org/nlab/show/…)
– Urs Schreiber, Jul 8 '13 at 11:33

And from this perspective, the fact that geometric quantization "cuts down" the space of sections is the hallmark of index theory: from a space of sections of some bundle we pass to just the graded kernel of some graded differential operator on these.
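In formulas (my paraphrase of this viewpoint, in the usual $\mathrm{Spin}^c$/Dolbeault setup): for compact $M$ one takes a suitable elliptic operator $D_L$ twisted by the prequantum line bundle $L$ and defines the quantization as its index,

$$Q(M) \;=\; \operatorname{ind}(D_L) \;=\; \ker D_L^{+} \,\ominus\, \ker D_L^{-},$$

a finite-dimensional virtual vector space rather than the full space of sections; with a Hamiltonian $G$-action this index lands in the representation ring $R(G)$, as in the comment above.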

The requirement of an irreducible representation (and thus of a 'not too big' Hilbert space) can be motivated mainly by two points:

One natural condition on a quantization procedure is that it should respect the symmetries of the theory, i.e. symmetries of the classical description should map to quantum symmetries. This in particular requires that the map $f \mapsto \hat f$ should be an irreducible representation of the symmetry generators.

In examples one recognizes that the Hilbert space constructed by the prequantization procedure (which by definition does not result in an irreducible representation) is bigger than what is usually accepted as 'the' quantum theory. For example, take the phase space $M=\mathbb{R}^{2n}$. Then prequantization results in the Hilbert space $L^2(\mathbb{R}^{2n})$, which is 'twice as big' as the usual one. This is because one constructs the prequantum bundle over the whole phase space, so the wave functions depend on the $q^i$ and also on the $p_i$. (Note also that $\hat p_i = -i\hbar\, \partial_{q^i}$ and $\hat q^i = q^i + i\hbar\, \partial_{p_i}$ do not correspond to the usual expressions except when the wavefunction does not depend on the $p_i$.)
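For completeness, here is a quick sketch of where those prequantum operators come from (standard prequantization formula; the sign conventions are mine). With $\theta = p_i\, dq^i$, $\omega = d\theta$, $\iota_{X_f}\omega = -df$, and the prequantum operator

$$\hat f = -i\hbar\, \nabla_{X_f} + f = -i\hbar\, X_f - \theta(X_f) + f,$$

one computes, using $X_{p_i} = \partial_{q^i}$ and $X_{q^i} = -\partial_{p_i}$,

$$\hat p_i = -i\hbar\, \partial_{q^i} - p_i + p_i = -i\hbar\, \partial_{q^i}, \qquad \hat q^i = i\hbar\, \partial_{p_i} + q^i.$$

Restricting to wavefunctions independent of the $p_i$ (the vertical polarization) kills the extra term in $\hat q^i$ and recovers the usual Schrödinger representation on $L^2(\mathbb{R}^n)$.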