Rule 42: Any question that gets an animated gif for an answer should be closed. Seriously, "big-list" questions need a lot more justification and background to be good questions. Why are you interested in this? How will it help you with your mathematical research? What problem in your research does this connect with?
– Loop Space, Dec 8 '10 at 21:02


@Qiaochu Yuan: Both questions are concerned with the relation between mathematics and physics, but they are not the same. Applying intuition from physics to math problems is definitely distinct from my question.
– Cristi Stoica, Dec 8 '10 at 21:16


How about: the use of the renormalization 'group' (including perturbative renormalization-group calculations using the epsilon expansion), in particular in the study of self-avoiding walks. It seems that a great deal is "known" by physicists which is not yet proven by mathematicians. I would love to see a complete discussion of this from both sides. What is the mathematical perspective on these methods? I'm not qualified to make an answer out of this.
– Gregory Putzel, Dec 9 '10 at 3:53
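For the self-avoiding-walk side of this, exact enumeration is the standard rigorous baseline against which RG predictions (e.g. for the connective constant) are checked. A minimal Python sketch, with the cutoff $n = 7$ chosen only to keep the runtime trivial; the counts agree with the known sequence:

```python
# Exhaustive count of self-avoiding walks (SAWs) on the square lattice.
# The connective constant mu = lim c_n^(1/n) (~2.638 for Z^2) is one of the
# quantities physicists estimate with renormalization-group methods; exact
# enumeration for small n is the rigorous baseline.

def count_saws(n):
    """Number of n-step self-avoiding walks on Z^2 starting at the origin."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(pos, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in steps:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited:          # self-avoidance constraint
                visited.add(nxt)
                total += extend(nxt, visited, remaining - 1)
                visited.remove(nxt)
        return total

    return extend((0, 0), {(0, 0)}, n)

counts = [count_saws(n) for n in range(1, 8)]
print(counts)      # [4, 12, 36, 100, 284, 780, 2172]
ratios = [counts[i + 1] / counts[i] for i in range(len(counts) - 1)]
print(ratios)      # successive ratios c_{n+1}/c_n, a rough probe of mu
```

The successive ratios only converge slowly; serious rigorous estimates of $\mu$ use much longer enumerations and subadditivity arguments, which this sketch does not attempt.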


@Andrew Stacey: Well, I want to gather "bug reports" from mathematical physics, then "fix" them. I can't do this alone, but fortunately I am not the only one interested in this.
– Cristi Stoica, Dec 9 '10 at 11:27


Yes, this is a very appropriate question for MO.
– Dr Shello, Dec 13 '10 at 4:50

10 Answers

Perhaps it would not be out of place to quote Miles Reid's Bourbaki seminar on the
McKay correspondence here:

"The physicists want to do path integrals, that is, they want to integrate
some "Action Man functional" over the space of all paths or loops
$ \gamma : [0; 1] \rightarrow Y $. This
impossibly large integral is one of the major schisms between math and fizz. The physicists
learn a number of computations in finite terms that approximate their path integrals, and
when sufficiently skilled and imaginative, can use these to derive marvellous consequences;
whereas the mathematicians give up on making sense of the space of paths, and not
infrequently derive satisfaction or a misplaced sense of superiority from pointing out that
the physicists' calculations can equally well be used (or abused!) to prove 0 = 1. Maybe
it's time some of us also evolved some skill and imagination. The motivic integration
treated in the next section builds a miniature model of the physicists' path integral,..."

Here are a few examples of non-rigor as applied to evidence for dualities:

Heterotic-Type II. In earlier times, the best evidence for heterotic-Type-II duality was (a) counting the number of supersymmetries of the theory and (b) comparing the moduli spaces.

AdS-CFT. For AdS-CFT the earliest and best comparisons were counting the so-called anomalous dimensions of various operators. To date, I think the tests are far from rigorized (and yes, this would be a great problem to make mathematically precise).

Mirror Symmetry, early days. Recall that mirror symmetry in CY moduli space came from constructing a chart of the Euler characteristics of CY complete intersections and noticing the symmetry of the chart about zero. Other non-rigorous arguments involve counting the dimensions (just the dimensions) of the moduli of purportedly mirror objects. Then there's the old compute-on-flat-space-and-let-supersymmetry-take-care-of-the-rest trick.

Low energy effective field theory. The "fact" that string theory reduces to an oft-identifiable QFT in a low energy limit is a huge source of argumentation/inspiration in string theory. Accounting for (effective) black holes helped lead to M-theory in one context, and to the microscopic description of black-hole entropy in another. One can also argue for dualities by identifying equivalent field contents in two different models.
This brings up another point.

Invariance of BPS states under perturbation. It is great to take a quantity that does not vary and evaluate it in a limit where it is easy to compute. This argument appears again and again in physics -- and also in math, of course (e.g. in the heat-kernel proof of the index theorem). BPS numbers are just that. (Of course, they do vary, and the continuity of the relevant physical parameters [numbers are not necessarily physical quantities] is what underlies interesting explanations of wall-crossing.)

I'm probably including too many that don't fit and excluding a lot that do. Very non-rigorous of me!

Here's an example of a construction that completely lacks modern rigor and yet has been incredibly successful as a theory of the physical world. I think this is the classic prototype from modern physics, and it's a remarkable challenge to the thesis that mathematics and its applications to physics operate on identical postulates. In all fairness, though, there is an ongoing attempt to put it on a rigorous basis.
– The Mathemagician, Dec 8 '10 at 21:09

@Laie: Thanks. I think this is very central, in the sense that the renormalization/regularization method justifies the Standard Model of particle physics, but at the same time it is a source of discord between the Standard Model and gravity. The resulting infinities are used by some physicists as justification for discrete models of spacetime, which are successful in computational physics, but I find it difficult to reconcile them with Lorentz invariance.
– Cristi Stoica, Dec 8 '10 at 22:49


@Andrew L: I would be grateful if you would provide more details, possibly a link, about the ongoing attempt you mention. I know there are some such attempts, in particular using dressed particles. Thank you.
– Cristi Stoica, Dec 8 '10 at 22:53


Two comments: 1) There is a rigorous notion of integration over spaces of fields. It works just fine for a number of quantum field theories in spacetime dimensions 2 and 3. It can even be partly proven to work (see work by Balaban, Magnen, Rivasseau, Seneor, and others) in dimension 4. 2) The Standard Model of particle physics itself is not just non-rigorous, but almost certainly does not exist in the sense of the previous comment. There is no continuum limit for Higgs fields.
– userN, Dec 9 '10 at 4:51


Louigi, it means that QFTs with scalar fields are typically not asymptotically free. Some coupling becomes large at short distances and keeps you from taking the continuum limit. You need more information at short distances to define the theory. But of course there is no reason to think the Standard Model including Higgs should exist rigorously as a QFT at all energy scales and many reasons to think it does not. That's why particle theorists regard it as a low-energy effective theory and are hoping the LHC will provide some information about its short distance completion.
– Jeff Harvey, Dec 9 '10 at 16:37
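The "coupling becomes large at short distances" mechanism can be caricatured with the standard one-loop running of a $\phi^4$-type scalar coupling, $d\lambda/d\ln\mu = 3\lambda^2/(16\pi^2)$. A sketch under that one-loop assumption; the initial value $\lambda_0 = 0.5$ is an arbitrary illustrative choice, not a statement about the physical Higgs sector:

```python
import math

# One-loop running of a phi^4-type scalar coupling:
#     d(lam)/d(ln mu) = b * lam^2,   b = 3/(16*pi^2) > 0
# (not asymptotically free). The closed-form solution blows up at a finite
# "Landau pole" scale: the coupling grows without bound at short distances,
# which is the obstruction to a naive continuum limit mentioned above.

b = 3.0 / (16.0 * math.pi ** 2)
lam0 = 0.5                      # coupling at the reference scale mu0 (assumed)

def lam(t):
    """Running coupling at t = ln(mu/mu0); valid only below the pole."""
    return lam0 / (1.0 - b * lam0 * t)

t_pole = 1.0 / (b * lam0)       # ln(mu/mu0) where the one-loop coupling diverges
print(t_pole)                   # the pole sits at mu ~ mu0 * exp(t_pole)
for t in (0.0, 50.0, 100.0, 105.0):
    print(t, lam(t))            # coupling grows monotonically toward the pole
```

The one-loop formula is only a caricature of the full problem, but it captures why "more information at short distances" is needed to define such a theory.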

Maybe you should show the first and second derivatives, too?
– Deane Yang, Dec 8 '10 at 20:54


Nice, and certainly physics uses the delta function in a non-rigorous way. But... doesn't distribution theory basically put this on a pretty firm mathematical foundation? This hence seems to be a different example from the Feynman path integral one -- there is a rigorous version, it's just not used...
– Matthew Daws, Dec 8 '10 at 21:20


The OP asks "which of these techniques were eventually made rigorous?" Physicists were using delta functions long before mathematicians wrote down the rigorous theory of distributions.
– Qiaochu Yuan, Dec 8 '10 at 21:25
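The distribution-theory repair of the delta function can be made concrete: replace $\delta$ by a Gaussian mollifier $\delta_\varepsilon$ and watch the pairing with a smooth test function converge to $f(0)$ as $\varepsilon \to 0$. A small numerical sketch; the integration window and grid resolution are arbitrary choices:

```python
import math

# Distribution-theory view of the "delta function": delta is the limit of
# Gaussian mollifiers delta_eps(x) = exp(-x^2/(2 eps^2)) / (eps*sqrt(2*pi))
# acting on test functions. The pairing <delta_eps, f> tends to f(0) as
# eps -> 0, which is what the physicists' formal manipulations compute.

def pair_with_mollifier(f, eps, half_width=10.0, n=200001):
    """Approximate the integral of f(x) * delta_eps(x) by the trapezoid rule."""
    h = 2.0 * half_width / (n - 1)
    total = 0.0
    for i in range(n):
        x = -half_width + i * h
        g = math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))
        w = 0.5 if i in (0, n - 1) else 1.0   # trapezoid endpoint weights
        total += w * f(x) * g * h
    return total

f = math.cos            # smooth test function with f(0) = 1
for eps in (1.0, 0.1, 0.01):
    print(eps, pair_with_mollifier(f, eps))   # converges toward f(0) = 1
```

For $f = \cos$ the pairing can also be computed in closed form, $e^{-\varepsilon^2/2}$, which is what the printed values approach from.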

The replica method and the cavity method have been used by physicists to calculate thermodynamic quantities in various statistical mechanics settings (including quite a few classes of random combinatorial objects). The results are often exactly right, even though the method is not at all rigorous. Michel Talagrand has recently proven rigorously some of the results that have been obtained by these methods.

My favorite example of this is the use of the replica method by Mezard and Parisi in the mid-1980s to "prove" that the expected optimal value of the assignment problem (with costs chosen randomly from the uniform [0,1] distribution) is $\zeta(2) = \pi^2/6$. It wasn't until 2000 that Aldous published a rigorous proof.
– Mike Spivey, Dec 13 '10 at 3:42
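The $\zeta(2)$ prediction is easy to probe numerically. A sketch using exponential(1) costs, for which the Parisi formula $E[\min] = \sum_{i=1}^{n} 1/i^2$ (proved around 2003) holds exactly at finite $n$; the bitmask solver and the sizes ($n = 8$, 300 trials) are illustrative choices, not an efficient method:

```python
import math
import random

# Monte Carlo check of the replica-method prediction for the random
# assignment problem. With i.i.d. Exp(1) edge costs, the (now proven) Parisi
# formula gives E[min cost] = sum_{i=1}^{n} 1/i^2 exactly, which tends to
# zeta(2) = pi^2/6. Uniform [0,1] costs give the same limit, since only the
# density of cheap edges matters.

def min_assignment_cost(cost):
    """Exact min-cost perfect matching via bitmask DP over used columns."""
    n = len(cost)
    INF = float("inf")
    dp = [INF] * (1 << n)
    dp[0] = 0.0
    for mask in range(1 << n):
        if dp[mask] == INF:
            continue
        i = bin(mask).count("1")          # next row to assign
        if i == n:
            continue
        for j in range(n):
            if not mask & (1 << j):       # column j still free
                new = mask | (1 << j)
                c = dp[mask] + cost[i][j]
                if c < dp[new]:
                    dp[new] = c
    return dp[(1 << n) - 1]

random.seed(0)
n, trials = 8, 300
total = 0.0
for _ in range(trials):
    cost = [[random.expovariate(1.0) for _ in range(n)] for _ in range(n)]
    total += min_assignment_cost(cost)
exact = sum(1.0 / i ** 2 for i in range(1, n + 1))   # Parisi formula at n = 8
print(total / trials, exact, math.pi ** 2 / 6)
```

The empirical mean should sit close to the exact finite-$n$ value, which in turn is already within about 8% of $\pi^2/6$.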

The use of random matrix theory to model energy levels of heavy nuclei and other physical systems. See also the following historical piece and the pictures therein: There is striking statistical evidence that the eigenvalues of large random self-adjoint matrices, the energy levels of heavy nuclei, and the normalized zeros of $L$-functions (!) are all spaced about the same.
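The spacing statistics can be sampled directly. For $2\times 2$ GOE matrices the normalized gap distribution is exactly the Wigner surmise $p(s) = \frac{\pi s}{2} e^{-\pi s^2/4}$, the "level repulsion" curve compared against nuclear data; a sketch, with the sample size an arbitrary choice:

```python
import math
import random

# Wigner surmise for level spacings, checked on 2x2 GOE matrices, where it is
# exact: sample H = [[a, b], [b, d]] with a, d ~ N(0, 1) and b ~ N(0, 1/2).
# The eigenvalue gap is sqrt((a - d)^2 + 4 b^2); after normalizing the mean
# spacing to 1, it follows p(s) = (pi s / 2) exp(-pi s^2 / 4).

random.seed(1)
n = 100000
gaps = []
for _ in range(n):
    a = random.gauss(0.0, 1.0)
    d = random.gauss(0.0, 1.0)
    b = random.gauss(0.0, math.sqrt(0.5))
    gaps.append(math.sqrt((a - d) ** 2 + 4.0 * b ** 2))

mean = sum(gaps) / n
spacings = [g / mean for g in gaps]       # normalize mean spacing to 1

# Compare the empirical CDF with the surmise CDF 1 - exp(-pi s^2 / 4).
for s in (0.5, 1.0, 2.0):
    empirical = sum(g <= s for g in spacings) / n
    surmise = 1.0 - math.exp(-math.pi * s * s / 4.0)
    print(s, empirical, surmise)          # the two columns should agree
```

For large matrices the surmise is only an excellent approximation to the true GOE spacing law, but the qualitative repulsion $p(s) \to 0$ as $s \to 0$ is what distinguishes these spectra from Poisson statistics.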

Another example from theoretical high-energy physics I've encountered: sometimes when physicists have some equation of motion for an arbitrary number $N$ of particles with positions $x_i$, e.g. something of the form $\frac{1}{N}\sum_i f(x_i) + \frac{1}{N^2}\sum_{ij} g(x_i, x_j) = 0$, they wish to know what the solutions to this equation look like for large $N$. A technique they use is to replace the variables $x_i$ with a probability measure $\mu$ on the space of their possible values, which is supposed to represent the number of $x_i$'s in a given region in the large $N$ limit, and instead of solving the original equation they solve the analogous equation in $\mu$, e.g. $\int f(x) \mathrm{d}\mu(x) + \int g(x, y) \mathrm{d}(\mu \times \mu) (x, y) = 0$. In fact it's not hard to come up with a toy example where the original equation can be solved exactly for all $N$ and the solutions "look like" a particular probability distribution in the large $N$ limit, but that probability distribution fails to satisfy the corresponding equation, and for that reason I have some doubt that this method can be turned into something rigorous.
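The empirical-measure replacement can be checked in the simplest toy case, the $N$-th roots of unity versus the uniform measure on the unit circle, by testing weak convergence on moments and on a non-polynomial test function; the test functions here are illustrative choices:

```python
import cmath

# Weak convergence of empirical measures in the roots-of-unity toy case:
# for z_j = exp(2 pi i j / N), the empirical average (1/N) sum f(z_j)
# converges to the integral of f against the uniform measure on the circle.

def empirical_moment(N, k):
    """(1/N) * sum of z^k over the N-th roots of unity."""
    return sum(cmath.exp(2j * cmath.pi * j * k / N) for j in range(N)) / N

# The uniform measure on the circle has moment 0 for k != 0 (and 1 for
# k = 0); the empirical moment is 0 unless N divides k, in which case it is 1.
for N in (3, 10, 100):
    print(N, abs(empirical_moment(N, 1)), abs(empirical_moment(N, 2)))

# A non-polynomial test function: f(z) = |Re z|, circle average 2/pi.
for N in (4, 40, 400):
    avg = sum(abs(cmath.exp(2j * cmath.pi * j / N).real) for j in range(N)) / N
    print(N, avg)   # tends to 2/pi ~ 0.6366 as N grows
```

This only exhibits weak (i.e. tested-against-functions) convergence; the point raised in the comments below is that such convergence need not commute with solving the original equation.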

Every $(x_i)_{1\le i\le N}$ which solves your first equation yields a (discrete) probability measure $\mu_N$ which solves your second equation. So what you are saying is that in a toy example: 1. the solution $\mu_N$ of the first equation is unique for every large enough $N$; 2. the probability measure $\mu_N$ "looks like" $\mu$ when $N\to+\infty$; 3. the probability measure $\mu$ does not solve the second equation. Hmmm... If "looks like" means "converges to", you might want to explain the relevant mode of convergence of measures (and/or the toy example itself).
– Did, Dec 9 '10 at 7:35


Well, coming up with an appropriate definition of "converges to" would be one of the difficulties in making the technique rigorous, but in the toy example the solution for a given $N$ consisted of the $N$th roots of unity in the complex plane, and the probability distribution they "look like" was the measure uniformly concentrated on the unit circle. I don't know if there's any notion of convergence that works, but the real examples I saw were of the same form (i.e. sets of points lying at regular intervals on submanifolds of $\mathbb{R}^n$ being approximated by uniform measures).
– Phil Wild, Dec 9 '10 at 17:39

Indeed the uniform probability distributions on the $N$th roots of unity converge to the uniform probability distribution on the unit circle when $N$ goes to infinity -- for several modes of convergence that each have a perfectly rigorous definition, thank you. But could you explain the "toy example where the original equation can be solved exactly etc." which you alluded to in your post? We know what the measures $\mu_N$ are now, but what are the functions $f$ and $g$?
– Did, Dec 13 '10 at 7:33