Tuesday, February 11, 2014

Set theory, cardinals, ordinals, non-measurable sets, and other pathological mathematical structures have no legitimate power in physics

Laymen (e.g. postmodern philosophers) interested in spirituality and physics (...) often talk about things like the "influence of Gödel's incompleteness theorems on physics" and similar things. They usually want to believe that these theorems imply that mathematics and science are inherently limited, leaving the bulk of human knowledge to witches, alternative doctors, ESP experts, dragons, priests, and global warming alarmists, among related groups of unscientific charlatans.

A cardinal, the Czech Catholic boss Dominik Duka, is in the middle. He's now a fan in Sochi. Those who believe in Christianity may be helped; those who don't aren't hurt. ;-) At least that's what Miloš Zeman, the current Czech president (the man on the right in the picture; he just had a virosis which was misinterpreted as his being drunk) and the self-described clumsy mascot of the Czech athletes (now also in Sochi), said.

With their restricted resolution, "Gödel's incompleteness theorem" seems to be the same thing as the "Heisenberg uncertainty principle". However, the truth is very different. Gödel's mathematical insight has no relationship to the Heisenberg uncertainty principle, and neither of the two implies that the laws of Nature cannot be pinpointed precisely, anyway.

When it comes to the irrelevance of Gödel's theorems for physics, the truth is actually much more far-reaching. None of the major developments in the post-Cantor efforts to axiomatize mathematics and set theory has any implication for physics. This is partly related to physics' being fundamentally continuous. I want to dedicate this blog entry to this irrelevance.

An hour ago, someone asked a Physics Stack Exchange question about the seemingly inconsistent cardinalities of bases in quantum mechanics for what felt like the 17th time, which finally convinced me to write this blog post.

German mathematician Georg Cantor was one of the people who dreamed about a solid, axiomatic framework for all of mathematics. He would become the founder of "set theory" which seemed to be the right thing. It would allow you to construct (=it would postulate the existence of) sets and sets of sets and so on, for any property that the elements satisfy, and through these constructs, you may encode the inner patterns of any mathematical object or structure.

Notoriously enough, philosopher Bertrand Russell realized that Cantor's axioms imply the existence of the following set \(R\):\[
R = \{\,x \mid x\not\in x\,\}
\] We may ask whether \(R\in R\). Well, if \(R\in R\), then it doesn't obey the defining condition for elements of \(R\) – the condition is \(x\not\in x\) and we are substituting \(x\to R\) – so it follows that \(R\) isn't included into \(R\) i.e. \(R\not\in R\). And vice versa, if \(R\not\in R\), then it does obey the defining condition \(x\not\in x\), so we must include \(R\) into \(R\) and we have \(R\in R\). To summarize, \(R\in R\) is true exactly if it is false and vice versa. This incarnation of the liar's paradox proves that Cantor's axiomatic system was internally inconsistent.
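Russell's argument is, at its core, the impossibility of a truth value equal to its own negation. A minimal sketch in Python (the `member` predicate named in the comment is hypothetical, only there to fix notation):

```python
# Russell's paradox reduced to booleans: if a membership predicate
# member(a, b) existed together with a set R satisfying
#     member(x, R) == (not member(x, x))   for every x,
# then substituting x = R would force member(R, R) == not member(R, R).
# No truth value satisfies b == (not b), so no such set R can exist.
assert not any(b == (not b) for b in (True, False))
print("no truth value b satisfies b == (not b); such an R cannot exist")
```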

Mathematicians would have to find a better, internally consistent axiomatic system. They would deal with this problem in two major ways. The Zermelo–Fraenkel axioms of set theory restrict which sets you may construct (sets whose existence is guaranteed by the axioms). You may only build them from the bottom up (by constructing finite or countable unions, sets of all subsets of existing sets, and subsets of existing sets picked out by a property), so abstract "sets of all sets" are not allowed. That's enough to construct all mathematical structures we need, and Russell's paradox is avoided because, at most, you may define \(R\) as the set of all elements from some pre-existing large set of objects (which doesn't include this \(R\) itself), and it's therefore clear that \(R\not\in R\) for this "restricted" \(R\). This non-membership doesn't imply that \(R\) has to be included in itself because the candidates for \(R\) membership only come from a previously constructed set.

Alternatively, Gödel and Bernays (building on von Neumann) would pioneer another set theory that allows you to construct the class \(R\) of all sets obeying \(x\not\in x\), but it wouldn't guarantee that this object is a proper "set". Instead, it is a more informal "class", one that isn't automatically allowed to be incorporated into other "classes" as an element. "Sets" are by definition those classes that belong to other classes. Russell's paradox is avoided in this approach, too. In fact, Russell's argument above is reinterpreted as a proof of the innocent statement that the class \(R\) isn't a set – so it isn't among the candidates for \(R\) membership. Analyses of these subtleties would also lead to Gödel's insights about incompleteness and many other things. They're sort of cute but physics doesn't care about any of this high-grade recreational logic.

Uncountable sets

Georg Cantor was keen on one-to-one maps between sets. If there exists a one-to-one map between the elements of \(M\) and \(N\), then these two sets are "equally large". More technically, they have the same "cardinality". The adjective "cardinal" means something like "fundamental" or "important" but I do think that the terminology really followed the cardinals in the Catholic Church and their relative ranks. Relative to a mortal believer or infidel, such cardinals may be infinitely powerful, but one cardinal may still be more powerful than another cardinal. For finite sets, the cardinality is simply the number of elements, so it may be given by non-negative integers, \(0,1,2,\dots\).

You might think that the cardinality of all infinite sets is \(\infty\) because they're all equally large. But Georg Cantor was the first one who found an argument that this ain't so. For example, assume that the real numbers \(x\) obeying \(0\leq x \lt 1\) are countable. By the one-to-one-map rule, it means that we may label these numbers with integer labels. For example, the labeling may look like this (an illustrative list; only the diagonal digits 2, 0, 4 will matter below):

0 ↦ 0.2500…
1 ↦ 0.1024…
2 ↦ 0.1142…
…
Fine. Now, Cantor's argument continues as follows. Pick the 1st digit after the decimal point from the first line (labeled by 0), the second digit from the second line (1), the third digit from the third line (2), and so on. So pick the digits on the diagonal. In my example, you get 204...

Now, change all these digits. For example, add one to all of them, modulo ten. So you will get 315... in my example. Construct the number

\(0.315\dots\)

It's easy to see that this number isn't written on any line of the previous "numbered list of real numbers from that interval" simply because it disagrees with each number, i.e. the number on the \(N\)-th line, somewhere, for example in the \(N\)-th digit after the decimal point. It disagrees by construction. So some real numbers are inevitably left as "uncounted". The interpretation is that the number of real numbers is greater than the number of integers. We say that the real numbers \(\RR\) are not "countable".
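The diagonal construction is mechanical enough to put into code. A sketch (the `digit` oracle is hypothetical; the three diagonal digits 2, 0, 4 are the ones from the example above):

```python
def cantor_diagonal(digit, n):
    """First n decimal digits of a number in [0, 1) that differs from the
    i-th listed number in its i-th digit (add 1 mod 10, as in the text)."""
    return "0." + "".join(str((digit(i, i) + 1) % 10) for i in range(n))

# the example listing above has diagonal digits 2, 0, 4
diag = {0: 2, 1: 0, 2: 4}
print(cantor_diagonal(lambda i, j: diag[i], 3))  # -> 0.315
```

Whatever listing the oracle encodes, the returned number cannot appear on any of its lines.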

Cantor was often called a "Jew" but no clear evidence of his Jewish ancestry has ever emerged. Nevertheless, what we do know is that he introduced the Hebrew letter "aleph", the first letter of that alphabet, into mathematics. The cardinality (generalized number of elements) of the integers \(\ZZ\) is called \(\aleph_0\) while the cardinality of the real numbers \(\RR\) is \(2^{\aleph_0}\), which I will call \(\aleph_1\) below. There are also "larger" sets with cardinality \(\aleph_n\). Strictly speaking, \(\aleph_1\) is defined as the smallest cardinality above \(\aleph_0\), and the famous Continuum Hypothesis states that there is no cardinality in between \(\aleph_0\) and \(2^{\aleph_0}\), i.e. that \(2^{\aleph_0}=\aleph_1\).

(The cardinal numbers shouldn't be confused with the ordinal numbers which label inequivalent well-ordered sets – sets such that each non-empty subset has a minimum element. The ordinal number "analogous" to \(\aleph_0\) is known as \(\omega\) but \(\omega+1\) is actually a different ordinal and the rules to construct and distinguish larger ordinals differ from the rules for cardinals.)

All of this maths is fun and really follows from some rules of the game. However, these rules and their implications are utterly unnatural from the viewpoint of physics. For example, the cardinality of the real line is called the "continuum" and it quantifies the number of elements (points) in \(\RR\) as well as any \(\RR^n\). The reason why \(\RR\) and \(\RR^n\) have the same cardinality is rather simple. You may "compress" \(n\) real numbers into one simply by taking the digits from all these \(n\) numbers in an alternating fashion. So \((2.7182818,3.1415926)\) may be "compressed" as the single real number \(23.71148125891286\dots\). Do you understand the rule? Needless to say, such a compression of a 2-dimensional plane into a real line is completely discontinuous and contrived from a physics viewpoint. No physical application may justify such an insensitive manipulation. But from a strictly mathematical viewpoint, it is a proof that all kinds of the continuum have the same cardinality.
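The digit-interleaving "compression" is easy to make concrete. A sketch for numbers given as finite decimal strings of matching lengths (genuine reals need a convention for the \(0.999\dots\) ambiguity, which is glossed over here):

```python
def interleave(a: str, b: str) -> str:
    """'Compress' two decimal strings of matching lengths into one by
    alternating their digits: integer parts first, then fractional parts."""
    ai, af = a.split(".")
    bi, bf = b.split(".")
    mix = lambda x, y: "".join(p + q for p, q in zip(x, y))
    return mix(ai, bi) + "." + mix(af, bf)

# the pair (e, pi) from the text, truncated to 7 decimal digits
print(interleave("2.7182818", "3.1415926"))  # -> 23.71148125891286
```

Reading the odd- and even-position digits back out inverts the map, which is exactly the one-to-one correspondence between \(\RR^2\) and \(\RR\) described above.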

The bases produce the same Hilbert space

The very claim that the continuum has a larger cardinality than the integers – that the real numbers are uncountable – is controversial from a "more moral viewpoint". Cantor's diagonal proof is OK but the "counterexamples" showing that the numbers are not countable are completely contrived numbers you will never encounter in physics. The individual precisely known numbers that matter are countable. Using a more modern terminology, all finite computer files are countable: you first sort them according to their length in bytes; then you sort the files of the same length alphabetically i.e. lexicographically; and then you assign integers to the ordered list of computer files. Similarly, each of the "constructible real numbers" may be uniquely identified by a finite computer file or a finite sequence of words and/or mathematical symbols, and such finite sequences are countable. So the elements proven to be "uncountable" by Cantor's argument may be considered "pure junk that can never matter".
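The sort-by-length-then-lexicographically enumeration is a short generator. A sketch over a two-symbol alphabet for readability (bytes work the same way with 256 symbols):

```python
from itertools import count, product

def enumerate_files(alphabet="ab"):
    """Yield every finite string over `alphabet`: shorter strings first,
    strings of equal length in lexicographic order. Each string therefore
    gets a unique non-negative integer index, so the set is countable."""
    for n in count(0):
        for letters in product(alphabet, repeat=n):
            yield "".join(letters)

gen = enumerate_files()
print([next(gen) for _ in range(7)])  # -> ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```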

The quantification of the dimension of Hilbert spaces is another major argument showing that the obsession with the "different infinite sets' having different cardinalities" is physically misguided.

The point is that some Hilbert spaces admit both countable and uncountable bases assuming the most natural yet "generalized" definition of an uncountable basis you may imagine.

In a previous Physics Stack Exchange question, I would choose the Fourier series as an example. But pretty much every quantum Hamiltonian gives you an example to show my point – not to mention infinitely many other operators. Let me talk about the quantum harmonic oscillator here.

Consider the space of \(L^2\)-integrable complex functions \(\psi(x)\) of a real variable \(x\in \RR\) – wave functions for a particle on the line. We demand\[

\int_{-\infty}^{+\infty} |\psi(x)|^2 \dd x \lt \infty

\] to make the norm finite; that is what we called "square-integrable" or \(L^2\). More precisely, we want to consider the space of equivalence classes where two functions \(\psi_1(x)\) and \(\psi_2(x)\) are considered the same if they only differ in values at countably many values of \(x\) – or, more generally, at a set of values of \(x\) that has measure zero. This identification has to be done because the difference \(\psi_1(x)-\psi_2(x)\) of such two functions is "zero almost everywhere" and its inner product with any other \(L^2\)-integrable function is zero (the inner product is computed by the integral). So \(\psi_1,\psi_2\) predict the same probabilities (probability amplitudes) for everything; they are physically indistinguishable.

Now, it is natural to say that the space of such functions or their innocent equivalence classes morally "is" generated by the continuum of basis vectors \(\ket {x_0}\) for \(x_0\in \RR\). The wave function corresponding to such a basis vector \(\ket {x_0}\) is \(\delta(x-x_0)\). Well, this basis vector isn't really \(L^2\)-integrable so it is not in the Hilbert space. But it belongs to an extended, "rigged" Hilbert space. At any rate, it is extremely natural to work with these eigenvectors of the \(x\) operator. They are not normalizable but they're analogous to the normalizable eigenstates of operators with a discrete spectrum.

By a linear combination of these vectors, we mean the integrals \[

\int_{-\infty}^{+\infty} \dd x\,\varphi(x)\ket{x}

\] rather than the sums. The integrals simply "are" the right linear operations to deal with continuously many values.

Because there exists a basis vector \(\ket{x_0}\) for any, continuously adjustable \(x_0\in\RR\), the basis – if you agree that it is "morally correct" to call it a basis – is continuous. The cardinality of this basis is \(\aleph_1\). Nevertheless, the vector space it generates is the same Hilbert space you may also obtain from a countable basis. Consider the Hamiltonian\[

H = \frac 12 x^2 + \frac 12 p^2,\quad [x,p]=i

\] Well, we could consider any other operator with a discrete spectrum too but this one is perhaps the simplest one. The eigenstates of this "harmonic oscillator Hamiltonian" are the wave functions\[

\psi_n(x) = C_n\cdot H_n(x) \cdot \exp(-x^2/2)

\] where \(C_n\) is a normalization factor, \(H_n\) are degree-\(n\) "Hermite" polynomials, and the last factor is the universal Gaussian (the ground state wave function). In this notation capturing the mathematical essence of the problem, I have pretty much used units with \(m=\omega=\hbar=1\) to describe any quantum harmonic oscillator. The eigenvalue of \(H\) in the state \(\psi_n\) is \(n+1/2\).

Now, every complex \(L^2\)-integrable function may be written as a linear superposition of the wave functions \(\psi_n\) above. The right coefficients may be computed as simple inner products; the linear superposition with the coefficients computed in this way may differ from the original wave function at most at points whose measure is zero.
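As a numerical sanity check of the last claim, here is a pure-Python sketch: expand a sample wave function in the oscillator eigenstates \(\psi_n\) and verify that the coefficients capture the full norm. The sample state, a normalized Gaussian shifted to \(x_0=1\) (a coherent state), is my arbitrary choice, not anything special:

```python
import math

XS = [i * 0.01 for i in range(-1000, 1001)]   # integration grid on [-10, 10]
DX = 0.01

def psi_values(n_max, x):
    """psi_0 .. psi_{n_max} at x via the standard oscillator recurrence
    psi_{n+1} = sqrt(2/(n+1)) x psi_n - sqrt(n/(n+1)) psi_{n-1}."""
    out = [math.pi ** -0.25 * math.exp(-x * x / 2)]   # ground state
    if n_max >= 1:
        out.append(math.sqrt(2.0) * x * out[0])
    for n in range(1, n_max):
        out.append(math.sqrt(2.0 / (n + 1)) * x * out[n]
                   - math.sqrt(n / (n + 1.0)) * out[n - 1])
    return out

def f(x):  # normalized ground-state Gaussian shifted to x0 = 1
    return math.pi ** -0.25 * math.exp(-(x - 1.0) ** 2 / 2)

N = 30
coeffs = [0.0] * (N + 1)
for i, x in enumerate(XS):
    w = DX if 0 < i < len(XS) - 1 else DX / 2     # trapezoid weights
    ps, fx = psi_values(N, x), f(x)
    for n in range(N + 1):
        coeffs[n] += w * ps[n] * fx               # c_n = <psi_n | f>

norm = sum(c * c for c in coeffs)
print(round(norm, 4))  # -> 1.0: the countable basis loses nothing of the state
```

The sum \(\sum_n |c_n|^2\) comes out as \(1\) to high accuracy, i.e. the countable basis reproduces the state completely, up to the measure-zero differences discussed above.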

So these energy eigenstates obviously form another basis of the Hilbert space and it is a countable one.

For finite-dimensional Hilbert spaces, you may see that the Hilbert spaces \(\CC^m\) and \(\CC^n\) are only equivalent if \(m=n\). Because these exponents quantify the "cardinality of the bases", you could think that even for the infinite sets, the cardinality will matter. But if you use a physically natural definition of the "continuous bases", the countability of the basis doesn't seem to matter. You get the same Hilbert space of \(L^2\)-integrable functions of a real variable.

(The unphysical, set-theory-preferring axiomatic approach will deny that what we call the continuous basis is a basis at all. It will ban wave functions that are distributions and it will prevent you from considering an integral with the integrand containing some state vectors to be their linear combination – only sums are OK. Once this philosophy imposes the sums onto you, you will produce "inseparable" vector spaces that are "pathologically, too large" and therefore unphysical. Such a treatment will find bureaucratic loopholes to outlaw everything we do in physics. But a physicist will know that the integral defines as good a linear superposition as a sum; and continuous spectra and their eigenstates and eigenvalues are as good as the discrete ones and should be treated analogously unless you want to violate the spirit of quantum physics!)

One shouldn't really be shocked that the cardinality of infinite sets doesn't matter in physics. The irrelevance really boils down to the uncertainty principle, the essential feature that produces all the qualitatively new phenomena in quantum mechanics if you look at the problems from a proper perspective. Am I not saying that the uncertainty principle is the same thing as the Gödel-related stuff in mathematics? Am I not repeating the claim that I have attacked at the beginning? Have I lost my mind?

No, I am not. What I mean is something else. Recall that Cantor defined two "equally large sets" or "sets of the same cardinality" to be two sets such that a one-to-one map between them exists. A one-to-one map may be thought of as a relabeling. If there is a one-to-one map between two bases, the transition matrix \(\langle i| \alpha\rangle\) between these two (Latin, Greek) bases is equal to a "permutation matrix" of a sort. Most of the entries are zero. In each column and each row, there is one entry that is equal to one.

But this permutation matrix is far from being the only allowed (or most general) type of a transition matrix between two bases. Generically, a basis vector in one basis is a general superposition of all basis vectors in the other basis. Almost all the inner products may be nonzero. For example, the inner products\[

\langle x| n \rangle =\psi_n(x)

may be interpreted as a transition matrix between the continuous basis of \(x\)-eigenstates and the countable basis of \(H\)-eigenstates; here, \(x\) labels the rows while \(n\) distinguishes the columns. Cantor's diagonal proof has only discussed one-to-one maps i.e. a very special class of matrices that are only allowed to "permute the eigenstates" (elements of the basis). However, what we care about in quantum mechanics is whether the whole Hilbert spaces produced by the bases are the same. So arbitrary linear combinations are OK – even when we switch from one basis to another.
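In finite dimensions the contrast is easy to display: a permutation matrix and a generic rotation both take an orthonormal basis to an orthonormal basis, yet only the former is a mere relabeling. A toy 2-dimensional sketch (the angle \(\pi/5\) is an arbitrary choice):

```python
import math

def columns_orthonormal(M, tol=1e-12):
    """Check that the columns of a real square matrix form an orthonormal set."""
    n = len(M)
    for j in range(n):
        for k in range(n):
            dot = sum(M[i][j] * M[i][k] for i in range(n))
            if abs(dot - (1.0 if j == k else 0.0)) > tol:
                return False
    return True

perm = [[0.0, 1.0], [1.0, 0.0]]             # a one-to-one relabeling of the basis
t = math.pi / 5
rot = [[math.cos(t), -math.sin(t)],          # a generic change of basis:
       [math.sin(t),  math.cos(t)]]          # every entry is nonzero

print(columns_orthonormal(perm), columns_orthonormal(rot))  # -> True True
```

Cantor's cardinality comparisons only ever consider maps of the first kind; quantum mechanics routinely uses maps of the second kind.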

And Cantor's proof hasn't "excluded" such more general transition matrices. When treated naturally from a physics viewpoint, such an exclusion would be wrong. In fact, the transitions between bases whose cardinality is \(\aleph_0\) and bases whose cardinality is \(\aleph_1\) are possible. You could even say that because of its restriction of maps to permutation matrices and one-to-one maps, Cantor's diagonal proof (if applied to Hilbert spaces) has de facto incorrectly assumed classical physics. And that's too bad a mistake.

To summarize, it is possible to write down an axiomatic framework that will justify the statements that the discrete and continuous bases are different and the cardinality does matter. But the detailed assumptions and "bans" behind the axioms are – much like their conclusions – utterly unnatural and "morally wrong" from a physics viewpoint.

Physics naturally wants us to think about the infinite sets very differently. For some purposes, the number of points in \(\RR\) and \(\RR^n\) should be considered "different" in physics because all one-to-one maps between the two sets are heavily discontinuous and therefore "de facto disallowed" in physics. On the other hand, when the elements of \(\ZZ\) and \(\RR^n\) label bases of a Hilbert space, the Hilbert spaces may be equally large.

None of these "opposite" conclusions we prefer in physics means that mathematics has been invalidated. Instead, the different paths that physics takes show that some of the detailed features of the axiomatic systems in mathematics were simply a wrong mathematics for physics. This is the ultimate observation that many people still fail to understand.

Mathematics is fine but it is a rigorous game with man-made axioms and rules. Nature can't guarantee, doesn't guarantee, and usually explicitly denies that the first axiomatic systems that you develop to deal with some question are the "perfect ones" for physics. In particular, when it comes to the right treatment of "infinite sets" – analogously to sums of infinitely many terms, like the \(1+2+3+4+\dots = -1/12\) discussed in the recent revival of the topic – Nature and physics simply prefer (and push us towards) a very different way of thinking about all these structures and matters. I mean very different from those that may become "standard" among those mathematicians who are disconnected from modern physics. The different conclusions of physicists' mathematics (relative to mathematicians' mathematics) don't mean that physicists' mathematics is inevitably non-rigorous. Instead, they mean that it needs different axioms than those picked by mathematicians.

If I were a politically correct opportunist, I could say that the culture of mathematicians (with their cardinals, ordinals, theorems on incompleteness, disrespect for continuity, semi-bans on integration, the proliferation of inseparable Hilbert spaces that follows from that, etc.) and the culture of physicists (with their operators boasting discrete, continuous, or mixed spectra, complete embrace of integration and continuity, unified treatment of operators with discrete and continuous spectra, etc.) are simply two different, inequivalent ways to formalize certain (or superficially related) mathematical concepts and to define the rules they have to follow.

But I am not a politically correct opportunist so I will tell you the actual truth. It is the physicists' perspective on these issues that is vastly superior and more profound even from a mathematical viewpoint – simply because it's the perspective that has already undergone some actual tests and that has been forced to improve by the tests – tests of the ability to describe Nature and to provide a deeper and more general internally consistent system of mathematical thought that can do so. Physicists' evolving formalization of these axioms and mathematical structures is superior to the mathematicians' formalization in a very analogous way to that in which the newest iPhone or Nokia Lumia 1520 is better than Alexander Graham Bell's first device: unlike Bell's gadget, the iPhones and Nokia Lumias have already witnessed some improvements.

It's the physicists' approach to the notion of infinity, infinite sums, infinite bases etc. that is the deeper one, more likely to be related to future important discoveries. When the mathematicians' verdict disagrees, it's because the mathematicians' approach is just a package of dirty bureaucratic tricks to defend a conclusion that is morally invalid (i.e. one that is likely to lead to invalid conclusions if you consider it far-reaching and valid in a "deep" sense). And following these bureaucratic tricks too strictly may only lead to one outcome – to becoming a slave of these man-made bureaucratic rules, to producing wrong claims about physics (and about the kind of mathematics that matters in physics), and to missing the actual mathematical wisdom that is known to Nature, that was needed to create something more profound than ourselves, namely the Universe.

In plain English, mathematicians (and their collaborationists pretending to be physicists, like Emilio Pisanty) who will reject the assertion that the cardinality of infinite bases in quantum mechanics doesn't matter simply suck. They're superficial bureaucrats who simply prevent others from getting to deeper levels of the truth.

snail feedback (35):

reader SteveBrooklineMA said...

Someone recently posted a question to a science board about Lebesgue integration vs plain old Riemann integration for applications. Is it really useful? I had to answer a mischievous no. Lebesgue integration is great for proving general theorems, but in practice (engineering for example) it's not that big a deal. Piece-wise continuous is pretty much the most radical sort of function you will have to deal with in real life. I think a devil's advocate (or troll?) could make a case that continuous probability distributions, generalized functions (Schwartz distributions), and even countable bases are not really necessary. They can make things a lot cleaner when formulating the theory, but you can get pretty far working with discrete subsets of R^N... or Q^N!

Something as simple as the 3-body problem in classical physics being so complicated to solve should be all you need to know about the divergence between reality and the correlates of experience that you write down in short-hand with mathematics.

You take *on faith* (believe with no evidence) that the physical world can be described precisely using mathematics. I think the evidence proves otherwise.

Dear Steve, it depends on which "applications" you mean. If you talk about direct recipes for numerical integration – and the rest of your comment suggests that you are – then you may say that the Riemann integral is the golden standard etc.

But these are really the most stupid applications. When we really care about the difference between the integrals, it is when we try to decide which of these definitions stands behind the integrals that enter fundamental physical laws, and so on.

For example, in quantum mechanics, we definitely want the inner product of wave functions of a finite number of dimensions to be the Lebesgue integral (of their product) because it's this integral that implies all the simple isomorphisms between the Hilbert spaces, and so on. Due to the subtle differences, the Riemann integral doesn't. Some elements of the Hilbert space may fail to have a Riemann integral of |psi|^2, for example, but they're perfectly fine elements and they have an integral of |psi|^2 of the Lebesgue type – they are L2-integrable.

In practice, a physicist doesn't really care. It's the same integral - without adjectives. All formally valid rules to calculate the integrals may be used. Whenever some definitions of the integrals fail to exist and others do exist, we pick those that do exist. And so on.

The Minkowskian Feynman path integral is neither Riemann nor Lebesgue. Both of them are ill-defined for such functional applications. But we still mean an integral as a construct that obeys all the identities expected from the other types of integrals – and we often use the identities to actually evaluate these integrals, so in this sense, the identities may be viewed as a part of the definition of the right integral.

I don't really understand why your idea about "real life" assumes that functions are no more complicated than "piecewise linear". It looks like a particularly stupid approximation scheme to me. Real-world functions are smooth and their integrals may often be calculated directly, analytically, without stupid approximations such as those to piecewise linear functions.

It looks even more pathological if you try to work with discrete subsets of R^N while integrating. Discrete subsets are of measure zero so you may change the values at all these points without actually changing the integral. Physics-relevant and even engineering- and otherwise real-world-related functions ready for integration are smooth ones, not piecewise linear and not vanishing (or otherwise becoming irrelevant) except for a discrete subset.

ℵ₁ is defined to be the next cardinal after ℵ₀. The continuum hypothesis says that ℵ₁ is the cardinality of the continuum; which is denoted 𝖈.

Cantor's second diagonalization shows that 2ˣ>x for any x. Incidentally, for infinite values of x, 2ˣ = x!, where x! is interpreted as the cardinality of the group of permutations of x items. ℵ₁ was 2^ℵ₀, which is afaik the only way to get a larger infinite cardinality from a smaller one.

OK, it doesn't imply the negative, no, but in general I think the claim that mathematics is anything more than a "useful" description, i.e. that there's a fundamental link between physics and mathematics (in a Platonic sense), hasn't been demonstrated.

I think worse, the constraints of mathematics also constrain your physical thinking; if you can't express it mathematically, it isn't "physical" in this sense. This is one of the reasons "things in themselves" will forever be inaccessible. You can only describe things as they appear with mathematics.

Perhaps you're a positivist? I'm not sure. All I know is that I don't know, and the more mathematics and physics I read, the less I know.

I am sympathetic to Penrose's arguments on Consciousness though and it's through those that I first read about the Gödel argument you make above.

Well, mathematics constrains our (some people's) thinking and that's exactly why we appreciate it so much! Those who are not constrained by mathematics are reduced to believers in witches, ESP phenomena, or – to use your example – Roger Penrose's crackpot papers on consciousness (although he actually tries to use some mathematics even in *these* papers).

I don't think that your claim "no link between physics and mathematics in the Platonic sense has been demonstrated" may be interpreted in any way that isn't just plain stupid.

The Platonic sense of mathematics is, by definition, the body of mathematics including things that are not relevant in the world around us - pieces of mathematics that exist independently of us, our environment, and our thinking about it. So most of the Platonic mathematics is, by definition, disconnected from physics.

On the other hand, the part of the mathematics that is connected to physics is demonstrably connected to physics because of the evidence that supports the mathematics-based physical theories!

The biggest difference lies outside physics. The usefulness of Hilbert space rests on the inner product, but this is a very restrictive topological condition. Relaxing things to locally compact Hausdorff spaces leads to a rich theory of measurable functions and, as Kolmogorov first realized in '33, this is the correct setting for probability theory. There, instead of series of orthogonal functions, one studies series of independent functions and convergence of martingales. Few people, even technical types, appreciate martingales but they are a perfect model for much of the randomness in the world. I personally know someone who became a billionaire because of his skill with these objects (his firm manages some accounts for Sweden and Switzerland).

Glad to see you treating confused formalist ivory tower mathematicians for what they are; people whom we shouldn't expect to have anything profound to say about physics; or how the mathematics they use should be structured.

Now when you realize that there is nothing profound about the diagonal argument, but that it is simply a direct result of the (completely unmotivated) choice to give one-to-one mappings axiomatic status – over the equally begging-to-be-axiomatized fact that things which are constructed from each other by adding or subtracting stuff can't be equally big in any meaningful sense of the word (hello, Banach–Tarski paradox and related unphysical downstream conclusions of denying axiomatic status to this fact) – you are well on your way to realizing the logical ugliness of infinity and the continuum.

I know you don't like to hear it, as you feel it eats away at the justification for your beloved string theory, but I will remind you anyway: you have never seen an infinity, and you will never see or otherwise demonstrate the infinite divisibility of anything. That does not disprove the concept any more than it disproves Russell's teapot, but this is something to pause on as a supposed empiricist. It would sure be conceptually nice if we could do entirely without infinity, strings or not.

Infinity-crusaders would do well to read up on the attitudes of late 19th century physicists who got famous working on continuum mechanics, and who were sure that this whole atomism bullshit was a stupid fad. Surely, their empirically impeccably validated equations on the propagation of shock waves and such were not just approximations to an underlying discrete mechanism? Burn the heretics!

Of course when mathematicians say that R^2 and R are "equally big" they mean it in a particular sense which is not relevant for physical considerations. But the fact that R^2 is actually "bigger" than R is also well understood by mathematicians. You just need to appreciate that R^2 and R carry a natural topology, which makes it impossible to fit one inside the other. I feel like some of your critiques are like this. Mathematicians study many structures, but only a subset are relevant for thinking about the physical world, hardly a surprise.

That being said, there are some results in mathematics, especially concerned with infinite sets, which make you wonder if mathematicians have forgotten what it means to make a meaningful statement, even about mathematics (much less physics). My favorite example of this is the so-called Hamel basis for Hilbert spaces. Take for example the Hilbert space of wavefunctions in an infinite potential well. Or if you like, the space of square integrable functions on a line segment. There is a theorem which says that the Hilbert space has a basis, called a "Hamel basis," such that every wavefunction can be written as a superposition of a finite(!) number of basis elements. Stop to think about how implausible that sounds. At least to me it seems obvious that there is no such basis, and mathematicians have lost track of what it means for an object to exist.

I have been saying exactly, down to the wording, what you say in your first two sentences for many years. Unfortunately, hoping the Obama administration will do anything coherent or useful is on par with wishing for visits from the tooth fairy.

Yes, for *most* purposes the "position basis" works just like a basis, but determining the dimensionality of the Hilbert space is not one of them. For a "true" basis, each basis vector corresponds to a degree of freedom, so the dimension of the space equals the size of the basis. But in an L2 space, individual points don't "count": functions that differ on a set of measure zero are identified. It takes (uncountably) infinitely many "basis" vectors to make a degree of freedom. So it shouldn't be surprising that the dimension of the space is less than the number of points.
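To make the measure-zero point concrete (a sketch): two square-integrable functions are the same vector of the Hilbert space whenever

```latex
\|f - g\|^2 = \int_0^1 |f(x) - g(x)|^2\,dx = 0
\quad\Longleftrightarrow\quad f = g \ \text{almost everywhere},
```

so altering f at any countable set of points changes nothing. The countable family e_n(x) = sqrt(2) sin(n pi x) already spans the whole space, so the Hilbert-space dimension is aleph_0 even though the "position kets" are indexed by a continuum.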

There *are* continuous maps from R onto R^n (they need not be one-to-one to show that the number of points is the same, because there are also continuous maps from R^n to R). Space-filling curves are not "heavily discontinuous" (whatever that means; being continuous is like being pregnant). Of course the functions are not smooth, but physics is full of non-smooth functions (Brownian motion, most of the paths in a path integral).
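As a concrete illustration of how tame these curves are, here is a sketch using the standard bitwise index-to-coordinate construction of the discrete Hilbert curve (the name `d2xy` is just the conventional one): consecutive indices always land on adjacent grid cells, which is the discrete shadow of continuity, and every cell is visited exactly once.

```python
def d2xy(n, d):
    """Map index d in [0, n*n) to a cell (x, y) of the n-by-n grid
    along the Hilbert curve (n a power of two)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the sub-quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx                      # move into the right quadrant
        y += s * ry
        t //= 4
        s *= 2
    return x, y

n = 16
pts = [d2xy(n, d) for d in range(n * n)]
assert len(set(pts)) == n * n            # visits every cell exactly once
assert all(abs(a - c) + abs(b - d) == 1  # consecutive cells are adjacent
           for (a, b), (c, d) in zip(pts, pts[1:]))
```

Refining the grid (doubling n) refines the curve, and the uniform limit of these piecewise paths is the continuous surjection [0,1] → [0,1]² that Hilbert constructed.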

You may not feel like you need to make these "fine" distinctions (and perhaps the universe doesn't either), but the *mathematicians* that figured out the different ways that divergent sums can have values, including 1+2+3+4+5+... = -1/12, most certainly did.
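For the record, the precise statement behind that divergent sum is analytic continuation of the Riemann zeta function:

```latex
\zeta(s) = \sum_{n=1}^{\infty} n^{-s} \quad (\operatorname{Re} s > 1),
\qquad \zeta(-1) = -\frac{1}{12} ,
```

and it is only in this regulated sense that one writes 1 + 2 + 3 + ... = -1/12, the value that shows up, e.g., in the zero-point-energy sum behind the critical dimension of the bosonic string.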

Astute point. Constructivist mathematicians usually estimate that about 50% of the theorems in a subject like operators on a Hilbert space rely, directly or indirectly, on the axiom of choice (or its equivalents: Zorn’s lemma, Tukey’s lemma, etc.). A foundation stone of functional analysis, the Hahn-Banach theorem, relies on Zorn’s lemma, and the existence of an orthogonal basis of a Hilbert space in cases involving a continuous spectrum likely does as well.

The mathematical insight by Gödel has no relationship to the Heisenberg uncertainty principle and none of the two imply that the laws of Nature cannot be pinpointed precisely, anyway.

The second statement is correct, but there is a sense in which the incompleteness theorem is similar to the uncertainty principle.

Once upon a time, physicists believed that "physics is about what is out there", but the uncertainty principle made it clear that "physics is about what we can say", which is different (as you explained very clearly in the post immediately before this one).

Mathematicians once believed something like "mathematics is about what is necessarily true", but Gödel showed that "mathematics is about what you can prove", which is different.

Does that mean that the incompleteness theorem is relevant to physics? No, of course not. Is it now? No. Will it ever be? Damned if I know.

Correct. 2^ℵ₀ is the cardinality of the continuum. Cantor's theorem shows it is bigger than ℵ₀, but ZFC can't prove whether or not there are any cardinalities in between. The CH says that there aren't, and therefore ℵ₁ = 2^ℵ₀.
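In symbols (a standard summary, nothing beyond ZFC folklore):

```latex
% Cantor: the power set is strictly bigger
|\mathcal{P}(A)| = 2^{|A|} > |A| ,
\qquad 2^{\aleph_0} = |\mathbb{R}| .

% Continuum Hypothesis: no cardinal strictly in between
\mathrm{CH}:\quad \nexists\,\kappa\ \ \bigl(\aleph_0 < \kappa < 2^{\aleph_0}\bigr)
\quad\Longleftrightarrow\quad 2^{\aleph_0} = \aleph_1 .
```

Gödel (1940) showed CH cannot be disproved in ZFC, and Cohen (1963) showed it cannot be proved, so it is independent of the axioms.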

Indeed mathematicians (I am one myself) mean things in a particular way; but that does not mean you shouldn't call them out on their bullshit.

An infinitude of points being unable to span a line, but an infinitude of lines being able to span a plane, is in no way or shape a profound insight; it is a spurious consequence of ivory-tower mathematicians picking the axioms they feel are most elegant, rather than the axioms which best integrate with our body of synthetic knowledge.

One may dismiss things like the Banach-Tarski paradox as a curiosity with no relevance to any experiment; but having to do so is a sad state of affairs. Ideally, our abstract reasoning should allow us to generalize to previously unobserved situations and to predict them correctly. That we don't go around looking for actual spheres to cut up and reassemble into two identical copies is an indication that something is subtly yet horribly broken. Ideally, we would want our deductive results to be both true AND meaningful. Mächtigkeit my ass, Cantor. Stop avoiding the question: how big is that set?

Just why would it be "nice" if we could do away with infinity, or the axiom of infinity? Are you a strict or ultra-finitist? Do you reject real numbers, or the Cauchy and Dedekind constructions of them? Induction? Why constrain math to be less than what it is? "No one shall expel us from the paradise that Cantor has created for us." --David Hilbert

I am not a strict finitist at all; I use 'real' numbers (a funny name, given that no one has ever observed one) and related computational hacks in my work on a daily basis.

What I reject is the notion that someone has given a satisfactory and non-contradictory axiomatization of these concepts.

David is entitled to his opinions, but these are my observations:

1: Infinity is not an internally consistent concept. Chopping away at your axioms, without regard for what your terms mean, until they appear to be consistent should repel a physicist (and the epistemology of guys like Cantor and Hilbert should make them sick to their stomachs, generally). All physicists not thinking about how to perform experiments to double spheres implicitly agree.

2: This does not mean infinity can't be a useful tool. What it does mean is that explosive logics are long overdue for being banished to the history books. No logic with well-defined semantics for symbol lookup is explosive. Infinity may not be consistent; but so what?

3: Its lack of internal consistency may be of little pragmatic concern, but it makes my inner physicist a little skeptical that it should be involved in a fundamental description of reality. But that's just my metaphysical preconception, so feel free to agree to disagree.

I generally try to avoid commenting on mathematical issues here, but as this comment seems to me a harmful piece of misinformation (and matters are not improved by the author calling himself a mathematician), I have decided to make an exception to this rule.

Whether the Banach-Tarski paradox is directly relevant to physics or not matters not one iota to mathematicians; the important thing is that the paradox had a huge impact on mathematics (and some of that has relevance to physics). Among other things, the paradox had a big impact on group theory leading von Neumann to introduce the concept of amenable groups (http://en.wikipedia.org/wiki/Amenable_group ), which are basically groups that do not admit analogous paradoxes. In particular, the groups of isometries of R^1 and R^2 are amenable, so there are no paradoxes in these dimensions. The group of isometries of R^3 is not amenable, of course.

The Banach-Tarski paradox has been used to justify the rejection of the axiom of choice, since indeed it can be shown that without it non-measurable sets do not arise. Yet today there are practically no mathematicians who reject the AC. The reason is that AC allows one to prove many theorems that are very hard to prove without it. Many of them do not actually require the AC, and in many cases it is not known whether the AC is needed or whether a proof not using it has simply not been found yet. In particular, when Banach and Tarski obtained their paradox, they also proved that it could not occur in R^1 and R^2. Their original proof of this fact (in 1924) used the AC, and for quite a long time it was not known whether the non-existence of the paradox in R^1 and R^2 could be proved without AC. Finally such a proof was found by Morse in 1949. So now it is known that measure-theoretic paradoxes do not arise without AC. But this does not mean that other, similar paradoxes do not. First of all, there is the very simple and purely constructive Sierpiński-Mazurkiewicz paradox:

http://www.math.hmc.edu/funfacts/ffiles/30001.1-2-8.shtml

This involves countable sets, so there is no “measure-theoretic” paradox, but there is a paradox nevertheless. Another example is even more interesting because you can actually see it in action:

http://demonstrations.wolfram.com/TheBanachTarskiParadox/

(You need to install the free Wolfram CDF Player to view this.) This time the paradox takes place in the hyperbolic plane (unit disc), the Axiom of Choice plays no role, and only Borel sets are needed. Since the hyperbolic plane has infinite measure there is no “measure-theoretic” paradox (we simply see that 2·∞ = 3·∞ = ∞), but it certainly does look paradoxical to our eyes. Whether this is “physical” or not is not my concern, or that of almost any mathematician. As for the reference to "bullshit", I see it as a kind of boomerang that missed the target and hit the thrower.

Your article says nothing about Gödel's theorem (which is different from Cantor's), so your claim of its irrelevance for physics is completely unjustified. The core part of your post is about the cardinality of Hilbert space (nothing to do with Gödel). Even there, the fact is that most physics is based on a separable Hilbert space, and that holds true despite the use of Lebesgue integrability as an extension of Riemann's ... So, a lot of nice but unrelated things put together.

No, Giulio, you're wrong. All the insights about Gödel theorems and similar things would be impossible without the basic observations and arguments by Cantor, and already this starting point is unphysical.

Gödel's theorems are surely different from Russell's paradox etc., but the reason why both of these things are unphysical has nothing whatever to do with the minor technical differences between these mathematical results, which is why these technical features of the individual theorems would be entirely off-topic in a discussion of their unphysical character.

"Whether this is “physical” or not is not my concern or that of almost any mathematician."

This is exactly the ivory-tower mentality which is at the root of this discord. Synthetic knowledge without analytical insight is just plain data; but analytical knowledge without a connection to the body of our synthetic knowledge is devoid of meaning. Quine 101.

I fiercely disagree with the epistemology of people like Hilbert who feel mathematics is just a game with symbols. Sure, one can always ask 'what are the consequences of these axioms in the abstract', but as a physicist, at some point you want to know which axioms are 'true': axioms which support downstream conclusions that are physical, and which best integrate with your body of synthetic knowledge.

If you want to rely on a bunch of guys navel-gazing inside an ivory tower to find out which axioms are true, you are doing yourself a big disservice as a physicist.

Thanks for your reply. OK, let's start from Cantor's theorem, agreed. Now, the fact that e.g. computer files are countable does not imply that (for instance) the halting problem is unphysical... Or in other words, we can think that the "physical numbers" or the "physical Hilbert space" are separable, and that's it; why should we apply Cantor's diagonal tool here? The tool is valid (like any other math theorem); it's just that its usage seems unphysical in that specific context.
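The diagonal tool itself is almost embarrassingly simple; here is a minimal sketch (a finite truncation of Cantor's argument, which is also the engine inside the undecidability proof of the halting problem):

```python
def diagonal(rows):
    """Cantor's diagonal trick on a (finite prefix of an) enumeration of
    0/1 sequences: flip the i-th digit of the i-th row, producing a
    sequence that differs from every listed row somewhere."""
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
d = diagonal(rows)
# d disagrees with row i at position i, so no enumeration is complete
assert all(d[i] != rows[i][i] for i in range(len(rows)))
```

Whether one regards this as physical is exactly the question under debate; the construction itself is as finitary as it gets on any truncation.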