It seems there are a lot of respected physicists appearing on pop-sci programs (discovery channel, science channel, etc.) these days spreading the gospel of "we can know, we must know."

Three examples, quickly: 1) Many programs feature Michio Kaku saying that he is on a quest to find an equation, "perhaps just one inch long," which will "describe the whole universe." 2) Max Tegmark has come out with a new book in which he expresses the gut feeling that "nothing is off-limits" to science. The subtitle of this book is My Quest for the Ultimate Nature of Reality. 3) In the series Through the Wormhole there is talk about a search for the "God equation."

(A good counter-example would be Feynman, but his self-described "non-axiomatic" or "Babylonian" approach does not seem popular with physicists today.)

Is there any sense among physicists that it might be impossible to articulate the "ultimate nature of reality" in equations and formal logic? It seems to me that physicists are following in the footsteps of the mathematicians of the late 19th and early 20th centuries (led by Hilbert) who were on a similar quest, one put to rest by Gödel's incompleteness theorems in 1931. Is there any appreciation for how the incompleteness theorems might apply to physics?

Has any progress been made on Hilbert's sixth problem (the axiomatization of physics, posed for the 20th century)? Shouldn't this be addressed before getting all worked up about a "God equation"?

Imho, you can safely discount $99.99 \%$ of that as just chatter. Taking advantage of the fecund atmosphere for science popularization, many people (some expert and some not so) are trying their hand at drumming up some excitement.
– Siva, Jan 21 '14 at 2:01

4 Answers

First regarding: Is there any appreciation for how the Incompleteness Theorems might apply to physics?

To put this in perspective, imagine Newton had said: "Oh, looks like my $F = m a$ is pretty much a theory of everything. So now I could know everything about nature if only it were guaranteed that every sufficiently strong consistent formal system is complete."

And then later Lagrange: "Oh, looks like my $\delta L = 0$ is pretty much a theory of everything. So now I could know everything about nature if only it were guaranteed that every sufficiently strong consistent formal system is complete."

And then later Schrödinger: "Oh, looks like my $i \hbar \partial_t \psi = H \psi$ is pretty much a theory of everything. So now I could know everything about nature if only it were guaranteed that every sufficiently strong consistent formal system is complete."

And so forth.

The point being that what prevented physicists 300 years ago, 200 years ago, and 100 years ago from knowing, in principle, everything about physics was never any incompleteness theorem, but always two things:

they didn't actually have a fundamental theory yet;

they didn't even have the mathematics yet to formulate what later was understood to be the more fundamental theory.

Gödel's incompleteness theorem plays much the same role in pop culture as "$E = m c^2$": people like to allude to it with a vague feeling of deep importance, without really knowing what its impact is. Gödel incompleteness is a statement about the relation between a metalanguage and an "object language" (it is the metalanguage that allows one to know that a given statement "is true", after all, even if it cannot be proven in the object language!). To even appreciate this distinction one has to delve a bit deeper into formal logic than I typically see people do who wonder about its relevance to physics.
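(For readers who want the precise content being alluded to, here is one standard statement of the first incompleteness theorem; the paraphrase is mine, added for reference. If $T$ is a consistent, effectively axiomatized formal system that interprets elementary arithmetic, then there exists a sentence $G_T$ in the language of $T$ with

$$T \nvdash G_T \qquad \text{and} \qquad T \nvdash \neg G_T \,.$$

The judgement that $G_T$ is nevertheless "true" is made in the metalanguage, not inside $T$ itself, which is exactly the metalanguage/object-language distinction at issue.)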

And the above history suggests: it is in any case premature to worry about the fine detail of formal logic as long as the candidate formalization of physics that we actually have is glaringly insufficient, and in particular as long as it seems plausible that 100 years from now fundamental physics will be phrased in new mathematics, compared to which the present tools of mathematical physics will look as outdated as those from 100 years back do to us now. Just open a theoretical physics textbook from the turn of the 19th to the 20th century to see that, given our knowledge of physics, it would have been laughable for the people back then to worry about incompleteness. They had to worry about learning linear algebra and differential geometry.

And this leads directly to

second: Has any progress been made on Hilbert's 6th problem for the 20th century?

One answer is: there has been considerable progress (see the table right at the beginning of the slides, or also in this talk note). Lots of core aspects of modern physics have a very clean mathematical formulation. For instance, gauge theory is firmly captured by differential cohomology and Chern-Weil theory, local TQFT by higher monoidal category theory, and so forth.

But two things are remarkable here: first, the maths that formalizes aspects of modern fundamental physics involves the crown jewels of modern mathematics, so something deep might be going on, but, second, these insights remain piecemeal. There is a field of mathematics here, another there. One could get the idea that somehow all this wants to be put together into one coherent formal story, only that maybe the kind of maths used these days is not quite sufficient for doing so.

This is a point of view that, more or less implicitly, has driven the life work of William Lawvere. He is famous among pure mathematicians as being the founder of categorical logic, of topos theory in formal logic, of structural foundations of mathematics. What is for some weird reason almost unknown, however, is that all this work of his has been inspired by the desire to produce a working formal foundations for physics. (See on the nLab at William Lawvere -- Motivation from foundations of physics).

I think anyone who is genuinely interested in the formal mathematical foundations of physics, and in the questions of whether a fundamental formalization is possible and, more importantly, whether it can be useful, should try to learn about what Lawvere has to say.

Of course reading Lawvere is not easy. (Just like reading a modern lecture on QFT would not be easy for a physicist from the 19th century, had he been catapulted into our age...) That's how it goes when you dig deeply into foundations: if you are really making progress, then you won't be able to come back and explain it in five minutes on the Discovery Channel.
(As in Feynman's: If I could tell you in five minutes what gained me the Nobel, then it wouldn't have.)

A little later this month I'll be giving various talks on this issue of formally founding modern physics (local Lagrangian gauge quantum field theory) in foundational mathematics in a useful way. The notes for this are titled Homotopy-type semantics for quantization.

I can only speak from my personal experience (which seems fair enough since this question is subjective). Most physicists I know, including myself, are much more humble on what physics knows now and will know in the future compared to the "celebrity physicists" you mentioned.

It's fairly easy to see from the history of the field that whenever we think we are close to explaining it all, something new is observed, or some small inconsistency turns out to open up an entire new branch of physics. From these experiences, it seems highly doubtful to me that we would ever come close to pushing a single theory to the point where we have to worry about Gödel's theorems (that is, worry about completeness; after all, it could easily be the case that the true statements which our theory cannot prove are simply not relevant to our universe, i.e. to experiments). Furthermore, I've yet to hear a good definition of what we mean by "one theory". After all, QFT is much more of a framework, and the Standard Model is just one of many possible applications of that framework. We fit the Standard Model to conform to our observed universe. So what exactly do those physicists mean by a "God equation"? Do they mean one framework from which multiple equations can arise?

I guess I'm answering questions with questions, but it is only to make the point that these "dream theories" can become idealized to the point of myth.

It seems to me that what will most likely occur in the future is some framework or language that can be used to describe gravity and dark energy in addition to the other forces. This framework will be applied to some Standard Model version 2 that incorporates dark matter and other observed matter. That does not mean one equation. It just means one unified way of thinking about things. It will likely lead to many equations, with a good number of assumptions that are accepted only because they accurately predict experiment.

The question Is there any sense among physicists that it might be impossible to articulate the "ultimate nature of reality" in equations and formal logic? is about a belief (like the faith of a religion) that individual physicists may or may not hold. Just as physicists have many different faiths, I think physicists have different beliefs on this issue. So it is hard to answer yes or no, since physicists do not have a common opinion.

However, I think many physicists share the opinion of Dao-De-Jing 道德经 on a related issue:

2500 years ago, Dao-De-Jing 道德经 expressed the following point of view:
(English translation)

The Dao that can be stated cannot be the eternal Dao.

The Name that can be given cannot be the eternal Name.

The nameless nonbeing is the origin of universe;

The named being is the mother of all observed things.

Within nonbeing, we enjoy the mystery of the universe.

Among being, we observe the richness of the world.

Nonbeing and being are two aspects of the same mystery.

From nonbeing to being and from being to nonbeing is the gateway to all understanding.

Here DAO ~ "ultimate nature of reality".
So the point of view is that "ultimate nature of reality" exists. But any (or current) concrete description of "ultimate nature of reality" in terms of equations and formal logic is not a faithful description of the "ultimate nature of reality".

The Matrix is a story of two worlds: a real material world and a virtual information world (inside computers). The real material world is formed by elementary particles. The virtual information world is formed by bits. (My point of view:) in fact, the real material world is not real; the virtual information world is more real. The material world and the information world are actually the same world.
To be more precise, our world is a quantum information world:

Space = a collection of many many qubits.

Vacuum = the ground state of the qubits.

Elementary particles = collective excitations of the qubits.

In other words, all matter is formed by the excitations of the qubits.

We live inside a quantum computer.

"ultimate reality" = qubits, "God equation" = Schrödinger equation

-- this is AN approximate approach to "ultimate nature of reality" (or DAO).
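The picture above can be caricatured in a few lines of code: a toy "universe" of a handful of qubits, whose ground state plays the role of the vacuum and whose lowest collective excitation plays the role of a particle. This is a minimal sketch only; the transverse-field Ising chain used as the stand-in Hamiltonian is my own choice for illustration, not the string-net Hamiltonian of Wen's actual work:

```python
import numpy as np

# Single-qubit Pauli operators
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron_list(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def tfim_hamiltonian(n, J=1.0, h=1.0):
    """Transverse-field Ising chain on n qubits (open boundary):
    H = -J * sum_i Z_i Z_{i+1} - h * sum_i X_i."""
    dim = 2 ** n
    H = np.zeros((dim, dim))
    for i in range(n - 1):            # nearest-neighbour coupling
        ops = [I2] * n
        ops[i], ops[i + 1] = Z, Z
        H -= J * kron_list(ops)
    for i in range(n):                # transverse field on each qubit
        ops = [I2] * n
        ops[i] = X
        H -= h * kron_list(ops)
    return H

# Exact diagonalization of a 4-qubit "universe"
H = tfim_hamiltonian(4)
energies, states = np.linalg.eigh(H)  # eigenvalues in ascending order
vacuum_energy = energies[0]           # "vacuum" = ground state of the qubits
gap = energies[1] - energies[0]       # a "particle" = lowest excitation above it
print("vacuum energy:", vacuum_energy, "excitation gap:", gap)
```

The point of the toy model is only that "vacuum" and "particle" are both properties of one and the same collection of qubits, distinguished solely by the state the qubits are in.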

In your link it mentions that gravity is not unified with the others through string-nets. However, I remember that in your paper with Levin you mention that LQG may be a string-net. I believe the program to join the two was called quantum graphity at one point? I know that there was a paper published on it in Phys. Rev. D. Separately, Michael Freedman has played with some string-net-like ideas "off lattice" using a "quantum gravity" Hamiltonian. Do you know if there is a good reference for how this program has continued?
– Matthew Titsworth, Jan 22 '14 at 4:47


I feel that string-net or LQG is a good description of gauge theory. But I still do not see (understand) if string-net or LQG is a good description of gravity or not. Gu and I have a paper on emergent (linearized) gravity, but that is not based on string-net nor LQG (see arxiv.org/abs/0907.1203 ).
– Xiao-Gang Wen, Jan 22 '14 at 5:50


If space is a collection of many many qubits, and elementary particles are collective excitations of the space-forming qubits, an observable consequence is that the U(1)×SU(2)×SU(3) Standard Model is incomplete. There must be an additional Z_2 gauge theory, which will lead to new cosmic strings (Z_2 flux lines). See arxiv.org/abs/1210.1281 (section IV D).
– Xiao-Gang Wen, Feb 5 '14 at 14:34

The history of physics shows that each generation of physicists, theoretically and experimentally biased ones alike, believes at some point that they have found the holy grail or are very close to finding it. Certainly this was true in the nineteenth century, when mathematics reigned and theories were so complete and beautiful that physicists thought all that was left was applications of known theories. That is a type of hubris.

What changed the game was newer and better experiments that exposed inconsistencies in the predictions of their Theory Of Everything (TOE).

It is fair to suppose that the goal will always be a TOE, and to hypothesize that newer and better experimental data will open up, again and again, the scope of what the TOE describes. Because this has to be said: at the limits of the experimental domains of their applications, newer theories and older theories blend; usually the older ones are shown to emerge from the newer ones (as, for example, thermodynamics emerges from statistical mechanics). There is consistency in our theories.

Now as for Gödel and his theorem, which I remember from my mathematics course in the form "the set of all sets is open": as applied to a TOE, it is not inconsistent with the above view. What may happen, though, is that we will reach the limits of our possible experimental verifications, and the openness will become a moot point, passing into metaphysics.

I don't think that "the set of all sets is open" is one of Gödel's incompleteness theorems.
– Glen The Udderboat, Jan 21 '14 at 8:40

Well, it was in a set theory course back in 1960, so I may be paraphrasing; the professor might have proven this using Gödel's theorem.
– anna v, Jan 21 '14 at 9:25


Anna, I think you may be thinking of the Russell Antinomy, which shows it cannot be logically consistent to think of the set of all sets as a whole - this leads to the idea of a proper class and to type theory. What's really interesting historically is the quote by Cantor that I give here: mathoverflow.net/a/66187/14510. I find it fascinating that Cantor seems to have grasped that there are concepts that are off limits to set theory: he was well aware of the Russell Antinomy and seemed to take it in his stride; he just takes it as a given that sometimes set concepts are not well founded, ...
– WetSavannaAnimal aka Rod Vance, Jan 22 '14 at 4:37

... and he seems to think that's OK - implicitly, I think, he was saying to Dedekind that the onus was on the mathematician to check that his or her sets were not what he called "infinite or inconsistent multiplicities". What a deft sidestep: "I define my theory to be sound whenever it is sound!" Although sounding like a bit of a con, it is really quite a stroke of genius. It is a shame that he never published his ideas on "infinite or inconsistent multiplicities", probably because Kronecker and others influential in mathematics publishing were dead against him.
– WetSavannaAnimal aka Rod Vance, Jan 22 '14 at 4:42

BTW - there are definite likenesses between the construction of Gödel-unprovable sentences, the set of all sets that do not contain themselves as members in the Russell Antinomy, and the construction of uncomputable numbers (with respect to a given language) - they are all generalised versions of the Cantor Slash argument. (I must say, the English name of this procedure is the most apposite of all - most languages render it "Cantor Diagonalisation", but "Cantor Slash" conveys the mathematical violence the argument does!)
– WetSavannaAnimal aka Rod Vance, Jan 22 '14 at 4:46