I'll write the question, but I'm not fully confident in the premises I'm making here. I'm sorry if my proposal is too silly.

Hilbert's sixth problem, proposed in $1900$, roughly asked for axioms for physics. I guess that at the time such a thing was impossible due to the nature of physics, which is mainly based on observations and models. But it seems that after Gödel's work in $1931$, axioms stopped being seen as self-evident truths and started to be seen as unprovable statements, and the job of a mathematician became, broadly, deriving theorems from these axioms.

So if this shift in axiomatic conception really happened, couldn't we just accept anything (including the physical observations) as axioms and reason about their consequences, thus somehow solving Hilbert's sixth problem?

This question seems to be based on a misconception about incompleteness and the point of axiomatizing. We seek concise axioms as a way to understand how well a theory can be described when based on a few organizing principles. If you take all observations as axioms, you do not gain understanding. The answer to the title question is: no substantial change.
– Scott Carnahan, Nov 23 '13 at 3:22


The physical observations are all approximate, and some of them are wrong, so you would also wind up with contradictions. Nor can you predict anything without a law, but the law never follows axiomatically from the observations.
– joseph f. johnson, Nov 27 '13 at 23:51

3 Answers

Although the axioms may have been taken as self-evident in mathematics, Hilbert did not really want mathematically self-evident axioms to be the basis for physical axioms. Since Gauß and hyperbolic geometry, it has been well known that you can get equally valid models from different assumptions that could all be seen as "self-evident". Do we have elliptic, hyperbolic, or Euclidean geometry? This depends on your axioms, and without physics you can't decide between them. So, whatever the axioms look like, they must contain some physics.

As I see it, Gödel's theorems therefore had no influence on this problem in the sense you describe. Hilbert's idea was to start with a bunch of axioms that could explain a large class of physical phenomena, and then successively add axioms to explain more phenomena and come closer to reality. Obviously, at each step you have to prove mathematically that all the old results remain valid and that your axioms are not inconsistent (Gödel's theorems, of course, have an impact on this). He didn't say anything about how to get the axioms. I guess that what he really had in mind is something like the special theory of relativity: you take the invariance of the speed of light and the relativity principle, which you can formulate in mathematical terms, add a few axioms of the underlying geometry, and from there you can derive special relativity. In particular, he was concerned with statistical mechanics (and later quantum mechanics) in the sense that people used mean values and thermodynamic limits to get results, concepts that had no solid mathematical foundations at the time (the axiomatization of probability theory was done by Kolmogorov some time later; thermodynamic limits remain, as far as I know, very often problematic).
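To make that example concrete, here is a sketch of my own (not part of the original answer) of the derivation alluded to: in one spatial dimension, the relativity principle forces the change between inertial frames to be linear, and the invariance of the speed of light forces that linear map to preserve the interval $c^2t^2 - x^2$, which already pins down the Lorentz transformation:

$$x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

from which time dilation, length contraction, and the velocity-addition law follow as theorems rather than further postulates.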

This means, as already sort of pointed out in the comments, the problem is really about mathematically rigorous physics. Hilbert's idea was to make mathematics rigorous wherever it was used - and rigorous in his sense (which is still the sense today) is that you start with a bunch of axioms and derive everything from there.

So in a sense, Hilbert didn't care where the axioms come from; he just wanted to have some. He didn't even restrict the number of axioms. Of course, we could just put it all together and describe the remaining problems by taking the observations as axioms, but then we either get contradictions/inconsistencies that we can prove, which means we have to discard these axioms, or we have a myriad of axioms that are not really connected and are very peculiar, which is not really satisfying. What we want is a "minimal" system of axioms for our theories; otherwise it gets too complicated.

Are we there yet? By no means. We have a few theories with good axiomatic foundations (like ordinary quantum mechanics, general relativity, or classical mechanics), where we have "a" solution, but maybe not a nice one (Hilbert spaces in quantum mechanics? Not very intuitive... C*-algebras are a bit better, but still), and a few where this is work in progress, like quantum field theory (e.g. renormalization is not rigorous at many points; path integrals aren't; all this stuff). And we certainly aren't anywhere really close to having a mathematically rigorous TOE.

Hilbert's Sixth problem is not the same as finding the theory of everything and then making the maths rigorous. This is a very common misconception, and has led to people thinking that making renormalisation in QFT rigorous was the main thing to do.

But in fact Hilbert stated explicitly that it would be just as important to axiomatise false physical theories. I interpret this as: well, QM is false since it is not generally covariant, and GR is false since it is not quantum, but it is still important to see whether or not they can be axiomatised.

Archimedes, Newton, Maxwell, and Hertz were all true physicists who published axiomatic treatments of a branch of Physics. Ironically, although Hertz and Maxwell are most famous for their contributions to Electricity and Magnetism, they published axiomatisations of Mechanics alone.

Another misconception is that Kolmogoroff solved the part of Hilbert's problem related to probabilities. This misconception was not shared by Kolmogoroff! He well knew that axiomatising the purely mathematical theory of probabilities was merely a useful preliminary: what Hilbert really wanted was to axiomatise the concepts of physical probability. Within physics, is 'probability' a new, primitive, concept to be added to Hertz's list, along with mass and time, or can it be precisely defined in terms of mass, time, etc. ?

Unless grand unification or renormalisation throws up new axiomatic difficulties, the only two things left to do to solve Hilbert's Sixth Problem are: a) the problem which Wigner pointed out, about the concept of measurement in QM (Bell analysed the problem the same way Wigner did: http://www.chicuadro.es/BellAgainstMeasurement.pdf), and b) the definition of physical probability, i.e., the concept of probability which occurs in QM. Hilbert himself was worried about causality in GR, but solved that problem himself. Hilbert pointed to the lack of clarity in the relation between Mechanics and Stat Mech, but Darwin and Fowler solved that in the 1920s.

Many physicists have pointed to the possibility of fixing the 'measurement' problem Wigner was worried about, notably H.S. Green in "Observation in Quantum Mechanics," Nuovo Cimento vol. 9 (1958) no. 5, pp. 880-889, posted by me at http://www.chicuadro.es/Green1958.ps.zip, and now, with more realistic models, Allahverdyan, Balian, and Nieuwenhuizen (arXiv:1003.0453): they have analysed the physical behaviour of a measurement apparatus and shown that the measurement axioms of QM follow, approximately, from the wave equation. They do this in a logically circular and sloppy way, but the logic can be fixed.

Physical probability can be defined in QM, and its definition there is parallel to its definition in Classical Mechanics: each involves the use of a new kind of thermodynamic limit (in the quantum case http://arxiv.org/abs/quant-ph/0507017, one in which not only does the number of degrees of freedom of the measurement apparatus increase without bound, but Planck's constant goes to zero).

So the people who did the most important work are: Hilbert, Wiener, Weyl, Schroedinger, Darwin, Fowler, Kolmogoroff, Wigner, Khintchine, H.S. Green, Bell, Prof. Jan von Plato, and myself. (Schroedinger could be included twice: he and Debye helped Weyl formulate the first axiomatisation of QM. Later, he influenced H.S. Green in his treatment of measurement as a phase transition.)

More specifically as to your particular concerns

Goedelisation

Speaking historically, Goedel's incompleteness theorem has had no influence on those working on this problem. Whether this was myopia or higher wisdom will now be addressed.

Since Physics is about the real world, there are no real worries about its consistency. What is not clear is whether it needs to contain Peano arithmetic. Sets are not physically real, so numbers are not either. It is not even clear whether Physics needs the second-order parts of Logic that produce incompleteness. The usual axioms of QM contain a typical Hamiltonian dynamics, so all physical questions of the form «If the system begins in state $\psi_o$ at time $t_o$, what will be its state at time $t$ ? » are answerable in closed form, and computable to any degree of approximation desired, so the system is physically complete, so to speak. Note that all these questions are essentially first-order questions.
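As a small numerical illustration of that computability claim (my own sketch; the $2\times2$ Hamiltonian and the units are arbitrary assumptions made for the example): for a finite-dimensional system, the question «state at time $t$ given $\psi_o$ at $t_o$» is answered in closed form by the propagator $e^{-iH(t-t_o)/\hbar}$:

```python
import numpy as np

hbar = 1.0  # natural units (an assumption for the example)
# Toy 2x2 Hermitian Hamiltonian -- an arbitrary illustrative choice
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])
psi_0 = np.array([1.0, 0.0], dtype=complex)  # state at t_0
t_0, t = 0.0, 2.0

# Closed-form evolution: psi(t) = exp(-i H (t - t_0)/hbar) psi_0,
# computed via the spectral decomposition H = V diag(E) V^dagger.
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * (t - t_0) / hbar)) @ V.conj().T
psi_t = U @ psi_0

print(np.allclose(np.linalg.norm(psi_t), 1.0))  # prints True: norm preserved
```

For infinite-dimensional Hamiltonians one truncates to a finite basis and refines; that is the sense of "computable to any degree of approximation desired" above.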

As someone else pointed out, relative consistency is just as interesting as consistency, and there are no real worries about that, either.

Hilbert himself explicitly pointed to his own axiomatisation of Euclidean Geometry as an example for Physics. Those axioms do not allow one to define sets or to construct all the real numbers.

Computability

Some have tried to argue that since one can physically build a computer (or even a Turing machine), the axioms of Physics must imply everything that the theory of computation implies, including its own incompleteness. But this is obviously false: it is physically impossible to build a noiseless digital computer. The Boolean world can only be approximately realised by physically constructed machines, and the proofs of incompleteness are invalidated once you introduce the notion of approximation. No one has even formulated a theory of physically realisable devices that would be parallel to the idealised theory of computation which mathematicians invented.

And conversely: others have tried to argue the other way, that since Physics (certainly QM) is approximately computable, it must therefore be incomplete. To me this seems merely confused. Not every computable theory satisfies the hypotheses of Goedel's incompleteness theorem: first-order logic is consistent and complete, and its theorems are effectively enumerable (theorems of Goedel and Herbrand).

Undecidable problems in Maths are not physical

Examples of undecidable problems in Maths are: given any set of generators and relations, decide whether the group they determine is non-trivial or not.
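To make the asymmetry in that problem concrete, here is a sketch of my own (with a hypothetical toy presentation, not anything from the answer): triviality is semi-decidable, since one can enumerate consequences of the relators and confirm that a generator collapses to the identity when the group really is trivial, but for a non-trivial group no computable bound tells you when to give up, and that unbounded search is exactly where the undecidability lives.

```python
# Semi-deciding triviality of a finitely presented group by breadth-first
# search over words (capital letter = inverse of the lowercase generator).
from collections import deque

def free_reduce(word):
    """Cancel adjacent inverse pairs like 'aA' or 'Aa'."""
    out = []
    for c in word:
        if out and out[-1] != c and out[-1].lower() == c.lower():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def inverse(word):
    return ''.join(c.swapcase() for c in reversed(word))

def reduces_to_identity(word, relators, max_steps=10000):
    """True if `word` provably equals the identity; False means only that the
    bounded search gave up (NOT a proof of non-triviality)."""
    moves = set(relators) | {inverse(r) for r in relators}
    start = free_reduce(word)
    seen, queue = {start}, deque([start])
    steps = 0
    while queue and steps < max_steps:
        w = queue.popleft()
        steps += 1
        if w == '':
            return True
        # Inserting any relator (each equals the identity) anywhere, then
        # freely reducing, reaches every word equal to w in the group.
        for r in moves:
            for i in range(len(w) + 1):
                nxt = free_reduce(w[:i] + r + w[i:])
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False  # inconclusive within the bound

# <a | a^3, a^4> is the trivial group (gcd(3,4) = 1), so 'a' must collapse.
print(reduces_to_identity('a', ['aaa', 'aaaa']))  # prints True
```

Running the same search on $\langle a \mid \rangle$ (the infinite cyclic group, no relators) returns False, but only because the bound stopped it; that inconclusive half is the undecidable one.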

Well, Physics doesn't use generators and relations.

Involving Hilbert Spaces: I do not know whether it is undecidable, but it is certainly a wild problem to classify, up to unitary equivalence, pairs of operators on a given Hilbert space. But in QM, because of relativity, not every subspace of a Hilbert space is physical. Let $G$ be the Lorentz group and $K$ a maximal compact subgroup of $G$. The only physically significant Hilbert spaces are those with a dense subspace of $K$-finite vectors, so the only physically significant subspaces $V$ of a given Hilbert space are those whose intersection with the $K$-finite vectors is dense in $V$. So no operator whose image does not satisfy this property can be «physical». This tames the problem considerably, essentially reducing it to algebra instead of analysis.

The Halting Problem: many people on this site have already tried to argue that since it involves infinite time behaviour, this is an unphysical problem. To me, this objection seems too easy and philosophical. The stronger objection is that there are no digital computers in the real world. No Turing machines. Because all we can make are noisy approximations to a digital computer or a Turing machine. There are no exactly self-reproducing units in Nature, only approximately reproducing units. (This makes a big difference as to the probabilities involved.) Now, since the theoretical conclusions about incompleteness etc. depend on the precise behaviour of these idealisations, there is no reason to think they hold good for actual noisy machines which halt on their own because they get tired...

What would it take to make a Goedel-style revolution in Physics?

Many physicists have already decided that it has taken place, but they are not the ones working on Hilbert's Sixth Problem. Wigner and Bell were capable of understanding Hilbert's axiomatic attitude, and Wigner's analysis of the problem with the axioms of QM is thoroughly in Hilbert's spirit. If the problem Wigner pointed to could not be solved, and if QM stays (in this respect) a fundamental part of Physics (as both Steven Weinberg and I are convinced it will, unlike J.S. Bell, who was convinced the problem was insoluble and that QM would therefore be reformed in such a way as to remove the difficulty), then Hilbert's Sixth Problem will have suffered the same blow Goedel dealt to his Second Problem. Many physicists have decided, by anticipation, that this is the case.

But there are at least two mainstream views on which Wigner's problem can be resolved by demoting the measurement axioms to approximations deducible from the other axioms. Decoherence theory is not yet the consensus of the Physics community, but it would save Hilbert's bacon; there are many posts on this forum about it. The line of reasoning I prefer, initiated by H.S. Green and made more realistic by Allahverdyan et al., referred to above, does the same (even though they were not concerned with Hilbert's concerns and thus do not do things in a logically clear way: they make free use of all six axioms while analysing the physics of a measurement apparatus). Feynman was of the opinion that something like this could be done.

The differences between the decoherence approach and the phase-transition approach are physical and should, eventually, be susceptible to experimental tests, to rule out one or the other approach.

Peter Selinger's work 'Generators and relations for n-qubit Clifford operators' is classified as 'Quantum Physics', so is it really true Physics doesn't use generators and relations? Also, isn't Anton Zeilinger's claim that “quantum randomness is irreducible and a manifestation of mathematical undecidability” suggested by his experiment given appropriate coding, “whenever a mathematical proposition is undecidable within the axioms encoded in the state, the measurement associated with the proposition gives random outcomes.”? It's not clear your claims above are true if others disagree.
– user34445, Nov 28 '13 at 18:26

I addressed «qubits» when I pointed out that all idealisations of computing machines are unphysical. This applies to qubits as well. When I say most physicists, I exclude Zeilinger. The decoherence approach, although I do not agree with it, is quite widespread, a little vague, and not (yet) the consensus. But as I understand it, the decoherence approach agrees in deducing quantum randomness from Schrodinger's equation. If something like this is not accomplished, then, as I pointed out, Wigner's critique of the axiomatisation of QM would indeed be decisive: the axioms would be inadequate.
– joseph f. johnson, Nov 28 '13 at 20:00

All mathematical descriptions of physical systems are also idealisations, hence unphysical. How are mathematical descriptions of physical systems so different from idealisations of computing machines? It seems arbitrary to accept one and reject the other... With respect to axioms being inadequate, I've already addressed that with the apple illustration. Probable behaviour conforming to ideal behaviour is sufficient epistemological grounds for asserting axiomatic ontologies. Newton's gravity, it turns out, is inadequate given relativity, yet we didn't discard it, and haven't, have we?
– user34445, Nov 28 '13 at 20:27

In the context of Hilbert's problem, we are to take the model as exact and deduce its consequences strictly logically. In the context of the Theory of Everything, there are still some physicists, perhaps not too many, like Steven Weinberg, who hope to get an exact theory. In the context of applying the results of the theory of computation: since they are mathematical deductions, we don't know whether they remain valid if the mathematical model is merely an approximation. They're only valid if the model is exact; the conclusions aren't robust. A revision will emphasise these points.
– joseph f. johnson, Nov 29 '13 at 0:26

You haven't answered the question, just provided more unfounded assertions... On what basis are idealizations to be judged acceptable or not? Exact theories tend to be the domain of metaphysics only, and physics, meaning nature, rarely works that way; nevertheless, nothing in logic, at least, says an axiomatic system must be constrained like this. The argument that a computing machine is 'unphysical' addresses nothing, since most 'physical laws' are abstract metaphysical descriptions of how we believe nature to be, subject to correction given observation. The world differs from its description!
– user34445, Nov 29 '13 at 4:10

"But, despite their remoteness from sense experience, we do have something like a perception of the objects of set theory, as is seen from the fact that the axioms force themselves upon us as being true. I don't see any reason why we should have less confidence in this kind of perception, i.e. in mathematical intuition, than in sense perception."

This suggests that he, meaning Gödel, lent the same epistemological weight to mathematical intuition as he did to sense perception, which would be one way of solving this problem, because it means that direct physical observation is no less valid an epistemological method than metaphysical reasoning.

I don't see the relevance of epistemology to axiomatisation. This would be more like an answer if you could explain this. The apparent difficulty, which, unless you explain it away, would lead one to think this is no answer at all, is that physical observation and epistemology bear on the truth of an axiom, but are useless in studying the adequacy, completeness, and independence of the axioms. The difficulty of how to relate the measurement axioms of QM to the Schroedinger equation axioms is a case in point: all five are true, at least approximately. The question is their logical relation.
– joseph f. johnson, Nov 26 '13 at 1:53

Could you tell me what happens when you drop an apple, and why? (By the way, epistemology is the study of knowledge, meaning how we know stuff, not truth.)
– user34445, Nov 26 '13 at 21:45

Well, the distinction between how we know P is true and the truth of P, although valid, is irrelevant to my question. Neither is relevant to axiomatics, and neither is relevant to the concerns of Wigner and Bell about Dirac's axiomatisation of QM. I think neither is relevant to Hilbert's concerns. Have you read Leo Corry on Hilbert's Sixth Problem?
– joseph f. johnson, Nov 27 '13 at 0:02

It’s not clear you’ve read Corry (who makes this paper available on his homepage). It was Jean Dieudonné, says Corry, who suggested 'one does not speak about truths', and this attitude later became attributed to Hilbert (says Corry); but in fact Hilbert was diametrically opposed to such a notion, as Corry himself notes: look at the Hilbert quote he provides on page 5. Regardless, let's assume we cannot know in a metaphysical sense that it is true that 'the apple will fall to the ground'. We have observed it enough times that such an outcome is probable.
– user34445, Nov 27 '13 at 1:38

Since this is how the apple seems to behave, we can speak of probability in terms of degree of truth (or truth in terms of probability). The relationship between our mere observation of probabilities and our concept of what is 'true' (and describable) is precisely what Hilbert wrangled with. Do you know of the mind-body problem of QM? I hope you realize that if our act of observing a system influences the system itself, this is exactly an epistemological problem ...
– user34445, Nov 27 '13 at 1:39