
Monday, March 16, 2020

Unpredictability, Undecidability, and Uncomputability

There are quite a number of mathematical theorems that prove the power of mathematics has its limits. But how relevant are these theorems for science? In this video I want to briefly summarize an essay that I just submitted to the essay contest of the Foundational Questions Institute. This year the topic is “Unpredictability, undecidability, and uncomputability”.

Take Gödel’s theorem. Gödel’s theorem says that if a consistent set of axioms is sufficiently complex, you can formulate statements that can neither be proved nor disproved from those axioms. So, mathematics is incomplete in this sense, and that is certainly a remarkable insight. But it’s irrelevant for scientific practice. That’s because one can always extend the original set of axioms with another axiom that simply postulates whether the previously undecidable statement is true or false.
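The extension move described here can be written compactly. This is the standard statement of the first incompleteness theorem plus the iteration step; the notation ($T$, $G_T$) is mine:

```latex
% For any consistent, recursively axiomatized theory T containing enough
% arithmetic, there is a sentence G_T that T can neither prove nor refute:
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T .
\]
% Adding the undecidable sentence as a new axiom yields a consistent
% extension -- which is again incomplete, with a new undecidable sentence:
\[
  T' = T + G_T , \qquad T' \nvdash G_{T'} \quad\text{and}\quad T' \nvdash \neg G_{T'} .
\]
```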

How would we deal with mathematical incompleteness in physics? Well, in physics, theories are sets of mathematical axioms, like the ones that Gödel’s theorem deals with, but that’s not all. Physical theories also come with a prescription for how to identify mathematical structures with measurable quantities. Physics, after all, is science, not mathematics. So, if we had any statement that was undecidable, we’d decide it by making a measurement, and then add an axiom that agrees with the outcome. Or, if the undecidable statement has no observable consequences, we may as well ignore it.

Mathematical theorems about uncomputability are likewise scientifically irrelevant but for a different reason. The problem with uncomputability is that it always comes from something being infinite. However, nothing real is infinite, so these theorems do not actually apply to anything we could find in nature.

The most famous example is Turing’s “Halting Problem”. Think of any algorithm that computes something. It will either halt after a finite time and give you a result, or it won’t. Turing says: now let us try to find a meta-algorithm that can tell us whether another algorithm halts or doesn’t halt. Then he proves that there is no such meta-algorithm which – and here is the important point – works for all possible algorithms. That’s an infinitely large class. In reality we will never need an algorithm that answers infinitely many questions.
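Turing’s diagonal construction can be sketched in a few lines. Everything here is illustrative: `halts` stands in for the hypothetical oracle whose existence the argument refutes, which is why it only raises an error.

```python
# Illustrative sketch of Turing's diagonalization. `halts` stands in for the
# hypothetical halting oracle; Turing's theorem says no such total function
# can be implemented, which is why it only raises here.

def halts(program, arg):
    """Hypothetical oracle: would return True iff program(arg) halts."""
    raise NotImplementedError("no such algorithm exists (Turing, 1936)")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts for program(program):
    if halts(program, program):
        while True:       # oracle says "halts" -> loop forever
            pass
    return                # oracle says "loops" -> halt immediately

# Feeding `diagonal` to itself is the contradiction: if diagonal(diagonal)
# halts, the oracle said it doesn't, and vice versa. So `halts` cannot exist.
```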

Another, not quite as well-known, example is Wang’s domino problem. Wang said: take any finite set of squares with a color on each side, where squares laid next to each other must match colors along their shared edge. Can you use them to fill up an infinite plane without gaps? It turns out that this question is undecidable for arbitrary sets of squares. But then, we never have to tile infinite planes.
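For finite regions the question is perfectly decidable by brute force. Here is a minimal sketch, with a made-up tile encoding (tuples of edge colors, no rotations), of a backtracking search that checks whether a tile set can cover an n-by-n patch with matching edges:

```python
# Brute-force decidability for finite regions: a backtracking search that
# checks whether a set of Wang tiles can cover an n x n patch. Tiles are
# encoded (my convention) as (top, right, bottom, left) edge colors; no
# rotations are allowed, and touching edges must carry the same color.

def can_tile(tiles, n):
    grid = [[None] * n for _ in range(n)]

    def place(i):
        if i == n * n:
            return True                     # patch completely covered
        r, c = divmod(i, n)
        for t in tiles:
            if r > 0 and grid[r - 1][c][2] != t[0]:
                continue                    # top edge must match tile above
            if c > 0 and grid[r][c - 1][1] != t[3]:
                continue                    # left edge must match left neighbor
            grid[r][c] = t
            if place(i + 1):
                return True
            grid[r][c] = None               # backtrack
        return False

    return place(0)

# One tile with identical edges tiles any finite patch; a tile whose left
# and right colors never match cannot even fill a 2 x 2 patch.
print(can_tile([("a", "a", "a", "a")], 3))   # True
print(can_tile([("a", "b", "c", "d")], 2))   # False
```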

We also know that most real numbers are uncomputable. They are uncomputable in the sense that there is no algorithm that will approximate them to any given, finite precision in finite time. But in science, we never deal with real numbers. We deal with numbers that have finitely many digits and that come with error bars. So this is another interesting mathematical curiosity, but it has no scientific relevance.
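A computable real, by contrast, does come with such an algorithm. A minimal sketch for sqrt(2) using bisection (the function name and tolerances are mine):

```python
# A computable real comes with an algorithm that delivers any requested
# precision in finite time. Bisection sketch for sqrt(2): the interval
# [lo, hi] always contains sqrt(2) and is halved until it is narrower
# than the requested tolerance eps.

def sqrt2(eps):
    lo, hi = 1.0, 2.0                  # sqrt(2) lies between 1 and 2
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo

print(abs(sqrt2(1e-9) - 2 ** 0.5) < 2e-9)   # True
```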

What about unpredictability? Well, quantum mechanics has an unpredictable element, as I explained in an earlier video. But this unpredictability is rather uninteresting, since it’s there by postulate. More interesting is the unpredictability in chaotic systems.

The most famous candidate for such an unpredictable equation is the Navier-Stokes equation, which is used, among other things, to make the weather forecast. Whether its solutions can break down after a finite time – which would make predictions beyond that time impossible – is the subject of one of the Clay Institute’s Millennium Problems, among the hardest open mathematical problems today.

But let us assume for a moment this problem was solved and it turned out that, with the Navier-Stokes equation, it is indeed impossible, in some circumstances, to make predictions beyond a finite time. What would this tell us about nature? Not a terrible lot, because we already know that the Navier-Stokes equation is only an approximation. Really, gases and fluids are made of particles and should be described by quantum mechanics. And quantum mechanics does not have the chaotic type of unpredictability. Also, maybe quantum mechanics itself is not ultimately the right theory. So really, we can’t tell whether nature is predictable or not.
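The chaotic kind of unpredictability is easy to demonstrate with a toy model. The sketch below uses the Lorenz system rather than Navier-Stokes (my substitution, purely for illustration) and a naive Euler integrator: two trajectories that start 10^-10 apart end up macroscopically separated.

```python
# Toy demonstration of chaotic unpredictability: the Lorenz system (standing
# in for Navier-Stokes) integrated with a naive Euler scheme. Two initial
# conditions differing by 1e-10 diverge by many orders of magnitude.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-10, 1.0, 1.0)
for _ in range(30_000):                      # integrate up to t = 30
    a, b = lorenz_step(a), lorenz_step(b)

separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)  # many orders of magnitude larger than the initial 1e-10
```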

This is a general problem with applying impossibility-theorems to nature. We can never know whether the mathematical assumptions that we make in physics are really correct, or whether they will one day be replaced by something better. Physics is science, not math. We use math because it works, not because we think nature is math.

All of this makes it sound like undecidability, unpredictability, and uncomputability are mathematical problems, irrelevant for science. But that would be jumping to conclusions. They are relevant for science, not because they tell us something deep about nature, but because in practice we use theories that may have these properties. So the theorems tell us what we can do with our theories.

An example: Maybe the Navier-Stokes equation is not fundamentally the right equation for weather predictions. But it is the one that we currently use. Therefore, it matters to know when an unpredictable situation is coming up; indeed, we might want to know in advance so that we can avoid it. This is not really feasible for the weather, but it may be feasible for another partly chaotic system, namely the plasma in nuclear fusion plants.

The plasma sometimes develops instabilities that can damage the containment vessel. Therefore, if an instability is coming on, the fusion process must be shut down quickly. This makes fusion very inefficient. If we’d better understand when unpredictable situations are about to occur, we may be able to prevent them from happening in the first place. This would also be useful, for example, for the financial system.

In summary, mathematical impossibility-theorems are relevant in science, not because they tell us something about nature itself, but because we use mathematics in practice to understand observations, and the theorems tell us what we can expect of our theories.

You can read all the essays in the contest over at the FQXi website. The deadline was moved to April 24, so you still have time to submit your essay!

104 comments:

I like Turing's halting uncomputability result, as it makes the idea, or philosophical theories, that the future is fully determined by the past irrelevant. The future might be fully determined by the past, but by Turing's proof, exact prediction comes down to performing an irreducible computation, which requires all the 'computing' power of the universe within the causal light cone. Which means that the shortest (and only) way to know the future is to live it.

In short, I think to know that the past fully determines the future is as pointless as to know that the present has a past.

If we think of the universe as a computer, it appears to be a computer that only produces one result. One structure can model another, but it is never the same, so Godel-like self-reference looping is out. Nice.

1. Current paradigms as to the import, and significance, of Goedel's `Undecidability/Incompleteness' Theorems vividly illustrate that we, as a society, are guilty of conflating what is widely believed as known with what is evidence-based knowledge.

3. We now know that the first-order Peano Arithmetic PA has a finitary model. Hence it is NOT omega-consistent and does NOT admit Rosser's Rule C in formal proofs. See Corollary 8.4 on p.42 of the paper published in the December 2016 issue of Cognitive Systems Research:

4. Hence, by Theorem 7.1 (p.41 of the above paper) any mathematical entailments of PA are necessarily faithful to the Turing-computable functions/relations which they seek to represent.

5. Undecidable arithmetical propositions are entailed by the Hilbert-inspired---and equally venerated---belief that any second-order Ordinal Arithmetic is consistent.

6. We have always known that any second-order Ordinal Arithmetic has no finitary model since it contains a set-theoretical Axiom of Infinity that is not falsifiable.

7. Hence such a Hilbert-inspired belief too is not falsifiable, and we cannot claim entailments of OA as necessarily faithful to that which they seek to represent mathematically.

8. Mathematics is merely the set of language in which the natural sciences can, first, unambiguously express their conceptual metaphors corresponding to the physical phenomena they observe/measure; and, second, communicate them categorically.

9. Whereas second-order Ordinal Arithmetics are adequate to the first task, they admit concepts that are not falsifiable.

10. It is only the first-order Peano Arithmetic PA that can categorically communicate those set-theoretical concepts that are falsifiable (since they are Turing-computable/verifiable).

11. Theoretical scientists prefer to construct set-theoretical systems/models for expressing conceptual metaphors corresponding to the physical phenomena we observe/measure, even when not falsifiable, for the putative insights such fictions may yield as to what is falsifiable.

13. When viewed from such an evidence-based perspective, the two disciplines are complementary, not contradictory.

14. Future generations will, hopefully, some day wonder WHY our current educational, social, and political paradigms fed confrontations that weaken the fabric of society, whilst starving the accommodations that strengthen it.

Bhupinder Singh Anand wrote:

>3. We now know that the first-order Peano Arithmetic PA has a finitary model.

Can you clarify that? You don't mean a finite model, do you?

Bhup also wrote:

>11. Theoretical scientists prefer to construct set-theoretical systems/models for expressing conceptual metaphors corresponding to the physical phenomena we observe/measure, even when not falsifiable, for the putative insights such fictions may yield as to what is falsifiable.

Ummm... when I studied classical mechanics, I do not remember a great deal of set theory! In fact, I am pretty sure that Galileo and Newton knew no set theory at all.

May I interject with an observation? It goes without saying that simply because a mathematical structure (set theory) was not introduced in a topical course (classical mechanics) does not imply that such a mathematical structure is not--at some juncture--important to said topic. A paper such as "Axiomatic Foundations of Classical Particle Mechanics" (1953, McKinsey, Sugar, Suppes, Journal of Rational Mechanics and Analysis) might come as a surprise to many physicists. But, you do not have to go that far back: V.I. Arnold's Mathematical Methods of Classical Mechanics is sufficiently different from (say) Goldstein's classic so as to be unfamiliar (still!) to some physicists. The course notes "Geometrical Mechanics" (1968) from mathematician Saunders Mac Lane are a rich source for physicists, also. Finally, an Enrico Fermi School of physics conference, Problems in the Foundations of Physics (1979), has an article "Formal Analysis of Physical Theories" (pages 134-202, by Dalla Chiara and Toraldo di Francia) which is relevant to the discussion.

"The word finitary has no universally agreed-upon precise definition, but following custom, we will use it informally to mean methods of mathematical proof that try to avoid, or minimize, assumptions about infinite quantities and processes"

Arun, yeah, the term "finitary" does not have a single meaning to which everyone adheres, as your example shows, which is why I asked what Bhup means by it. This is actually a serious matter in math foundations, not just a choice of vocabulary. Given his confusion on other points, I suspect he does not really know what he means.

Gary, the problem with the quotes I gave from Bhup is that he was claiming that theorists and experimentalists actually do certain things that most of them assuredly do not do!

Now, if Bhup had tried to claim that in some logical sense set theory underlies their work, that would be debatable (though still wrong).

I just chose the most obvious examples from his paper that even those unfamiliar with mathematical logic could grasp.

The paper claims to show that the Peano axioms are false for the natural numbers. If he were correct about this, this would be the most important discovery in mathematical logic in more than a hundred years, and he would have been heralded in the New York Times, etc.

Of course, he wasn't, because his paper goes beyond simply being wrong.

Physicist Dave, hopefully I did not give the impression that I was disputing your assessment of Bhup's paper, as that was not my intent. My intent was to emphasize how mathematicians and physicists are still speaking different languages. V.I. Arnold wrote: "The scheme of construction of a mathematical theory is exactly the same as that in any other natural science." and "A teacher of mathematics, who has not got to grips with at least some of the volumes of the course by Landau and Lifshitz, will then become a relic -- like the one who nowadays does not know the difference between an open and a closed set." (1997, On Teaching Mathematics). By the way, speaking of impossibility theorems, the book "Abel's Theorem in Problems and Solutions" (Alekseev, based on Arnold's lectures) is available in pdf form. My conundrum: physicists (even theoretical physicists) and mathematicians are still not "on the same page."

I don’t feel confident, or competent, enough to engage with you---as perhaps is your due---on the various issues you’ve raised. But I feel it only fair, in a public discourse, to acknowledge that you have justifiably highlighted my loose generalisation, which conflates your experience---of how the scientists you have personally known or studied underpinned the foundational basis of their thinking---with my perspective of how I imagined such scientists to have implicitly done so.

I have also read Bhup's Web link, but not his other work. Although I have criticisms too, I shall try to be a bit more constructive and link this response to the wider topic of Sabine's post here.

Focussing on the Mathematical Logic aspects there are several topics discussed together in Bhup's paper and they will need to be communicated separately to make sense to a wider audience.

The first point is that there are references to PA and First Order Logic, but a key aspect seems to be the use of Constructive Logic and references to Brouwer. However what is *not* written is important too.

If we take the Axioms of PA and change the logic from FOL to Constructive Logic (losing the law of excluded middle, P V ¬P), we change the theory to "Heyting Arithmetic", HA. HA has been much studied and has some meta-properties different from PA's. HA is more amenable to a "Turing Machine Interpretation" than PA, and Kleene invented a "Realizability Interpretation" for this purpose.

Godel's incompleteness theorem is one theorem that still applies to HA, and because of the "constructiveness" underlying both Godel's results and HA, this application could suit Sabine's concerns about "infinite objects not being part of Physics". In short, when referring to "Mathematics", Sabine (and most physicists) are referring to "Mathematics under classical logic", but "Mathematics under Constructive Logic" is interesting too - and introduces some different physics-like constructs. Plus of course "Constructivism" argues for explicitness.

I think that the reference PhysicsDave was concerned with was the one De Morgan-style law that does not hold in Constructive Logic, namely: ¬(All x)¬F(x) ==> (Exists x)F(x). Bhup is quoting this law and its violation. However, mixing Constructive Logic and Classical Logic in one paper makes it hard to read, and unfortunately prone to error too.

Another result that Bhup does not quote, but should refer to, is Tennenbaum's theorem, which states that the only recursive model of PA is the "standard model". Since he has a Turing Machine style semantics, one can see that one might "prove" that PA only has one model, and thus generate more contradictions with standard FOL meta-theory.

My advice to Bhup is to recognise that most physicists and mathematicians are indeed unfamiliar with Constructive logic and mathematics, but to be careful not to confuse it with Classical logic in papers.

And to Sabine : "Mathematics" can mean finite mathematics and finite automata too (which have their own limitative results of course); and it can mean Constructive mathematics which although not designed for Physics, just might have some interesting constructs...

There are quite a number of people who have tried to submit conspiracy theories or supposed cures to this thread. Folks, it's time that you learn to shut the fuck up if you are not an expert, and you clearly are not. You are actively harming people. Stop it. Also, you are wasting your time. I am moderating comments and will not aid the spread of nonsense.

Yes, it is possible to construct sentences that we know are true and are unprovable. The paradox is only apparent, because "unprovable" refers to some fixed axiom system S, while the truth of the sentence is established by reasoning in another system. So, as suggested by Sabine in her post, one could then add the unprovable sentence to S as an axiom, to obtain a more powerful system S'. Then in S' too, one could construct a true but unprovable statement, which could also be added to form an even more powerful system S''. And you can continue ad Godelum nauseam.

I don't see how this explains how an unprovable statement can be true. In any case, the Wikipedia entry explains where this way of putting Goedel's theorem comes from and also clarifies that the notion of "truth" used in the statement that John made does not come from within the system in which the statement is unprovable (which makes sense).

@Sabine: Perhaps it is a rather confusing way to express the theorem, I don't know, I am certainly no expert, but it is the way I have learned/memorized the theorem. And in the Wikipedia page you link to, one also finds the following assertion:

'The first incompleteness theorem shows that the Gödel sentence G_F of an appropriate formal theory F is unprovable in F. [...] For this reason, the sentence G_F is often said to be "true but unprovable."'.

But of course, your formulation of the theorem is also supported by the same Wikipedia page:

"Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F."

2. At the 1900 International Congress of Mathematicians in Paris, David Hilbert had posed twenty-three open problems, the second of which was: Is there a finitary proof that Peano Arithmetic is consistent?

3. In a 1930 paper, ‘Die Grundlegung der elementaren Zahlenlehre’, Hilbert had hypothesised/proposed an omega-Rule of infinite induction, to the effect that:

If we can prove for a formula [F(x)] of a Peano Arithmetic, say the first-order PA, that each of the formulas in the non-terminating sequence [F(0)], [F(1)], ... is provable from the axioms of PA, then we can add the formula [(All x)F(x)] as an axiom to PA without inviting contradiction.

5. Goedel showed how to finitarily define an arithmetical formula [G(x)] such that, if an Arithmetic such as PA is consistent, then [(All x)G(x)] is not provable from the axioms of PA. However, each of the formulas in the non-terminating sequence [G(0)], [G(1)], ... is provable from the axioms of PA.

6. Goedel further defined PA as omega-consistent if, and only if, we cannot have both (a) that [Neg(All x)Neg F(x)]---abbreviated as [(Exists x)F(x)]---is PA-provable, and (b) that each of the formulas in the non-terminating sequence [Neg F(0)], [Neg F(1)], ... is provable from the axioms of PA.

7. He then showed that if PA is assumed omega-consistent, then [Neg(All x)G(x)] is also not provable from the PA axioms.

8. Goedel explicitly highlighted that the assumption of omega-consistency was necessitated by the absence of a finitary model of PA in which the formulas of PA could be categorically interpreted/referred to as true/false.

9. The following paper, published in the December 2016 issue of Cognitive Systems Research, now gives the needed categorical definitions of arithmetic truth/falsity, thereby yielding the finitary proof of consistency for PA sought by Hilbert:

10. It thus defines a finitary model, say TM(PA), for PA in which a formula of PA is true if, and only if, it can be evidenced/realised as true by a Turing machine under Tarski’s definitions of the satisfiability and truth of the propositions of a formal system under a well-defined interpretation.

11. It is in this sense that Goedel’s formula [(All x)G(x)] is unprovable in PA, but each of the provable formulas in the non-terminating sequence [G(0)], [G(1)], ... interprets in TM(PA) as an algorithmically true arithmetical proposition under Tarski’s definitions.

12. Another way of expressing this is that the PA-formula [G(x)] is unprovable in PA but, for any given PA-numeral [n], the PA-formula [G(n)] is true under any well-defined interpretation of PA over the domain N of the natural numbers.

Point 5 is nowadays called "omega-incompleteness", which indeed is a corollary of Godel's theorem (and of the consistency of PA). It should not be confused with "omega-consistency", which is an assumption in part of Godel's theorem.

Admittedly even the Penrose discussion of the "Godelian thesis" fails to mention "omega-incompleteness", which one is obliged to accept if one assumes consistency.

Assuming that ‘Speaking of assuming consistency you might need to consider where "Godel's 2nd theorem" fits into your scheme’ is not merely a well-meant suggestion, but also an implicit query given your apparent familiarity with the subject of this blogpost, here’s my understanding of Goedel’s 2nd theorem.

1. There is a little-known, hitherto unresolved, objection to Goedel’s implicit assumption---in his Theorem XI (“Godel's 2nd theorem” in Goedel 1931)---that there is a well-defined formula [w], in his formal system P of arithmetic, which would admit the premise (see Martin Davis, The Undecidable, 1993 edn., p.36, fn.63):

“Let [w] be the SENTENCE by which Wid(P) is expressed in P".

In the above premise we denote that a string of symbols is a formula of a formal system by enclosing it in square brackets; moreover:

* Form(x) denotes the primitive recursive relation that holds if, and only if, x denotes the Goedel-number of a well-formed formula of P;

* yBx denotes the primitive recursive relation which holds if, and only if, y denotes the Goedel-number of a proof sequence in P which terminates with the P-formula whose Goedel number is denoted by x.

Note that (Ex)(Form(x) & ¬(Ey)yBx) is not a formula of P, but an abbreviation for/symbolically expresses the meta-assertion `There is an arithmetic formula that is not provable in P’; which further implies that P is consistent.

2. Now Wid(P), like the number-theoretic assertion denoted by Bew (x) (which too is merely an abbreviation for ‘x is the Goedel-number of a provable formula of P’), is not primitive recursive.

Hence we cannot conclude that:

(a) If Wid(P) holds in the domain N of the natural numbers, then the formula [w] is provable in P;

(b) If neg Wid(P) holds in N, then the formula [neg w] is provable in P.

3. In other words, for Goedel's claim in, and interpretation of, his Theorem XI to be justified, his reasoning must not only show that:

(i) The number-theoretic assertion expressed symbolically by Wid(P) can also be expressed by some well-formed formula [w] of P as above;

but also that:

(ii) Under any well-defined interpretation of P over N, the formula [w] must be shown to interpret as an arithmetical assertion which implies that P is consistent.

Neither of these follows from Goedel's outline of his proof of Theorem XI in Goedel 1931.

4. As remarked by Elliott Mendelson (Introduction to Mathematical Logic, 1964 ed., p.149), in his 1931 paper Goedel assumed without proof that [w] can be admitted as a well-formed formula of P.

The sequel in which Goedel intended to justify any implicit assumptions that he might have made in the outlined proof of Theorem XI in Goedel 1931 was never written.

Hence the necessity of establishing that [w] can, indeed, be admitted as a well-defined formula of P in order to claim that [w] represents Wid(P) in P---and can therefore be formally interpreted as ‘P is consistent’ in N---was highlighted only in 1960 by Solomon Feferman's paper ‘Arithmetization of metamathematics’ (Corollary 5.10).

5. It now follows from the consequences of the finitary proof of consistency for the first-order Peano Arithmetic PA---Theorem 6.7 of the cited paper published in the December 2016 issue of Cognitive Systems Research---that, as Wittgenstein had concluded on the basis of cogent philosophical considerations in his `notorious’ 1937 paragraph, there can be no well-formed formula of PA which formally interprets in N as the assertion ‘PA is consistent’.

Sorry about that. I have to click on each comment to approve them. You submitted several comments in short time intervals. This makes it likely I'll accidentally skip one of them, thinking it's one I already looked at. I'm not complaining, I am just explaining what's happening on my end so you understand.

1. Theorems in standard textbooks and peer-reviewed journals are rarely wrong. They may, however, appeal to assumptions that later turn out to be untenable or false.

2. In the case of Theorem XI in Goedel 1931, it is the implicit assumption---first highlighted, according to Feferman, in his doctoral thesis---that [w] can be a well-formed formula in the first-order Peano Arithmetic PA. The assumption is untenable since it contradicts the published 2016 finitary proof of consistency for PA.

3. In the case of Theorem VI in Goedel 1931, it is the explicit assumption that PA can be assumed omega-consistent. The assumption is falsified by the published 2016 finitary proof of consistency for PA.

4. Whether the published 2016 proof can be refuted formally, or even whether it is simply wrong, is at present an open question.

5. Of course, neither objection holds in the first-order set theory ZF, in which both of Goedel’s so-called First and Second Theorems are formally provable over the finite ordinals. The assumptions 2 and 3 are not falsifiable in this case, since ZF has no finitary model.

1. The reason I mentioned Mendelson is because it is a modern(ish) text, with some assumptions, covering all such areas of logic. My edition from 1987 includes all earlier work by logicians on these matters.

You did not answer my question directly. So I will make a suggestion:

Mendelson introduces an assumption (called +):

"The Standard Model is a model of S" (ie PA)

Do you reject this assumption?

2. Theorem XI of Godel 1931 is another example of why that paper, although historically interesting, is not a suitable basis for a modern theorem. That theorem was not established formally in that paper, as you comment. So see later work (e.g. Mendelson?) for an accepted modern proof of that theorem (see also below).

3. Theorem VI in Godel 1931 is only *part* of Godel's incompleteness theorem. This is made clear in Mendelson. In summary we have, for theory S and Godel sentence G:

(a) If S is consistent then not S |-- G

(b) If S is omega-consistent then not S |-- ¬G (Thm VI)

In terminology terms G is "undecidable" if S is consistent and omega-consistent, but G "unprovable" only requires S to be consistent.

In Mendelson Proposition 3.40 (Godel's Second Theorem) says:

If S is consistent, then not |-- Con(S)

In words "S consistent implies the consistency of S is unprovable in S"

… or in the words of Godel ThmXI

"the consistency of P is unprovable in P"

There is no reference in the statement of this theorem, nor (directly) in its proof (either in Mendelson or Godel 1931), that S is required to be omega-consistent.

4. In your paper we have Theorem 6.8 which states (without a following proof):

Bhup Thm 6.8 "PA is consistent"

The claim is that this proof is via "finitary methods". So there is an ambiguity as to whether this is conflicting with Godel II. You seem to be suggesting that it does conflict.

(If it did *not* conflict, this would mean that your "finitary methods" use a slightly stronger proof technique than is available in PA itself. Since Turing Machines get mentioned, this could arise from the tacit use of an Oracle.)

You claim that omega-consistency is involved which would only be required to prove that Con(S) was "undecidable".

So the challenge is to identify the source of that conflict with Mendelson.

Now you also refer to "Aristotle's Particularisation" which is a De Morgan law. You also falsify this at one point, which really means that you are rejecting the (Classical) First Order Logic basis of PA.

This is allowed, but it does not necessarily reduce the conflicts (since Godel's theorems are constructive, etc).

Rejecting De Morgan does mean that your entire paper has "really" been a constructive logic argument. The use of Turing Machines also suggests that this is really intended to be a constructive logic argument. (Non-standard models of Classical PA are non-computable.)

5. Indeed, let's not bring ZF into this, if we can avoid it.

Referring further to your paper, I have been trying to consider the first theorem, 2.1, and its surrounding definitions. This theorem and its proof could be more explicit. For example, what role exactly is the Gödel beta function playing in the definition of the algorithms AL(R,n)?

A formula for these algorithms would make the later discussion more sensible. As a simple example we get:

"For a real number R, the relation r(x)=0 is verifiable..."

but this is the first appearance of x - so what is it (a number, a function, a finite list, an infinite sequence)?

I’m reluctant to address the issues you’ve raised in this blog for two reasons:

1. The issues you’ve raised re my understanding of Goedel/Mendelson would be of little relevance---and, possibly, interest---to Sabine and the readers of this blog; particularly since I too consider the two editions that I have--- Mendelson 1964 and 2016---as appropriate introductory texts on mathematical logic. Why not email me instead for the clarifications you seek?

2. The objections you’ve raised re the paper published in Cognitive Systems Research---which addresses issues that texts such as Mendelson seek to seed---should be addressed to the journal in the appropriate form for letters/reviews/rebuttal article etc.

Bhup wrote:>The objections you’ve raised re the paper published in Cognitive Systems Research---which addresses issues that texts such as Mendelson seek to seed---should be addressed to the journal in the appropriate form for letters/reviews/rebuttal article etc.

Bhup, you know that won't happen!

I looked through more than two years of contents of Cognitive Systems Research: your paper was the only one I saw that reported on purported research in mathematical logic. (Yes, I saw the one referring to Łukasiewicz -- that paper reported on work in AI, not mathematical logic.)

Somehow, you got your paper published in a journal that is not accustomed to dealing with the subject your paper addresses.

I don't know why they published it: maybe they just liked the novelty, just as Social Text published the Sokal hoax.

In any case, I know, as I suspect you know, that the editors will not publish a response by Roy or me pointing out the errors in your paper. They obviously lack the expertise to judge that your paper is wrong: the journal specializes in articles on AI and neural science, not mathematical logic.

And, in fact, given how wildly wrong the paper is, the editor could reasonably expect that anyone working in mathematical logic can see for himself that the paper is nonsense.

For the same reason, it is not worth my or Roy's time to write a formal paper to refute your nonsense.

Your paper claims to show -- its most important conclusion -- that the Peano axioms for the natural numbers are "not ω-consistent." You add in a disparaging footnote: "This conclusion is contrary to accepted dogma."

Elsewhere you have claimed to prove that Gödel's incompleteness theorem is wrong and, indeed, that you have proved that Zermelo-Fraenkel set theory is inconsistent.

Well.

One of two hypotheses is true:

A) You are right, and thousands of very bright people who have devoted years and years, often most of their adult lives, to mathematical logic are just guilty of naively swallowing, as you say, "accepted dogma."

OR

B) You, an amateur in the field, have made some serious mistakes.

Can't you see that the strong presumption, from any reasonable perspective, should be in favor of B?

Roy and I know some of your errors. But you seem unable to bring yourself to see that the overwhelming presumption is in favor of hypothesis B and that you should presume that we have a point in suggesting to you your errors.

I suppose it does not really matter. If you want to waste your time on this, it seems no one is really paying any attention anyway.

But I find it sad that you will not face up rationally to the probability of whether A or B above is most likely to be true.

Until being mentioned here, I had not realized that Brian Greene had written another "popular" book, Until The End Of Time. I long ago stopped reading "pop" books, preferring to constrain my reading to mostly "technical" literature. Only recently have I begun paying any attention to "science" blogs, offering my limited perspective on a few occasions. What I have discovered is how some are willing to dispense opinions without a propensity to elaborate. So, returning to Brian Greene (as I have not read any of his "pop" material) what is it that is so offensive in his books ? If he were not a "celebrity" would he still evoke emotions such as "no one cared" ? Why not use "science" to unify, not to divide ?

My position is this: attack specific theories if you must, but refrain from attacking "personalities." I am not "cool" with attacking any individual. However, I am attempting to understand the use of the word "nonsense" to describe string theory. As with "Many Worlds Interpretation" (I emphasize the word 'interpretation.'), there is "science" to be found in both string theory and Hugh Everett's MWI-paper. Please understand, I am not fully advocating either theory--I am simply refusing to call either one 'nonsense,' because that word to describe them is untrue.

We can take it as given that Bhup is personally a very nice fellow -- most people are.

But Bhup chose to post here some information about work he has done, presumably to get some response.

We could patronize Bhup as many people do to a three-year-old who tries to "paint" something and just ends up with a mess.

But Bhup is not a three-year-old. The best way of showing him respect is to tell him clearly and forthrightly what we think of his work.

And anyone with any familiarity with mathematical logic can see his work is horribly wrong. I gave a simple sample above, and it would be easy to come up with many others if Bhup wants to continue the discussion.

Einstein made mistakes -- quite a few, in fact. Some he caught himself; others were caught by colleagues.

Einstein owed a debt of gratitude to those colleagues: they helped him in his honest pursuit of the truth.

Some years ago, I was giving a presentation to a dozen or so colleagues and managers on the system design for a new project. One of the engineers, a fellow named Norm, interrupted and suggested a problem with the basic design.

I arrogantly told Norm that anyone who really understood the technology knew this was not a problem.

And, then... for the next few minutes, as I continued speaking, I kept mulling over Norm's question.

Norm was right. The system as designed would not work.

So, I interrupted myself, explained to everyone that Norm was correct and that I had screwed up, and we needed to look for a workaround.

A few minutes of discussion and we figured out how to solve the problem and moved on.

Afterwards, Norm came up to me and said he really respected me for admitting I had screwed up. I pointed out to him that I really owed him: I went through a few minutes of embarrassment to avoid a real disaster on the project -- Norm had been a true friend.

Serious technical people want to be told when they are wrong. That is the greatest service your colleagues can provide.

That is not "attacking" someone. It is showing the respect due a serious adult professional.

Unfortunately, our society has turned away from the "let's find any errors" approach and replaced it with a "let's not hurt anyone's feelings" approach.

And so Boeing still has not fixed the 737 MAX!

It is neither kind nor respectful to refrain from telling the blunt, unvarnished truth to anyone who chooses to work in a STEM field.

Again, Bhup seems like a perfectly nice fellow. But his work in mathematical logic is simply, bizarrely wrong. No one should be inhibited in telling him that.

Warning to Greg, Alan, and all: that popularity is invisibly sponsored, to the point that certain searches you do on YouTube (say, "quantum mechanics") result in more than a year of Sean Carroll on your automatic list of YT "recommendations," with no way to get rid of it. Such a waste of real estate.

Ivan, you bring up a salient point. Much "popular" material which I have seen (say, on YouTube) regarding quantum mechanics can appear as a "waste of real estate." The interviews which I have read with Sean Carroll--whenever he promotes his latest "pop" book--paint a portrait of quantum mechanics with which I disagree (however, my contrary opinion does not imply his opinion is wrong). On the flip side of the coin, you can watch Richard Feynman (or even listen to an audio by Willis Lamb) and be enthralled with a more palatable portrayal of quantum mechanics (in my opinion). As with papers deposited on the ArXiv, one can choose to locate the good material or choose to complain about the bad. As for Sean Carroll (whose popular material I have never read), perhaps his writing has inspired beginners to dig deeper or to enter the field--then, they will be in a position to offer their own insights. My point is that one can choose--on one's own initiative--to turn the "waste of real estate" into a learning experience.

Greg Feild wrote:>I believe it is important to correct people who influence public opinion, public policy, AND who feed at the public trough.

Again, Greg, I fear you over-estimate your or my or anyone's ability to somehow focus our efforts on those "who influence public opinion, public policy, AND who feed at the public trough."

Could Sabine arrange that her book was read primarily by opinion molders and policy makers?

Better, I think, to say what one thinks is true, without fear or favor, and hope that good ideas win out.

As to those who "feed at the public trough," well, I think Upton Sinclair had a point when he said, "It is difficult to get a man to understand something when his salary depends upon his not understanding it."

I don't know that anyone here is "feeding from the public trough": I'm not, and I think Bhup is retired. But trying to convince those who do that they should stop is probably the least productive strategy of all.

My first sentence does not clash with my last. I am saying, since we know the actual universe is bigger than the observable universe, what reason do we have to assume the actual universe is not infinite?

"We say that a first order theory T is locally finite if every finite part of T has a finite model. It is the purpose of this paper to construct in a uniform way for any consistent theory T a locally finite theory FIN(T) which is syntactically (in a sense) isomorphic to T."DOI: https://doi.org/10.2307/2273942

As for infinitary and ordinal Turing machines, see this paper by Merlin Carl:

Towards a Church-Turing-Thesis for Infinitary Computations

"We consider the question whether there is an infinitary analogue of the Church-Turing-thesis. To this end, we argue that there is an intuitive notion of transfinite computability and build a canonical model, called Idealized Agent Machines (IAMs) of this which will turn out to be equivalent in strength to the Ordinal Turing Machines defined by P. Koepke."https://arxiv.org/abs/1307.6599

So what ultimately do these infinitary (fictional?) vs. finitary machines mean for physics?

This takes us further afield. Transfinite numbers and the question of ℵ₀ ≤ c ≤ 2^ℵ₀, or the continuum hypothesis, have a role with Robinson’s numbers. These are in a sense the reciprocals of transfinite numbers or the continuum. I am not sure what possible role this could have for physics.

"The present paper is basically written as a non-apologetic strong defence of the thesis that computation is part and parcel of a physical theory and by no means a mere numerical evaluation of the prediction of a theory which comes towards the end. Various general considerations as well as specific examples are given to illustrate and support our arguments. These examples range from the practical aspect to almost esoteric considerations but at the end, everything converges towards a unity of theory and computation presented in the form of modern fractal logic and transfinite quantum field theory in a Cantorian spacetime. It is true that all our examples are taken from physics but our discussion is applicable in equal measure to a much wider aspect of life."

Whether Gödel’s theorem is explicitly how nature functions or not seems irrelevant. This might be seen with QM in debates over whether the wave function is real or not.

Gödel’s theorem might be seen with the Peano theory of the natural numbers. Given any number, the successor of that number can be computed, and from this we can perform arithmetic on numbers. Based on an argument by induction we then say the natural numbers are an infinite set. However, we can’t compute this. The induction-based “conclusion” that there are infinitely many natural numbers is an incomputable statement or theorem.

Back in my youthful college days in the 80s I thought it possible that Gödel’s theorem had something to do with the foundations of quantum mechanics. I read Douglas Hofstadter’s Gödel, Escher, Bach and was struck by the idea that this might be an underlying reason QM has its limitations on the knowable state of a system. I then read an article by Wheeler in which he made a similar suggestion. According to one story, Wheeler was thrown out of Gödel’s office for suggesting this to him. That pretty much matches my own experience, and as a somewhat clumsy undergraduate I was not able to pursue this. So I pretty much abandoned it.

I think it can be said that any mathematics used in theoretical physics has no direct ontological meaning for physical reality. Physics is not a differential equation, nor is it a set of theorems true by self-referential inference. I am not a devotee of Tegmark’s ideas along these lines. The relationship between physics and mathematics can only superficially be said to rest on measurements giving numbers that represent quantities with certain relationships. Beyond that, any metaphysical relationship is hard to establish. It is in the realm of Garrison Keillor’s Guy Noir, who “searches for life’s unanswered questions.” I don’t see this being any different for geometry, linear algebra, topology, p-adic numbers and ultrametrics, or for Gödel’s theorem, which demonstrates there is no computable system for the entire p-adic set.

Yes a mathematics - physics ontology relationship is hard to establish. But there are non-trivial theorems underlying much of mathematical physics, which is suggestive. The fact that we have neither a complete physics theory, nor a clean mathematical description of those (especially the Quantum ones) that we do have does make it harder.

But one can still have "ontological working hypotheses". Naïve examples are "all physics is computable"; "all physics is geometric"; "physics(/economics etc) is fundamentally stochastic".

In traditional fashion one might then want to construct an "experiment/measurement" (e.g. bending of light rays, …) that validates one aspect of this idea. One then needs to try and try again (admittedly the direct path from this which led to the Asymmetric Geometric theories doesn't seem to have been validated). So maybe a more sophisticated mathematical ontology scheme is called for.

I don't find the Tegmarkian "Mathematical Multiverse" convincing at this stage, however.

John Fredsted is correct. The Gödel sentence is true but unprovable. The proof of its truth does not come from "outside" the system. It comes from the metalanguage of arithmetic, which Gödel showed to be subsumed *in* arithmetic -- i.e., assertions *about* arithmetical statements can themselves be expressed arithmetically. Briefly, using Gödel numbering, one establishes a correspondence between language and metalanguage and then proves that an arithmetical statement true in the metalanguage of arithmetic must be true in the language of arithmetic, and vice versa. He then constructs G, which in the metalanguage says "G is not provable in arithmetic," and demonstrates that G is indeed not provable in arithmetic (because it would be provable iff it were not provable). So G is true but unprovable. Philosophically, the upshot is to separate provability from truth.

For an excellent and highly approachable account see "Gödel's Proof" (2nd edition) by Nagel and Newman.
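The numbering device described above is easy to sketch in a few lines. This is a toy illustration only: the symbol codes and the tiny alphabet are my own choices, not Gödel's original assignment. Each symbol of a formula becomes the exponent of the next prime, so facts *about* formulas become facts about ordinary numbers.

```python
import math

def primes(n):
    """First n primes, by trial division (fine for toy inputs)."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

# Hypothetical codes for a tiny alphabet (my own choice):
CODE = {'0': 1, 'S': 2, '=': 3}

def godel_number(formula):
    # Symbol i of the formula becomes the exponent of the i-th prime.
    return math.prod(p ** CODE[s]
                     for p, s in zip(primes(len(formula)), formula))

print(godel_number('S0=S0'))  # 2^2 * 3^1 * 5^3 * 7^2 * 11^1 = 808500
```

Unique prime factorization guarantees the encoding is reversible, which is what lets metamathematical statements be mirrored inside arithmetic.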

MLA, that is what I call "knowledge porn". It is not even a charming logical error of the sort children commit with "I already dood it". It is just gross.

I am afraid I am in no condition to write 10 or 12 paragraphs about it. I would like to remind people that Godel never had and there will never exist a "Godelian" state space that is coherent with superdeterminism as ""it" (what is there of "it") allows the existence of information betrayal by the... state space.

You can't confuse the left side of the SE Code with the right side, to begin with. But Godel insists that "his" right side is exactly the same as the right side... in order to discover that a meta-arithmetical "G" is not provable in arithmetic.

PhysicistDave, some help about equality would be much appreciated just about now, even as I steal your sentence:

"To take a concrete example, that would mean that when we assert that there exists an x such that x=x, we really mean that for some x, x is not equal to x!"

That is the same as confusing "proto telepathy" with "communication". If they were the same there would be no need for the existence of the proto- and "true telepathy" would exist as communication. It doesn't.

Superdeterminism does not come into the simple point that G is true, while being unprovable. The mistake people often make is to read Gödel as establishing a third truth value alongside "true" and "false", whereas what he showed was that in mathematics provability and truth are not synonymous.

As a more general point, calling something "knowledge porn" does not amount to an argument. :-)

Ivan from Union wrote to me:>PhysicistDave, some help about equality would be much appreciated just about now, even as I steal your sentence:

>"To take a concrete example, that would mean that when we assert that there exists an x such that x=x, we really mean that for some x, x is not equal to x!"

>That is the same as confusing "proto telepathy" with "communication". If they were the same there would be no need for the existence of the proto- and "true telepathy" would exist as communication. It doesn't.

Ivan, I have no idea at all what you are trying to say to me! I cannot parse your sentences.

If I can make clearer my point about Bhup, what he is saying in his papers is simply bizarre nonsense, and I just gave one particularly simple example of that nonsense.

He has learned some of the lingo of mathematical logic without really understanding it, and then twisted it around in almost psychedelic ways.

The real tell-tale is that Bhup claims to have made truly stupendous discoveries -- including having disproved Gödel's incompleteness theorem! -- that would, if true, literally make him the most famous living mathematician. Indeed, Bhup would quite literally have surpassed Gödel himself, especially since he would have proved that Gödel's most important work was wrong.

Needless to say, Bhup has not actually done that.

I urge anyone interested in Bhup to google his papers and glance through them. Extremely bizarre and delusional claims.

What strikes the (applied) mathematician as odd is when it is said "nothing real is infinite" while physics itself as currently written extensively uses infinitary mathematics (e.g. "the concept of quantum configuration space should be introduced as a suitable enlargement of the classical configuration space so that an infinite-dimensional measure, often a cylindrical measure, can be well defined on it" - Wikipedia:Quantum_configuration_space; "A conformal field theory is a quantum field theory that is invariant under conformal transformations. In two dimensions, there is an infinite-dimensional algebra of local conformal transformations" - Wikipedia:Conformal_field_theory; etc.).

"What strikes the (applied) mathematician as odd is when it is said "nothing real is infinite" while physics itself as currently written extensively uses infinitary mathematics ..."--Philip Thrift

Disclaimer: I'm not an applied mathematician, except in the sense that everyone is, but I did apply mathematics to engineering problems and have a lot of respect for that activity. In a numerical analysis text I learned that finite difference equations of the same form as differential equations (e.g., second-order with constant coefficients) have the same forms of solutions. E.g., if e^x is a solution of the differential equation, e^(i·dx) (where dx is the difference increment and i the step index) is a solution of the finite-difference equation.

That clarified for me what I expected: that infinitary math is the limit of finitary math as the discrete increment goes to zero, and works well as an approximation of the finitary math. Therefore we can use differential equations to estimate the stresses and strains in a bridge without worrying about the specific sizes of the grains in the alloys that the girders are made of.
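That limit can be seen numerically in a few lines. This is my own sketch, not from the textbook mentioned: the finite-difference analogue of y'' = y, namely y[n+1] - 2·y[n] + y[n-1] = h²·y[n], marched from exact seed values, tracks e^x closely for a small increment h.

```python
import math

h = 0.01                           # the difference increment "dx"
steps = 100                        # march from x = 0 up to x = 1
y_prev, y_curr = 1.0, math.exp(h)  # seed with exact values at x = 0 and x = h
for _ in range(steps - 1):
    # recurrence y[n+1] = (2 + h^2) * y[n] - y[n-1]
    y_prev, y_curr = y_curr, (2 + h**2) * y_curr - y_prev
print(y_curr, math.exp(1.0))       # the two agree to roughly five decimals
```

Halving h cuts the discrepancy by about a factor of four, the O(h²) convergence one expects from this central-difference scheme.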

Calculus isn't necessarily "real" as a description of nature, but it is very, very convenient. That is why it is used so much. You're probably aware of this (although some readers may not be), and perhaps your point is that we risk confusing ourselves by automatically using infinitary methods without exploring finitary solutions.

(I was also thinking of commenting on Dr. Hossenfelder's "nothing real is infinite", as being posthumous vindication of Zeno, but I thought better of it. Couldn't resist the second time I saw it, though.)

Conventional (infinitary, like beginning with the usual college calculus) mathematics may have been used traditionally in physics, but Max Tegmark here may actually be right. It seems the most reasonable approach, anyway:

"Not only do we lack evidence for the infinite but we don’t need the infinite to do physics. Our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow’s weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can, too—in a way that’s more deep and elegant than the hacks we use for our computer simulations."

"Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I’m betting that we also need to let go of it."

T.D. Lee tried his hand at discretizing physics (1983, Time as a Dynamical Variable), writing: "Our usual concept that all physical fields should be embedded in a continuous space-time manifold may well be only an approximation..." (CU-TP-253, pdf). What does Max Tegmark have to offer that is different ? How does physics advance with the elimination of "infinity" ? How does Max Tegmark define words such as "elegant" and "true" ? Geroch and Hartle also touched on pertinent issues in their paper "Computability and Physical Theories" (originally 1985, ArXiv 1806.09237). They conclude: "quantum gravity does seem to be a serious candidate for a physical theory for whose application there is no algorithm." While Geroch and Hartle published their original paper in 1985, and posted it to the ArXiv in 2018, nothing has changed since to alter their conclusion.

The consequences of the halting problem are not restricted to an infinite set of algorithms but apply to a single given algorithm as well. If you want to prove that a single algorithm will finish running, you cannot know whether you will ever find such a proof. Whether that makes undecidability relevant for physics, I don't know.

Real analysis, Sabine, uses the delightful term 'countably infinite'; say, the infimum of a sequence of blocks (B) which shrinks at a comfortable tempo of B/2, and not forgetting our beautiful compact e (the natural log)... we can never imagine nature as "stopping," so neither should our tools!

Toilet paper solution, possibly... not a comment for this post. Saw your Twitter post; I don't do Twitter, but I have a suggestion: maybe try a restaurant supply store. Restaurants have the cheap stuff, so maybe you can get something in one of those stores, if any exist nearby... just a thought.

Gödel's results and their extensions can be considered an antidote to theoreticians' complacency and hubris; these results are extremely relevant for the natural sciences and for mathematics itself.

As Gödel's results imply, you may extend your axioms with new irreducible ones, but that will not guarantee that your model is complete. This theoretical result only reaffirms the empirical nature of the scientific method: Reality's unlimited store of irreducible principles can only be discovered by constant observation and experiment; the process of finding new irreducible principles in Nature is never-ending.

But not only that: Gödel's results and their extensions imply that "complexity" is a source of incompleteness; or, as the physicist P. Anderson said, "More is Different." The classical limit of quantum mechanics is a clear expression of this notion. Identity (the possibility of uniquely identifying objects) is a classical emergent property, and measurements are only possible when uniquely identifiable markers are present; hence the "measurement problem" in quantum mechanics is just an expression of the emergent nature of identity.

The emergent properties of complex physical and biological systems are a manifestation of these results; without directly observing these complex systems, their emergent (irreducible) properties will remain unknown to us, independently of any "effective" theories. Naive reductionism is intrinsically flawed.

P. Anderson's "I see the Theory of Everything as the theory of almost nothing" is perhaps an indirect acknowledgement by a physicist of how deeply relevant are Godel results for natural sciences.

Gödel's incompleteness theorem is historically important but useless in practice, as you say. First, it does not point to any concrete problem, apart from internal consistency, that would be affected by the incompleteness, and it leaves open the possibility of finding a solution by adding axioms. This is why the resolution of the "Entscheidungsproblem" by Turing and Church is much more interesting: there is a non-solvable problem of genuine interest, the halting problem, and by the Church-Turing thesis no stronger system of axioms can exist. This thesis has never been contradicted, even with the power of quantum machines.

To believe that these are not real constraints on computability, because they concern only problems of infinite size, is erroneous, because what is at issue here is not actual infinity but inductive infinity. Let me explain, taking the simple case of Matiyasevich's theorem, which proves the non-existence of a general algorithm for solving Diophantine equations. Take a big Diophantine equation F of 30 variables and 100 terms that I want to solve by setting F = 0. Either there is a solution to this problem or there is not... unfortunately, this question is undecidable. Yet there is a very simple algorithm to try to answer it: examine all the possibilities one by one. Unfortunately, this requires halting at some point, because the infinite is long, far too long... An arbitrary stop before solving the problem leaves doubt: maybe I stopped just a little too early. The problem is not the infinite size of the solution but the arbitrariness of its size.

Gödel himself, in his famous 1956 letter to von Neumann, understood that the question of incompleteness should be limited to the concept of reasonable size. Unfortunately, the problem remains: the number of combinations to be examined being exponential in the size of the numbers, the incompleteness always seems insurmountable, although it is an open question: are we NP-incomplete or not?
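The brute-force enumeration described above is trivial to write down; the difficulty lies only in knowing when to stop. The equation x³ + y³ + z³ = rhs and the bound below are my own hypothetical example, not the commenter's 30-variable equation.

```python
from itertools import product

def search(rhs, bound):
    """Try every integer triple with |x|,|y|,|z| <= bound."""
    for x, y, z in product(range(-bound, bound + 1), repeat=3):
        if x**3 + y**3 + z**3 == rhs:
            return (x, y, z)
    return None  # no solution below the bound -- or we stopped too early

print(search(29, 4))  # finds a small solution
print(search(33, 4))  # None -- yet 33 IS a sum of three cubes, with
                      # 16-digit terms found only in 2019
```

The second call is exactly the "stopped just a little too early" worry: a failed bounded search proves nothing about what lies beyond the bound.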

P.S.: The fact that you think incompleteness is not relevant to physics is interesting, but let me tell you, it is very much relevant to industry. As soon as we try to carry out even minimal operations research, we hit the wall of NP-completeness and fall back on heuristics, trying to minimize the error. When money is at stake and you are trying to maximize gains or minimize losses, the interest is extremely high. Since the beginning of the year, I have already had to face two NP-complete problems.

"We suggest that [a theory having measurable numbers that are not computable] should be no more unsettling to physics than has the existence of well-posed problems unsolvable by any algorithm been to mathematics."

I find this just wrong (it should be "more unsettling"), but maybe (at least these) physicists live in a different world than me.

"But, in the spirit of many of John Wheeler’s papers, we aim more to raise issues and stimulate discussion than to state conclusions."

It all depends on where you decide to start. If you take physics as we know it as the reference, your points about the irrelevance of the known theoretical limits might very well be justified. On the other hand, you can start with math and theoretical CS as being as close to the ultimately cognoscible truth as you can get, and treat physics merely as a set of statements that happen to have been expressed within these frameworks. After all, physics is just the art of making useful approximations that are (or should be) replaced without regret by better ones when necessary or possible.

Put differently, knowing the limits imposed by computational complexity one can also pose an opposite question: what is the set of questions Nature can answer efficiently? P is the obvious answer, but Quantum Mechanics suggests that this answer is incomplete -- a wider class called BQP might approximate the truth better. But is it the end of the story? I doubt it.

To rephrase the same in laymen terms: is it possible to create a finite setup of physical entities and "ask a question" (make a measurement) that would make reality stutter/slow down somehow?

Nature doesn't like being asked too-complex questions and actively defends itself against it. Currently you (physicists) try to address that problem via decoherence etc., because that is the toolkit you have, and you should not be blamed for that. But the answer might turn out to be much subtler and come from complexity theory (pure math), not Quantum Mechanics or thermodynamics, in a way similar to what Harry Nyquist did in the field of signal processing.

piotrw wrote:>But the answer might turn out to be much subtler and come from complexity theory (pure math), not Quantum Mechanics or thermodynamics in a way similar to what Harry Nyquist has done in the field of signal processing.

Well, it could be. An awfully large number of physicists have had thoughts along that line, you know, going back at least to Brillouin's interest in information theory to guys currently like Bousso, Seth Lloyd, and the whole "It from Qubit" crowd.

As far as I know, no one has ever succeeded in really turning such speculations into a new theory of physics that makes novel, testable predictions.

I myself first got interested in information theory fifty years ago because of its possible connection to physics. I never got any significant physics out of it, but, fifteen years afterwards, I found out that private industry was willing to pay me good money for using some of that knowledge.

Everything I ever learned hoping to make a living off of it never paid off; everything I have ever been paid for was stuff I learned just for fun. Must be a lesson in there somewhere...

And, yeah, Nyquist, and his colleagues such as Bode, Shannon, Hartley, et al. are among the great unsung heroes of history: more than any other small group of men, they deserve credit for the digital electronic world we now live in. Yet, few among the general public have heard of any of them! History is ironic.

Is the movie Groundhog Day accurate or not? It is not a philosophical question. Nor is it a question of the energy required to calculate states X time into the future, or of any such limit. It is simply a question of what would happen physically. If we were to restore all fields/particles to their energy states, positions, and relative velocities at 6 a.m. this morning, would the resulting universe be in identical future states every time?

I hope it is not too presumptuous of me to attempt an answer, but it might have some small entertainment value in this time of the pandemic.

No, the movie is not accurate. There are two cases I can think of: a) quantum mechanics contains randomness; in this case small random changes would add up (slowly) to produce some differences in future states; b) QM is deterministic; in that case there would be no mechanism for Bill Murray to wake up with memories of re-experiencing that day, over and over. All his memories, like everything else, would be the same every day.

Dr. Hossenfelder is considering the second case currently, but as a hypothesis which is worth testing, not as known science. Science does not know which of the two (if any) is true. So my answer is necessarily based on analyzing both cases. In addition, I can cite this empirical observation: the vast majority of movies I have seen (approximately all of them, but I hedge in case there is one good one that I am not remembering--and don't get me started on "The Usual Suspects" or "The Da Vinci Code" which wind up destroying their starting premises) do not make sense scientifically or logically, even those supposedly "based on true events". A movie can be enjoyable for other reasons, though.

Philosophically I could accept either case, but if I had to bet, I would bet that there is some randomness, based purely on my limited experience as a universe-builder of trivial computer games in my spare time. Some randomness makes such universes better, in my experience. (Although it might just be pseudo-randomness.)

I just came across a paper that may be relevant to this discussion (Space-bounded Church-Turing thesis and computational tractability of closed systems, PRL 2015).

From the introduction: "According to our postulate, bounded memory physical systems should not exhibit non-computable phenomena even in the infinite time horizon. As evidence for our postulate, we rigorously prove that for compact noisy systems, the noncomputable phenomenon is broken by the noise even in the infinite-dimensional case."

Obviously, this is a basic theorem of automata theory (196x). A bounded Turing machine does not generate the recursively enumerable sets. To detect whether it will halt, it suffices to have a larger Turing machine store all the state formulas of the computation (what Turing called the "state formula" includes both the current instruction and all the symbols on the tape) and detect whether the machine returns to a state it has already visited; in that case, and only in that case, it loops.

This is the problem: the introduction is already false: "According to our postulate, bounded memory physical systems should not exhibit non-computable phenomena even in the infinite time horizon. As evidence for our postulate...". Because, obviously, such systems cannot exhibit non-computable phenomena in the first place; there is nothing to postulate.

Compared to the classical work on finite automata, one major difference is that they have a notion of "bounded memory" that applies also to systems with a *continuous* state space: see equation (1) on page 2.

A bound of S(n) on the amount of memory used by a computation means that the machine may be in at most 2^S(n) distinct states. If the computation is deterministic, this imposes a natural hard limit of 2^S(n) on its computation time: the computation either terminates in 2^S(n) steps, or ends up in an infinite loop.
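The state-revisit argument above fits in a few lines. A minimal sketch, where the toy step functions are hypothetical stand-ins for a bounded-memory machine's transition function:

```python
def halts(step, state):
    """Decide halting for a deterministic machine with finitely many states:
    run it, remembering every state seen; a repeated state means it loops."""
    seen = set()
    while state is not None:   # convention: step() returns None on halt
        if state in seen:
            return False       # state revisited: the machine loops forever
        seen.add(state)
        state = step(state)
    return True

# Hypothetical toy machines over a small bounded state space:
def countdown(s):              # halts: 10 -> 9 -> ... -> 0 -> halt
    return None if s == 0 else s - 1

def mod_cycle(s):              # from 1: 1 -> 3 -> 1 -> ... never halts
    return None if s == 0 else (s + 2) % 4

print(halts(countdown, 10))    # True
print(halts(mod_cycle, 1))     # False
```

The memory cost is exactly the point made above: the decider must be "larger" than the machine it watches, since it stores up to 2^S(n) configurations.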

[Basic theorem mentioned]

...

On page 4:

Then the invariant measure of the noisy system S_ε can be computed with a given precision 2^-n in SPACE(poly(log 1/ε) + poly(log n)). This statement implies, in particular, that the long-term behavior of noisy systems at scales below the noise level is computable in time quasi-polynomial in n. Intuitively, this means that, at the right scale, the behavior of the system is governed by the efficiently predictable micro-analytic structure of the noise, rather than by the macro-dynamic structure of the system, which can be computationally difficult to predict.

[I don't know whether the "macro-dynamic structure of the system" isn't precisely the interesting thing in the majority of cases.]
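As a qualitative illustration of the paper's claim (not its actual construction), here is a sketch under my own assumptions: individual trajectories of a noisy chaotic map are unpredictable, yet its coarse-grained invariant measure comes out essentially the same for very different initial conditions and noise realizations. The map (logistic, r = 4), noise level, and bin count are all my own choices.

```python
import random

def coarse_histogram(x0, n_steps=50000, noise=1e-3, bins=10, seed=0):
    """Estimate the invariant measure of x -> 4x(1-x) + small noise,
    coarse-grained into a few bins on [0, 1]."""
    rng = random.Random(seed)
    x = x0
    counts = [0] * bins
    for _ in range(n_steps):
        x = 4 * x * (1 - x) + rng.uniform(-noise, noise)
        x = min(max(x, 0.0), 1.0)              # keep the state in [0, 1]
        counts[min(int(x * bins), bins - 1)] += 1
    return [c / n_steps for c in counts]

h1 = coarse_histogram(0.123, seed=0)
h2 = coarse_histogram(0.777, seed=1)
# The coarse statistics nearly coincide even though the two
# trajectories differ entirely step by step:
print(max(abs(a - b) for a, b in zip(h1, h2)))   # prints a small number
```

This only illustrates the flavor of the result (coarse behavior is robustly computable); the paper's SPACE bound is a much stronger, rigorous statement.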

Dr. H., thank you for the explanation on the other thread; as this one is 3 or 4 days old, my comment didn't enter. I ended up thinking you did find it too megalomaniacal and ambitious after all.

Just an add-on to something I said above.

•••

“I will try to synthesize a middle ground between Sabine, Tim, and my own ideas in a way that is hopefully satisfactory to all three of us. They are very much the same ideas in different chocolate sauces, frankly!”:

In retrospect, this sounds megalomaniacally ambitious, and I meant something short and simple in plain English. There is a clearing where Tim, Sabine, and I can agree on the skeletons of ideas… hold the chocolate sauce.

What Tim thinks of the strange attractor’s role in certain computations is what I call two 3-d planes rotating (or not) along a line in a hypercube and that make a difference between entropic mass and gravitational mass.

What we collectively (you too, readers) call “local variables” assumes an •integer locality• that is classical… when even the adjective/adverb “primeness” of a prime in prime state space gets distributed to the state space and disassembled from its own number; it is no longer in possession of its own entire coordinates, which are also a part of the state space. No wonder there are hidden non-local variables!

In SE Code, binary bits get hidden until you do further testing with a different time anatomy than in the previous model. And those tests consist of, and make up, the partial present tense of the bubble sort, whose collective past cannot be determined, since if it could, you could •undo• a bubble sort; that would require naive universal conspiracy, of course: just turn universal time backward and…. You will still NOT get that data, because it is not part of the architecture of •local• bubble-sort time, which is permanently one-directional. Different time crystal!

Leaving a future analysis in the air, I refer you to 't Hooft (gorgeous, short item):

https://www.frontiersin.org/articles/10.3389/fphy.2018.00081/full

Maybe we can see how clear he can get (which is a lot more than I can) with what he is saying, very much conditionally, on an experimental basis from what I understood, about reversibility and symmetry on the time line: whose time line? Local, state-space, or universal?

Sorry. As I said, I sometimes just miss a comment. If something doesn't appear that you think should have, either post a brief comment *explicitly addressed at me* (I may not read it otherwise) or send me an email to hossi[at]fias.uni-frankfurt.de

Seemingly impossible computing has of course been proposed for many years: the UCNC (Unconventional Computation and Natural Computation) conferences, slime moulds that efficiently solve the travelling salesman problem, etc., and now the buzz surrounding "MIP* = RE" for quantum computers: https://arxiv.org/abs/2001.04383

There is a class of events, which, while very rare in the overall scheme of things, is relevant to whether physics proper is complete in the sense of Gödel. That is, physical theory is mute on the consequence of significance, the extent to which what physically happens next is contingent upon the exchange of symbolic tokens between highly complex dissipative systems.

Does American analytic philosopher W. V. Quine’s thesis of the indeterminacy of translation have relevance to physical theory? Do some physical events have a significance or meaning that initiates a cascade of meta-level consequences ranging well beyond the physics of the momentary event? Well, yes to that last question.

The question is whether there is a physical theory that establishes a one-to-one equivalence between the two. If here I type the word, “Gavagai,” what happens when it is read? One thing? Many things? Is there a mathematics of exacting determination?

I would like to make a few comments about the first part of Sabine's original post. Generally it presupposes some philosophy about how mathematics and physics might interact, a philosophy that subsequently needs modification/clarification in the discrete Turing-Gödel world.

Take for example:

"if we had any statement that was undecidable, we’d decide it by making a measurement,..."

To respond to this, we note that "undecidable statements" tend to be universal statements (and thus contain an in-built "infinity"). An example would be the Goodstein sequence function G(m), whose termination, for all numbers m, is proven independent of Peano arithmetic (i.e., undecidable).

So if G(m) had physical relevance, then what is independent about it is the statement: "For all m. G(m)=0"

So how is that statement to be measured?

Clearly one can conduct a finite series of measurements m1, …, mk and so determine G(m1), …, G(mk) and potentially find that they are all zero. But that does not establish the physical truth of the statement:

"For all (physical) m. G(m) = 0"

What would be required physically would be some physical bound M and then test all physical m<M.

(a) If we found for some m' that G(m') =/= 0, then we might conclude that "Physics disproves Peano arithmetic" (a bit like "GR disproves Euclid?")

(b) If we found "For all m<M, G(m)=0" nothing has been established about the undecidable axiom "For all m, G(m)=0" and it might get deemed physically irrelevant.

(c) There is a further case which arises here, which is that for some m'', G(m'') does not converge. This would decide the independent statement against Peano arithmetic, as in case (a). But *how* is this to be measured?

So the idea that undecidability can be resolved by measurement does not work. Or at least more needs to be said.
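The finite checks discussed above are easy to carry out for the Goodstein sequence itself. A minimal sketch (my own implementation, not from any cited source): each step rewrites the current value in hereditary base-b notation, replaces every b by b+1, and subtracts 1. For tiny m the sequence terminates quickly; already for m = 4 any realistic step budget runs out long before termination, which is exactly the measurement problem described above.

```python
def bump(n, b):
    """Rewrite n in hereditary base-b notation and replace every b by b+1."""
    if n < b:
        return n
    result, power = 0, 0
    while n > 0:
        digit = n % b
        if digit:
            result += digit * (b + 1) ** bump(power, b)  # exponents are hereditary too
        n //= b
        power += 1
    return result

def goodstein(m, max_steps=100):
    """First terms of the Goodstein sequence of m (truncated at max_steps)."""
    seq, b = [m], 2
    while seq[-1] > 0 and len(seq) <= max_steps:
        seq.append(bump(seq[-1], b) - 1)
        b += 1
    return seq

print(goodstein(3))                # [3, 3, 3, 2, 1, 0] -- terminates
print(goodstein(4, max_steps=5))   # [4, 26, 41, 60, 83, 109] -- nowhere near 0
```

Peano arithmetic cannot prove that the m = 4 sequence (or any other) always reaches 0, and no finite run of this code can establish it either; that is the commenter's point in computational form.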

What might work, is a measurement which establishes a mathematically/logically unexpected result (assuming that one can phrase the physical theory in appropriate Godel-Turing like terms).

Related points can be made about the Wang Tiles, which also relate to some of Penrose's points in this area.

Given a set of (suitable) Wang tiles and an algorithm, there will eventually come a tiling mis-step. With the Penrose tiles and an algorithm, there will always be a mis-step.

So yes there are various infinities involved in formulating all this in Logic, but they don't interfere with what might and might not be describable using Algorithmic theory. Some aspects of this "Algorithmic theory" might be testable by measurement, and for undecidable statements provability might not be possible (by direct measurement), although disprovability might just be.

So a larger question remains as to whether it is possible to formulate (current - or any) Physical theories in a way which is amenable to this analysis, including a way to "measure" non-termination.

Hi Sabine, I find you interesting, in a sense a breath of fresh air. The last 3 months have been a lesson in unpredictability. Now, we always knew something like the coronavirus would happen; we just never knew the day or the hour. If the Earth's population grew at one percent per year for 12,500 years, the mass of people would exceed the whole mass of the universe, so we know that calamities always happen, and frequently. We just never know what is next, or where, or how they are grouped. The goal is just to deal with them as they happen, because we can never have enough resources to prepare for all of them. Just be ready to adapt.
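The growth claim is easy to sanity-check with rough numbers (the biomass and universe-mass figures below are my own order-of-magnitude assumptions, not from the comment):

```python
import math

growth_factor = 1.01 ** 12500       # 1% per year for 12,500 years
human_biomass_kg = 8e9 * 60         # ~8 billion people at ~60 kg each (rough)
observable_universe_kg = 1e53       # rough ordinary-matter estimate (assumption)

projected_kg = human_biomass_kg * growth_factor
print(f"growth factor ~ 10^{math.log10(growth_factor):.0f}")  # ~ 10^54
print(projected_kg > observable_universe_kg)                  # True
```

A factor of roughly 10^54 applied to today's human biomass does indeed dwarf any estimate of the universe's mass, so the rhetorical point holds regardless of the exact constants chosen.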

◊ Chaos is the maximum difficulty. It can be understood as an incompressible sequence of numbers.

◊ That is, the information contained in such a sequence is equal to the sequence itself. From the outside, such a world is determined (the sequence repeats), but from the inside (for a subjective observer) it is no longer "pseudo"-random, but truly random.

You write: "Physical theories also come with a prescription for how to identify mathematical structures with measurable quantities." No such prescriptions ever occur! It's the experience and judgement of the physicists that determines how to actually measure an theoretical observable.

But isn't the maths that describes the most human-intuitively fundamental physical theories (counting objects, physical space, physical time) simply one level of abstraction from our physical intuition, and doesn't it therefore intrinsically contain prescriptions? For example, I know to check Pythagoras' theorem as a physical theory by measuring the width, length, and diagonal of a room, because that is where the abstraction of a triangle came from in the first place, in one step of abstraction; and I know how to make those measurements with a metre rule, because that is where the intuition of commensurable lengths came from in the first place, in one step of abstraction. The theories, the prescriptions, and the human intuitions build up in lock-step, to the point where observing a Higgs field is prescribed by the mathematical structure. All physicists who had learned the hierarchy of theories and prescriptions (which they would agree on, due to having the same human intuition) agreed on exactly what it meant to observe the Higgs field, as clearly as they would agree on checking 1+1=2 as a physical theory about cups.
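The room example above is a complete prescription in miniature; with made-up measurements of 3 m and 4 m it reads:

```python
import math

width_m, length_m = 3.0, 4.0                 # hypothetical measured sides of a room
predicted_diagonal_m = math.hypot(width_m, length_m)
print(predicted_diagonal_m)                  # 5.0: what the metre rule should show
```

The prescription is carried entirely by the abstraction: "diagonal" in the theorem and "diagonal" under the metre rule are the same concept, one step apart.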

You write: "What about unpredictability? Well, quantum mechanics has an unpredictable element, as I explained in an earlier video. But this unpredictability is rather uninteresting, since it’s there by postulate." Gleason's Theorem shows that the pure states on the standard quantum logic are determined by equivalence classes of non-zero elements of the Hilbert Space. This doesn't have to be assumed!

Comment moderation on this blog is turned on. Submitted comments will only appear after manual approval, which can take up to 24 hours. Comments posted as "Unknown" go straight to junk. You may have to click on the orange-white blogger icon next to your name to change to a different account.