Guessing the Truth

Warren Hirsch was a mathematician who started his career in the US Army, and much later retired as a professor emeritus of NYU’s Courant Institute of Mathematical Sciences. He is well known for many important results in diverse fields, from combinatorial algorithms, to mathematical models of epidemiology, to his famous geometric conjecture. The latter became widely known as the Hirsch conjecture.

Today I want to discuss how good we are at guessing the truth of mathematical conjectures. Hirsch’s famous one was recently disproved by the discovery of a counter-example. For a summary of the state of the art before the recent result, see An update on the Hirsch conjecture by Edward Kim and Francisco Santos.

I often have conversations with colleagues about how well mathematicians do at guessing the truth of an open problem. My colleagues often claim that they are pretty good guessers. I think that is not quite true; they certainly do not guess perfectly. One of my main beliefs is that P=NP could be possible. So is everyone else a better guesser than I am? I guess so.

In one of Tom Cruise’s movies, A Few Good Men, in the climactic courtroom scene, he yells “I want the truth!” at his opponent, Jack Nicholson, who answers, “You can’t handle the truth!”

I am interested in how well we can guess the truth—I am sure we can handle it.

Guesses and the Truth

The Guess: The great David Hilbert thought in 1900 that resolving whether numbers such as $2^{\sqrt{2}}$ and $e^{\pi}$ are transcendental would take a very long time. He thought such questions might be harder than the Riemann Hypothesis. This is his seventh problem, from his famous list of problems.

The Truth: All these numbers and more were proved to be transcendental in 1934; the proofs were found independently by Aleksandr Gelfond and Theodor Schneider. The methods they used were clever, but were based on already known technology, used by Charles Hermite in 1873 to prove that $e$ is transcendental.
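The theorem that resolved the problem can be stated succinctly (a standard statement, added here for reference; it is not quoted from the post):

```latex
\textbf{Theorem (Gelfond--Schneider, 1934).}
If $a$ and $b$ are algebraic numbers with $a \neq 0, 1$ and $b$ irrational,
then $a^{b}$ is transcendental.
% In particular $2^{\sqrt{2}}$ is transcendental, and so is
% $e^{\pi} = (-1)^{-i}$, by taking $a = -1$ and $b = -i$.
```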

The Guess: David Hilbert also believed there might be a proof that arithmetic is consistent. He proved partial results, and it seemed that the problem might be resolved in the near future. This is his second problem, again from his famous list.

The Truth: Gödel’s results, especially his Second Incompleteness Theorem showed that this was impossible. Consistency is unprovable in arithmetic unless it is actually inconsistent.

The Guess: Warren Hirsch, in a letter dated 1957 to George Dantzig, the inventor of the simplex method for linear programming, first stated his conjecture. Hirsch conjectured a very sharp bound on the diameter of the edge-vertex graph of an $n$-facet polytope in $d$-dimensional space. The bound was conjectured to be $n-d$. For a very long time this was believed to be true, and most work was on getting better upper bounds. The best known to date is the beautiful quasi-polynomial bound due to Gil Kalai and Daniel Kleitman.

The Truth: In May 2010, Francisco Santos, from the University of Cantabria, announced he had found a counter-example to the conjecture. Roger Brockett, a famous control theorist at Harvard, once told me:

It is hard to argue with a counter-example.

Santos’s result is big news, and hopefully will lead to further insights into the structure of polytopes.
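As a tiny sanity check (not from the post), Hirsch’s conjectured bound of $n-d$ for an $n$-facet polytope in $d$ dimensions is met with equality by the ordinary cube: the 3-cube has $n = 6$ facets, $d = 3$, and its vertex-edge graph has diameter exactly 3. A sketch:

```python
from itertools import product

d = 3
n = 2 * d                     # a d-dimensional cube has 2d facets
vertices = list(product([0, 1], repeat=d))

def neighbors(v):
    # Cube edges join vertices differing in exactly one coordinate.
    return [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(d)]

def eccentricity(src):
    # Longest shortest-path distance from src, by breadth-first search.
    dist, frontier = {src: 0}, [src]
    while frontier:
        nxt = []
        for u in frontier:
            for w in neighbors(u):
                if w not in dist:
                    dist[w] = dist[u] + 1
                    nxt.append(w)
        frontier = nxt
    return max(dist.values())

diameter = max(eccentricity(v) for v in vertices)
print(diameter == n - d)  # True: the cube attains the Hirsch bound
```

Santos’s counter-example shows that some polytopes, unlike the cube, exceed the bound.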

The Guess: Subhash Khot conjectured that his Unique Games problem is NP-hard. A huge body of lower bounds has been built up on this beautiful conjecture. I have discussed it recently here.

The Truth: Sanjeev Arora, Boaz Barak, and David Steurer have proved that the conjecture is unlikely to be true. They have proved that all Unique Games can be solved in time $2^{n^{\beta}}$, where $\beta$ goes to $0$ as the approximation parameter $\epsilon$ goes to $0$. This shows that Khot’s problem is not nearly as computationally hard as many have thought. Of course it does not rule out that Khot is right.

The Guess: Euclid stated his famous Fifth Postulate circa 300 BC. For almost two thousand years mathematicians, professional and amateur, searched for a proof of Euclid’s Fifth Postulate. They all believed it should be provable from his other axioms.

The Truth: For two millennia they searched for a proof that did not and could not exist. I just read a book on this problem and its history, by Jason Bardi. It is quite a fun read, and I would recommend it to you. One of the lessons of the search was that for a very long time no one seriously thought that the postulate could be unprovable, and no one believed looking at geometry without the fifth made sense. Even the great Carl Gauss was reluctant to announce results he had found on non-Euclidean geometry.

The Guess: The famous George Pólya conjectured that most numbers (at least half of those below any given bound) have an odd number of prime factors, counted with multiplicity. He made this conjecture in 1919, and it is based on a very reasonable model of numbers.

The Truth: In 1958 Brian Haselgrove showed it was false below some number of size around $1.845 \times 10^{361}$.
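One can watch the conjecture hold for small numbers (an illustrative check, not from the post). Writing $\lambda(k) = (-1)^{\Omega(k)}$ for the Liouville function, Pólya’s conjecture says the partial sums $L(n) = \sum_{k \le n} \lambda(k)$ satisfy $L(n) \le 0$ for all $n \ge 2$; the first violation is known to occur near $9 \times 10^{8}$, far beyond this range:

```python
def big_omega(k):
    """Number of prime factors of k, counted with multiplicity."""
    count, p = 0, 2
    while p * p <= k:
        while k % p == 0:
            k //= p
            count += 1
        p += 1
    if k > 1:
        count += 1
    return count

N = 10_000
L, violated = 1, False          # L(1) = lambda(1) = +1
for k in range(2, N + 1):
    L += (-1) ** big_omega(k)   # lambda(k) = (-1)^Omega(k)
    if L > 0:                   # Polya predicted L(n) <= 0 for n >= 2
        violated = True
print(violated)  # False: no counterexample this small
```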

The Guess: Euler conjectured in 1769 that there are no positive integer solutions to the equation

$a_1^k + a_2^k + \cdots + a_m^k = b^k$

when $m < k$. This is essentially a generalization of the famous Fermat’s Last Theorem: set $k = 3$ and $m = 2$, for example.

The Truth: Leon Lander and Thomas Parkin in 1966 found a counterexample for $k = 5$:

$27^5 + 84^5 + 110^5 + 133^5 = 144^5.$

Later, in 1986, Noam Elkies found one for $k = 4$:

$2682440^4 + 15365639^4 + 18796760^4 = 20615673^4.$
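Counterexamples of this kind are pleasantly easy to verify; a one-liner per identity suffices (a quick check, added here for illustration):

```python
# Lander–Parkin (1966): four fifth powers summing to a fifth power.
lp = 27**5 + 84**5 + 110**5 + 133**5 == 144**5

# Elkies: three fourth powers summing to a fourth power.
el = 2682440**4 + 15365639**4 + 18796760**4 == 20615673**4

print(lp, el)  # True True
```

As Brockett said, it is hard to argue with a counter-example.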

The Guess: Virginia Ragsdale conjectured an upper bound on the number of possible arrangements of real algebraic curves embedded in the projective plane. She made this conjecture in the early 1900s. This was again related to one of Hilbert’s problems—the sixteenth.

The Truth: In 1979 Ilya Itenberg and Oleg Viro produced counter-examples.

The Guess: Erik Zeeman conjectured that one cannot untie a knot on a four-sphere. I am not a topologist, but it seems like a reasonable conjecture.

The Truth: After trying to prove this for almost ten years, one day he worked in the opposite direction, and solved the problem in hours.

The Guess: John von Neumann conjectured that a group is not amenable if and only if it contains a subgroup that is a free group on two generators. While von Neumann’s name is connected with this conjecture, not all agree he believed it to be true. In any event many people did believe this conjecture was true.

The Truth: The conjecture was disproved in 1980 by Alexander Ol’shanskii.

The Guess: The conventional wisdom was that bounded width programs could not compute the majority function, and many other functions. The rationale was: how could a constant number of bits “encode” the number of 1’s in the input?

The Truth: David Barrington’s famous theorem proved that bounded width computations can simulate $\mathsf{NC}^1$, and thus compute the majority function. It is one of the big surprises in complexity theory. While not the hardest proof, it is one of the most surprising results. The brilliance of David was, I believe, two-fold: first, looking for an upper bound rather than a lower bound, and second, restricting the bounded width computations to use only group elements. I definitely guessed wrong and worked hard on the lower bound. Oh well.
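The group-element trick at the heart of Barrington’s proof can be glimpsed in miniature (a toy sketch, not the full construction): in the symmetric group $S_5$ there are 5-cycles whose commutator is again a 5-cycle, and this yields a width-5 gadget computing AND—the word $a^x b^y a^{-x} b^{-y}$ is a 5-cycle when $x = y = 1$ and the identity otherwise:

```python
from itertools import permutations

identity = tuple(range(5))          # permutations as tuples: p[i] is the image of i

def compose(p, q):                  # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def is_5cycle(p):
    x, steps = p[0], 1              # follow the cycle through 0
    while x != 0:
        x, steps = p[x], steps + 1
    return steps == 5

five_cycles = [p for p in permutations(range(5)) if is_5cycle(p)]

# Search for 5-cycles a, b whose commutator a b a^-1 b^-1 is again a 5-cycle.
a = b = None
for p in five_cycles:
    for q in five_cycles:
        comm = compose(compose(p, q), compose(inverse(p), inverse(q)))
        if is_5cycle(comm):
            a, b = p, q
            break
    if a is not None:
        break

def and_gadget(x, y):
    """The word a^x b^y a^-x b^-y: a 5-cycle iff x = y = 1, else the identity."""
    word = [a if x else identity, b if y else identity,
            inverse(a) if x else identity, inverse(b) if y else identity]
    out = identity
    for g in word:
        out = compose(out, g)
    return out

print(and_gadget(1, 1) != identity,
      all(and_gadget(x, y) == identity for (x, y) in [(0, 0), (0, 1), (1, 0)]))
```

Composing such gadgets recursively is what lets width-5 programs climb all the way up to $\mathsf{NC}^1$.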

The Guess: Most believed that nondeterministic logspace (NL) is not closed under complement. It seemed very hard to show that a vertex $s$ is not connected to another vertex $t$. The obvious way to demonstrate this is to find a “cut” that separates $s$ from $t$, but a cut could be huge. Thus the conventional wisdom was that logspace did not have the power to show there is no path from $s$ to $t$.

The Truth: Neil Immerman and Róbert Szelepcsényi proved that NL is closed under complement. They did not find a cut. Instead they showed how to count the number of vertices reachable from $s$ by an inductive argument. A very clever result.
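The counting idea can be sketched deterministically (a toy rendition, added for illustration; the real theorem works in nondeterministic logspace, where the reachable sets cannot be stored and only the counts $c_i$ of vertices reachable within $i$ steps are carried from round to round):

```python
def counts_of_reachable(adj, s):
    """c[i] = number of vertices reachable from s in at most i steps."""
    reach = {s}
    counts = [1]
    for _ in range(len(adj) - 1):
        reach = reach | {w for u in reach for w in adj[u]}
        counts.append(len(reach))
    return counts, reach

# A hypothetical 4-vertex digraph: vertex 3 cannot be reached from 0.
adj = {0: [1], 1: [2], 2: [], 3: [0]}
counts, reach = counts_of_reachable(adj, 0)
print(counts, 3 not in reach)  # [1, 2, 3, 3] True
```

Knowing the exact final count is what lets a verifier certify non-reachability of $t$: exhibit that many reachable vertices, none of them equal to $t$.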

The Guess: P is not equal to NP.

The Truth: Who knows?

Open Problems

How good do you think we are at guessing the truth of an open problem? See this for Bill Gasarch’s view of the issue.

I have one specific question concerning the new counter-example to the Hirsch conjecture. Is there any hope of building an amplifier for Santos’s bad polytope? I mean, can one take his polytope and “tensor” it together and get one of arbitrary dimension that also violates the conjecture? This is not my area, so I apologize ahead of time if the idea is too silly, but as you may know I find the idea of amplifiers very intriguing.


A few comments. Santos’s counterexample can be found here: http://arxiv.org/abs/1006.2814 . A very short (5–6 sentence) presentation of the quasi-polynomial upper bound in a very general setting is described here: http://gilkalai.wordpress.com/2010/06/19/the-polynomial-hirsch-conjecture-the-crux-of-the-matter/
There were many people who believed the Hirsch conjecture, many who disbelieved it, and many who both believed and disbelieved. There was some evidence against the conjecture (many natural extensions failed, including the extension to unbounded polyhedra that Hirsch himself conjectured), but the positive direction was very tempting for various attacks. Also, Santos’s paper contains some amplification constructions. (Those still leave open the possibility that the diameter has a linear upper bound.)

Regarding the general issue, given our limited ability to guess the truth (which is often just guessing the value of a single bit), I find it an interesting question how to discuss the truth in a way which has academic/scientific value. For many questions we should be prepared for the possibility that we will simply not have a clue … until we know.

The Zeeman story surprises me, since it’s quite easy to show that every 1-manifold embedded in four-space can be unknotted. You just use the fourth dimension to undo crossings in a diagram at will. Maybe the conjecture has been misstated. Or perhaps this was in early enough days that many things that seem obvious now were not then.

I’m curious whether it’s reasonable to say that in general, when you tensor something with itself many times you are amplifying it. Perhaps that’s being a bit too crude with the term.

The result is about unknotting 2-spheres in 5 dimensions. Unknotting “knots” (i.e., embeddings of the circle $S^1$) in 4 dimensions is indeed trivial. More generally, unknotting $k$-manifolds in $n$ dimensions is trivial for $n > 2k + 1$; Zeeman improved that to unknotting $S^k$ in $n$ dimensions for $n > \tfrac{3}{2}(k + 1)$.

Also note that there’s a completely different conjecture called the “Zeeman Conjecture”, which is open (and with no strong consensus guess).

I don’t think any of these conjectures (with the possible exception of Gödel’s incompleteness) were anywhere close to as shocking as P=NP would be. For instance, although as a topologist I certainly believed the Poincaré Conjecture very strongly before Perelman’s proof, I would still not have been as shocked by a counterexample as I would be by P=NP.

I’m not sure I would be shocked by P=NP, but I would definitely be shocked by a sub-quadratic time algorithm for SAT. The NTIME[n*polylog(n)] vs. DTIME[n*polylog(n)] theory has much the same “shape” as NP vs. P, but doesn’t get nearly as much press.

It is possible to gather reasonably convincing support for a conjecture by a variety of means, long before it is actually proven, though many mathematicians are reluctant to argue too strongly based on such support due to the lack of rigour or the risk of embarrassment in hindsight. Examples of support include:

* Numerical evidence; but one has to be careful in situations where the null hypothesis would also give comparable numerical evidence. The first ten trillion zeroes of zeta on the critical line is, in my opinion, only mild evidence in favour of RH (the null hypothesis may be, for instance, that the zeroes go haywire once log log t becomes sufficiently large); but the numerical data on spacings of zeroes is quite convincing evidence for the GUE hypothesis, in my view. (It is a priori conceivable that the spacings are distributed according to GUE plus another correction that dominates when log log t (say) is large, but this begins to strain Occam’s razor.)

* Non-trivial special cases. But it depends on how “representative” one believes the special cases to be. For instance, if one can verify low-dimensional cases of a conjecture that is true in high dimensions, this is usually only weak (but not entirely insignificant) evidence, as it is possible that there exist high-dimensional pathologies that sink the conjecture but cannot be folded up into a low-dimensional situation. But if one can do all odd-dimensional cases, and all even-dimensional cases up to dimension 8 (say), then that begins to look more convincing.

* Proofs of parallel, analogous, or similar conjectures. Particularly if these proofs were non-trivial and led to new insights and techniques. RH in function fields is a good example here; it raises the hope of some sort of grand unified approach to GRH that somehow handles all number fields (or some other general class) simultaneously.

* Converse of the conjecture is provable, and looks “optimal” somehow. One might be able to construct a list of all obvious examples of objects with property X, find significant difficulty extending the list, and then conjecture that this list is complete. This is a common way to make conjectures, but can be dangerous, as one may simply have a lack of imagination. So this is thin evidence by itself (many false conjectures have arisen from this converse-taking method), but it does carry a little bit of weight once combined with other strands of evidence.

* Conjecture is ambitious and powerful, and yet is not immediately sunk by the obvious consistency checks. This is vaguely analogous to the concept of a “falsifiable theory” in science. A strong conjecture could have many powerful consequences in a variety of disparate areas of mathematics – so powerful, in fact, that one would not be surprised that they could be disproven with various counterexamples. But surprisingly, when one checks the cases that one does understand quite well, the conjecture holds up. A typical example here might include a very general conjectured identity which, when specialised to various well-understood special cases, becomes a provable identity – but with the identity in each special case being provable by very different methods, and the connection between all the identities being mysterious other than via the conjecture. The general conjecture that the primes behave pseudorandomly after accounting for small moduli is an example of such a conjecture; we usually can’t control how the primes behave, but when we can, the pseudorandomness heuristic lines up perfectly.

* Attempts at disproof run into interesting obstacles. This one is a bit hard to formalise, but sometimes you can get a sense that attempts to disprove a conjecture are failing not due to one’s own lack of ability, or due to accidental contingencies, but rather due to “enemy activity”; some lurking hidden structure to the problem, corners of which emerge every time one tries to build a counterexample. The question is then whether this “enemy” is stupid enough to be outwitted by a sufficiently clever counterexample, or is powerful enough to block all such attempts. Identifying this enemy precisely is usually the key to resolving the conjecture (or transforming the conjecture into a stronger and better conjecture).

* Conjecture generalises to a broader conjecture that enjoys support of the types listed above. The twin prime conjecture, by itself, is difficult to support on its own; but when it comes with an asymptotic that one can then verify numerically to high accuracy and is a consequence of the much more powerful prime tuples conjecture (and more generally, the pseudorandomness heuristic for the primes) which is supported both because of its high falsifiability and also its nature as an optimal-looking converse (the only structure to the primes are the “obvious” structures), it becomes much more convincing. Another textbook example is the Poincare conjecture, which became much more convincing after being interpreted as a special case of geometrisation (which had a lot of support, e.g. the two-dimensional analogue, Haken manifolds, lots of falsifiable predictions, etc.).

It can be fun (though a little risky, reputation-wise) to debate how strong various pieces of evidence really are, but one soon reaches a point of diminishing returns, as often we are limited by our own ignorance, lack of imagination, or cognitive biases. But we are at least reasonably able to perform _relative_ comparisons of the strength of evidence of two conjectures in the same topic (I guess complexity theory is full of instances of this…).

Now for a pure guess, which one might classify proverbially as “generalizing from one data point”:

1. I guess that such integer-splitting polynomials p_n(X) exist for all n.
2. However, I conjecture that the constants in these polynomials cannot be built up from 0,1 by arithmetical circuits of size poly(n).

Note that the magnitude of these constants will be at least (2^n)^(2^n) = 2^{n·2^n}, since we have 2^n roots of magnitude up through 2^n. Their double-exponential magnitude does not rule out poly(n)-size circuits that use iterated squaring. However, my hunch is that the numbers P,Q,R,S… will not be near such easily-computed constants.
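The iterated-squaring point is worth making concrete (a small added illustration): $n$ squarings starting from 2 produce the doubly exponential number $2^{2^n}$, so doubly exponential magnitude alone cannot rule out small circuits.

```python
def iterated_square(n):
    x = 2
    for _ in range(n):
        x = x * x        # n squarings: 2 -> 2^2 -> 2^4 -> ... -> 2^(2^n)
    return x

print(iterated_square(3))   # 2^(2^3) = 256, using only 3 multiplications
```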

Well, I should say my guess 1. is the “hunch”—this could easily be a Diophantine situation where solutions stop at some finite point. If you allow that the negation of 1. implies 2. vacuously (since the polynomials and their constants wouldn’t exist), then I feel pretty strongly about 2. as a conjecture. This is based on relations to the heuristic hardness of integer factoring shown by this blog’s owner himself (R. Lipton, “Straight line complexity and integer factorization”, Proc. ANTS 1994, Springer LNCS 877, pp. 71–79), and relations to the complexity of building up integer constants shown by Peter Bürgisser (STACS 2007, Springer LNCS 4393, 2007, pp. 133–144).

We can also add heuristic arguments to Terry’s nice list. We can try to give support to conjectures (or to make conjectures) by various types of heuristic arguments. E.g., we can support conjectures in number theory by treating primes as random. I think the ability to raise, discuss, and debate heuristic arguments is not that developed in math.

A general comment on the post: it is an interesting question in mathematics and in science what it is that we really want. Is it “the truth” or is it “understanding”?

With regard to truth-versus-understanding, David Deutsch’s book The Fabric of Reality argues passionately that it is “understanding”.

Deutsch’s book IMHO is well worthwhile for readers of any age, and in particular, it makes a wonderful present for nieces, nephews, and young people in general … check out its 100+ Amazon reviews for reasons why.

Reflecting further along these lines, a pretty distinguished group of mathematicians, scientists, and engineers comes to mind for whom the answer is not “the truth” and not “understanding”, but rather “enterprise”: 20th century examples include von Neumann, Wiener, von Braun, Ramo, Moore, Venter, and (more especially) greens like Jane Goodall, Ed Wilson, and James Hansen. For these scientists (to speak metaphorically) “enterprise” is the natural progeny of “truth’s” marriage to “understanding.”

With reference to Deutsch’s Fabric of Reality, we find that “enterprise” appears on exactly one page (and then merely as a passing reference to science as a “problem-solving enterprise”), while “truth” appears on 23 pages, and “understanding” on 43 pages.

This to my mind reflects a notable lacuna in Deutsch’s description of science and mathematics in general, and the science and mathematics of quantum mechanics in particular; namely, the near-total neglect of enterprise as the equal partner of truth and understanding in math and science.

Perhaps the last great mathematician to distribute his talents and energies more-or-less equally among truth, understanding, and enterprise was von Neumann, and here I can offer a personal anecdote.

Von Neumann’s biographers largely portray his many enterprises as being driven by simplistic, right-wing political beliefs. But von Neumann’s brother Michael (whom I knew when I was a graduate student) told me that his brother’s internal moral and political beliefs were immensely more nuanced and far-sighted than visible in his public persona … that von Neumann had found it necessary to project a simplified public persona, in order to effectively lead large enterprises.

Obviously, it takes immense personal confidence—which von Neumann possessed in abundance—to willfully project a simplistic public persona; we are led to wonder whether this is why not too many mathematicians have succeeded in leading global-scale enterprises.

Interestingly, one mathematician who did take interest in these issues was Vladimir Arnold, who published a brief scholarly essay on the attribution of the epigraph to Pushkin’s Eugene Onegin; that epigraph reads: “He was vanity itself; yet he possessed in even greater measure the sort of pride that acknowledges deeds both good and bad with the same indifference—a pride that springs from a consciousness, perhaps mistaken, of superiority.”

So perhaps some measure of the prideful confidence that Pushkin describes, is necessary to the successful begetting of enterprise, from the union of truth and understanding, that math and science provide to us?

What a fun column! Especially because this column inspires us to look to the future, by asking ourselves: “What present-day widely-held conjectures are likely to be refuted in the future?” To help us, Mac Lane provides a heuristic: mathematics advances “by the treatment of many specific problems”. Thus we are led to ask: “What specific problems are presently being solved, by methods and/or yielding results that confound widely-held conjectures?”

Dick Lipton’s blog posts provide a good illustration of the usefulness of this heuristic, in their willingness to entertain the hypothesis that P is equal to NP. The technical professions all routinely attack NP-complete and NP-hard problems, with very good success: mathematicians by discovering theorems; scientists by discovering physical laws; engineers by certifying designs … all of these problems being (formally) NP-hard. So it’s no wonder that Dick (and many other researchers, including me) feel that key aspects of the P versus NP problem are presently not understood—otherwise how could we be solving these hard problems so routinely?

We can ask, what other conjecture-refuting hard problems are routinely solved in specific practical cases? The study of dynamical systems provides many examples. With regard to the much-studied problem of the stability of the solar system in particular, and Hamiltonian dynamics in general, the pendulum of opinion has swung back-and-forth for more than 300 years … to the great benefit of mathematics, science, and engineering!

Essential to this progress in (classical) dynamical understanding was the abandonment by mathematicians (beginning with Gauss and Riemann in the 19th century) of Euclid’s Fifth Postulate for dynamical state-spaces. Of course, ocean-sailing navigators and land surveyors had practically abandoned this postulate long before its abandonment was formalized by the mathematicians of Gauss’ and Riemann’s generation. And the fruits of this practical abandonment were wonderful; for example, immensely strong mathematical toolsets like Kolmogorov-Arnold-Moser (KAM) theory.

Nowadays this same pattern of postulate-rejection is being repeated, not for the real-valued state-space of classical mechanics, but for the complex-valued state-space of quantum mechanics. Mainly for practical reasons, quantum simulationists have already largely abandoned the quantum analogue of Euclid’s Fifth Postulate, which is to say, the global linearity of quantum state-space, for the pragmatic reason that this linear structure does not survive the pullback of quantum dynamics onto the product state-spaces that modern quantum simulation codes ubiquitously employ.

This situation exactly parallels the non-appearance of Euclid’s Fifth Postulate in Bowditch’s 1802 classic The New American Practical Navigator (a book of practical navigation, that is, on the dynamical paths of sailing ships on the non-Euclidean state-space of Earth’s oceans). These non-Euclidean mathematical tools proved to be so immensely valuable, from a purely practical point of view, that two hundred years later, Bowditch’s classic is still carried on-board every commissioned U.S. Naval vessel.

Now in the 21st century, we are faced with modern-day dynamical questions that exactly parallel those faced by the 19th century dynamicists like Gauss and Riemann (… and Lie, Killing, Cartan, Weyl, … and Mac Lane, Kolmogorov, Moser, Arnold, Smale, …). The first dynamical question is mathematical: Is it logically consistent to hypothesize that the (classical/quantum) dynamical state-spaces have a (non-Euclidean/non-Hilbert) geometry?. And if the answer to this mathematical question is “yes”—as already is strongly suggested by Mac Lane’s heuristic “mathematics advances by solution of specific problems”—then the second dynamical question is physical: Experimentally, does the (classical/quantum) state-space of Nature have a (Euclidean/Hilbert) geometry?

There is no need, therefore, for today’s mathematicians, scientists, and engineers to feel like Miniver Cheevy that they are “born too late”; to “love the days of old”; or to “curse the commonplace”.

Our fate is the opposite of Miniver Cheevy’s; we are at the beginning of a 21st century that will see greater advances in our understanding of dynamical mathematics, science, and engineering, than any previous century has seen. And our increasingly crowded, increasingly hot, increasingly resource-short planet urgently needs those dynamical advances.

The Guess: David Hilbert also believed there might be a proof that arithmetic is consistent. He proved partial results, and it seemed that the problem might be resolved in the near future. This is his second problem, again from his famous list.

The Truth: Gödel’s results, especially his Second Incompleteness Theorem showed that this was impossible. Consistency is unprovable in arithmetic unless it is actually inconsistent.

Pedantry aside, the “truth” is somewhat more complex.

Hilbert sought a ‘finitary’ proof showing that arithmetic is consistent. Goedel did not show that it is impossible to give such a proof. In fact he emphasised in his 1931 paper that:

It should be expressly noted that Theorem XI (and the corresponding results about M and A) in no way contradicts Hilbert’s formalistic standpoint. For the latter presupposes only the existence of a consistency proof carried out by finitary methods, and it is conceivable that there might be finitary proofs which cannot be represented in P (or in M or A).

The distinction may be significant from a computational perspective.

As part of his program for giving mathematical reasoning a finitary foundation, Hilbert proposed an omega-Rule as a finitary means of extending a Peano Arithmetic to a possible completion (i.e. to logically showing that, given any arithmetical proposition, either the proposition, or its negation, is formally provable from the axioms and rules of inference of the extended Arithmetic). In a contemporary context, this can be expressed as:

Hilbert’s omega-Rule: If it is proved that a formula [F(x)] of the first-order Peano Arithmetic PA interprets under the standard interpretation of PA as an arithmetical relation F(x) that is true for any given natural number n, then the PA formula [(Ax) F(x)] can be admitted as an initial formula (axiom) in PA.

Goedel’s 1931 paper on formally undecidable arithmetical propositions can, not unreasonably, be seen as the outcome of a presumed attempt to validate Hilbert’s omega-rule.

Since Brouwer had raised persisting doubts about whether the standard interpretation of arithmetic can be assumed to be logically sound – on the grounds that any such assumption implies Aristotle’s particularisation holds over the natural numbers – Goedel’s argument avoids any considerations that involve appealing to the truth of [F(x)] under the standard interpretation of PA.

Instead Goedel introduced the formal concept of omega-consistency for arithmetic:

Omega-consistency: PA is omega-consistent if, and only if, there is no PA formula [F(x)] such that [~(Ax) F(x)] is PA-provable and also that, for any PA-numeral [n], [F(n)] is PA-provable.

Goedel then constructively defined a PA formula [R(x)] and showed in his First Incompleteness Theorem that, if PA is assumed omega-consistent, then both [(Ax)R(x)] and [~(Ax)R(x)] are not PA-provable.

Assuming that Aristotle’s particularisation holds over N, Goedel then defined a number-theoretic relation Wid(PA) which holds in N if, and only if, PA is consistent.

Goedel showed in his Second Incompleteness Theorem that, if PA is assumed consistent, and we assume that some PA formula [W] expresses Wid(PA) in PA, then [R(x)] is PA-provable if [W] is PA-provable.

Ergo, if PA is omega-consistent, then we cannot express the assertion ‘PA is consistent’ in PA by a PA-provable formula.

Now the points to note are that:

1. PA is omega-consistent if, and only if, Aristotle’s particularisation holds over the domain N of the natural numbers.

2. If the standard interpretation of PA is logically sound, then Aristotle’s particularisation holds over N.

3. Aristotle’s particularisation over N can be expressed in contemporary terms as:

From an assertion such as:

‘It is not the case that, for any given x, any witness Witness_N of N can decide that P(x) does not hold in N’,

usually denoted symbolically by ‘~(Ax)~P(x)’, we may always validly infer that:

‘There exists an unspecified x such that any witness Witness_N of N can decide that P(x) holds in N’,

usually denoted symbolically by ‘(Ex)P(x)’.

4. The validity of Brouwer’s objection follows since Aristotle’s particularisation does not hold over N if we take the witness Witness_N as a Turing machine, and P(x) is a Halting-type of number-theoretic relation.

It follows that PA is not omega-consistent, and we cannot conclude from Goedel’s Incompleteness Theorems that Goedel’s [~(Ax)R(x)] is unprovable in PA.

The significance of the above is that issues of computational complexity – which reasonably should include those raised by Goedel in his letter to von Neumann – are finitary concerns involving number-theoretic functions and relations containing quantification over N that lie naturally within the domains of:

(a) First-order Peano Arithmetic PA, which attempts to capture in a formal language the objective essence of how a human intelligence intuitively reasons about number-theoretic predicates, and;

(b) Computability Theory, which attempts to capture in a formal language the objective essence of how a human intelligence intuitively computes number-theoretic functions.

Moreover, since Goedel had also shown in Theorem VII of his 1931 paper that every recursive relation can be expressed arithmetically, his formulation of the computational complexity of a number-theoretic problem in terms of formal arithmetical provability suggests that we ought to persist in seeking, conversely, an algorithmic interpretation of first-order PA in Computability Theory, so that any number-theoretic problem can be expressed – and addressed – formally in PA, and its solution, if any, interpreted algorithmically in Computability Theory.

Now, if [A(x1, x2, …, xn)] is an atomic formula of PA then, for any given sequence of numerals [b1, b2, …, bn], the PA formula [A(b1, b2, …, bn)] is an atomic formula of the form [c=d], where [c] and [d] are atomic PA formulas that denote PA numerals. Since [c] and [d] are recursively defined formulas in the language of PA, it follows from a standard result that, if PA is consistent, then [c=d] is algorithmically computable as either true or false in N.

In other words, if PA is consistent, then [A(x1, x2, …, xn)] is algorithmically decidable over N in the sense that there is a Turing machine TM_A that, for any given sequence of numerals [b1, b2, …, bn], will accept the natural number m if, and only if, m is the Goedel number of the PA formula [A(b1, b2, …, bn)], and halt with output 0 if [A(b1, b2, …, bn)] interprets as true in N; and halt with output 1 if [A(b1, b2, …, bn)] interprets as false in N.

Moreover, since Tarski has shown that the satisfaction and truth of the compound formulas of PA (i.e., the formulas involving the logical connectives and the quantifiers) under an interpretation of PA is definable inductively in terms of only the satisfaction (non-satisfaction) of the atomic formulas of PA, it follows that the satisfaction and/or truth of the formulas of PA under the usual interpretation of the PA symbols is algorithmically decidable.

So, if Aristotle’s particularisation does not hold over N, and the standard interpretation of PA is not logically sound, then – instead of considering Hilbert’s omega-rule – the question that one ought to consider is whether there is a sound algorithmic interpretation of PA such that:

Algorithmic omega-Rule: If it is proved that the PA formula [F(x)] interprets as an arithmetical relation F(x) that is algorithmically decidable as true for any given natural number n, then the PA formula [(Ax)F(x)] can be admitted as an initial formula (axiom) in PA.

This question is far more amenable to finitary reasoning, and leads to some far-reaching consequences that I am considering in this investigation.

I want to thank Dick for this wonderful topic “Guessing the Truth”, which inspired more entries in my BibTeX database than any other single post, on any math-science-engineering blog. Thank you, Dick!

I hope folks don’t mind if I list the BibTeX entries that “Guessing the Truth” inspired. That list begins with Richard Holmes’ The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science. The subsequent references develop the thesis that our 21st century is already a similar age of wonder, beauty, and terror.

To flesh out the historical theme of wonder, beauty, and terror, we need only read Patrick O’Brian’s (non-fiction) Joseph Banks: A Life and his (fictional) Aubrey/Maturin series (which can be read as an extended fictional treatment, in 19 volumes, of the Age of Wonder).

For the technical and mathematical details of the 18th–19th century Age of Wonder—in accord with the Ferengi Principle “How does that work, exactly?”—we refer to Nathaniel Bowditch’s 1826 edition of The New American Practical Navigator: Being an Epitome of Navigation.

For an explicit link to Dick’s essay—in particular its reference to Euclid’s Fifth Postulate—we turn to Carl Gauss’ 1827 Disquisitiones Generales Circa Superficies Curvas with its celebrated Theorema Egregium. It was Gauss’ article that, together with the work of Riemann, launched the modern era of differential geometry.

Where did Gauss’ and Riemann’s ideas come from? In the modern edition of Bowditch we read “Bowditch vowed while writing this (1802) edition to ‘put down in the book nothing I can’t teach the crew,’ and it is said that every member of his crew including the cook could take a lunar observation and plot the ship’s position.”

Does this mean that the common sailors of Bowditch’s era had achieved a sophisticated understanding of non-Euclidean geometry (meaning, comparable to our modern understanding) well in advance of the work of Gauss and Riemann?

The surprising answer is “yes”. In the 1826 edition of Bowditch we read the following account—which is startlingly modern—of what today would be called tangent spaces and geodesic equations: “In sailing on .. any course … the surface of the globe may be supposed to be divided into an indefinite number of small surfaces, as square miles, furlongs, yards, &c. which on account of their smallness, in comparison with the whole surface of the earth, may be esteemed as plane surfaces, and the difference of latitude and departure (or meridian distance) made in sailing over each of these surfaces, may be calculated by the common rules of Plane Sailing, and by summing up all the differences of latitude and departures made on these different planes, we shall obtain the whole difference of latitude and departure nearly”.

To make these modern geometric ideas perfectly clear to the tens of thousands of ordinary sailors who studied Bowditch’s book—and who daily applied these sophisticated mathematical ideas to navigation problems that were life-or-death matters for sailing ships—Bowditch appended to the above mathematical passage a footnote: “The error arising from this supposition will be decreased by increasing the number of the planes, so that by increasing the number indefinitely, the error may be made less than any assignable quantity.”
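Bowditch’s “indefinite number” of small plane surfaces is, in modern terms, numerical integration along the rhumb line. The following sketch of my own (the function names, course, and latitudes are arbitrary illustrations) compares the plane-sailing sum over n small legs against the exact Mercator-style formula for longitude made good, showing the error shrink as the number of planes grows—just as Bowditch’s footnote promises.

```python
import math

def exact_dlon(lat1, lat2, course):
    # Exact longitude change (radians) along a rhumb line, via the
    # Mercator "meridional parts" integral.
    mp = lambda phi: math.log(math.tan(math.pi / 4 + phi / 2))
    return math.tan(course) * (mp(lat2) - mp(lat1))

def plane_sailing_dlon(lat1, lat2, course, n):
    # Bowditch-style plane sailing: split the track into n small legs,
    # treat each leg as a plane triangle, and convert each leg's
    # departure to longitude at that leg's middle latitude.
    dphi = (lat2 - lat1) / n
    total = 0.0
    for i in range(n):
        mid_lat = lat1 + (i + 0.5) * dphi
        total += math.tan(course) * dphi / math.cos(mid_lat)
    return total

# Arbitrary example: steady course 40 degrees, sailing from 10N to 50N.
lat1, lat2 = math.radians(10.0), math.radians(50.0)
course = math.radians(40.0)
exact = exact_dlon(lat1, lat2, course)
for n in (1, 10, 100):
    err = abs(plane_sailing_dlon(lat1, lat2, course, n) - exact)
    print(f"{n:4d} planes: error {err:.2e} rad")
```

As n grows, “the error may be made less than any assignable quantity”: the plane-sailing sum is a midpoint-rule approximation to the meridional-parts integral.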

Having thus established that by the end of the eighteenth century even ordinary sailors had achieved—well in advance of Gauss and Riemann—a sophisticated working understanding of non-Euclidean geometry, we are led to consider whether the seeds of radical advances in 21st century mathematics may not already be evident?

Here I will offer the opinion that our century already has textbooks that are modern Bowditches, by which we navigate the still-largely-unexplored nanotechnological, biological, and informatic seas of our modern Age of Wonder. In the classical domain, one “Bowditchian” example is Frenkel and Smit’s Understanding Molecular Simulation: From Algorithms to Applications; in the quantum domain, we have the “Bowditchian” example of Nielsen and Chuang’s Quantum Computation and Quantum Information.

As for the “wonder, beauty, and terror” of our modern era, these are well described (for example) in a 1946 letter from von Neumann to Wiener. A modern, and to my mind similarly wonderful, extension of these von Neumann/Wiener ideas is the ongoing research of Scott Aaronson and Alex Arkhipov. And many other examples could be cited … `cuz hey … it’s an era of discovery!

Although in the 21st century we already have some of the Bowditches we need, we do not (as yet) have our Gausses and our Riemanns, which is to say, mathematicians who creatively abstract and generalize the mathematical principles that our 21st century navigators are already using. Neither has our 21st century (yet) accomplished its seminal (post-Hilbert) quantum experiments that will (hopefully) reveal to us more of the algebraic, geometric, and informatic structure of Nature’s state-space (whatever that state-space may be).

But we are still in the dawning of the modern Age of Wonder. Which is good news for young people!

For me, Dick’s “Guessing the Truth” essay has led to the encouraging insight that our century’s Bowditches likely will come before our Gausses and Riemanns. And because we have so many wonderful Bowditches already at hand, surely we can expect the 21st century to produce many new Gausses and Riemanns. Which is good news for everybody!

These historical reflections thus naturally lead us to the same optimistic view as Dirac: “A Golden Era is one in which ordinary people can make extraordinary contributions.” By Dirac’s criterion, we are all of us presently living at the beginning of what will be a Golden 21st Century Era of mathematics, science, and engineering.

Any “bets” on my example of a guess ventured in comments above? There’s also the issue that if one is interested in applying the identity as a heuristic to factor a given number N = pq, it may suffice to consider the constants modulo some prime r > N.

Gödel proved that an arithmetic theory T cannot prove its own consistency, but that arguably didn’t prevent solution of Hilbert’s second problem. Gentzen gave a finitary proof of the consistency of arithmetic in 1936 that is well known in proof theory. The catch is that it is not a finitary arithmetic proof. It can be phrased as a finitary proof over tree structures (basically, the parse trees of arithmetic sentences) or as an infinitary arithmetic proof (involving transfinite induction over the ordinal epsilon-nought, which is the order type of finite trees). In computer science terms we would say Gentzen’s proof used structural induction instead of arithmetic induction.
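The contrast between structural and arithmetic induction can be made concrete. In the sketch below (the `Tree` encoding is my own illustration, not from the comment), functions defined by recursion on finite trees terminate because every recursive call descends to a strict subtree; no numeric measure on the natural numbers is invoked.

```python
# Finite trees: a node with zero or more children.
class Tree:
    def __init__(self, children=()):
        self.children = tuple(children)

def height(t):
    # Structural recursion: each call is on a strict subtree, so the
    # definition is justified by induction on the tree structure,
    # not by induction over the natural numbers.
    if not t.children:
        return 0
    return 1 + max(height(c) for c in t.children)

def size(t):
    # Another structurally recursive definition: total node count.
    return 1 + sum(size(c) for c in t.children)

leaf = Tree()
t = Tree([Tree([leaf, leaf]), leaf])
print(height(t), size(t))  # 2 5
```

The order type of such finite trees (under a suitable ordering) is epsilon-nought, which is why structural induction on trees corresponds to transfinite induction up to that ordinal in Gentzen’s proof.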

Gentzen gave a finitary proof of the consistency of arithmetic in 1936 that is well known in proof theory.

… (involving transfinite induction over the ordinal epsilon-nought, which is the order type of finite trees). In computer science terms we would say Gentzen’s proof used structural induction instead of arithmetic induction.

There is an interesting point here on the use of such ‘finitary’ arguments in Computational Theory.

1. It is the Induction axiom schema of the first-order Peano Arithmetic PA that does not allow Gentzen’s proof to be carried out in any sound interpretation of PA.

Induction Axiom Schema of PA: For any formula [F(x)] of PA:

[F(0) -> ((Ax)(F(x) -> F(x’)) -> (Ax)F(x))]

2. Reason: The language of PA has no constant that interprets in any model of PA as the set of all natural numbers, and the Induction Axiom Schema of PA does not allow us to bypass this constraint by introducing an ‘actual’ (or ‘completed’) infinity disguised as an arbitrary constant – usually denoted by c or the ‘infinity’ symbol – into either the language, or a putative model, of PA.

Proof: [x=0 V ~(Ay)~(x=y’)] is PA-provable since it is true under any sound interpretation of PA. Hence, except for 0, every element in the domain of any sound interpretation of PA is a ‘successor’. Further, x can only be the ‘successor’ of a unique element in any such interpretation of PA. Since Cantor’s first limit ordinal omega is not the ‘successor’ of any ordinal in the sense required by the PA axioms, it is not in the domain of any sound interpretation of PA.

Hence Gentzen’s proof cannot be carried out in any such interpretation.

3. Now we can define the usual order relation ‘<’ in PA, so that every instance of the Induction Axiom Schema:

(i) [F(0) -> ((Ax)(F(x) -> F(x’)) -> (Ax)F(x))]

yields the PA theorem:

(ii) [F(0) -> ((Ax)((Ay)((y<x) -> F(y)) -> F(x)) -> (Ax)F(x))]

4. If we ‘weaken’ PA by replacing (i) by (ii), and introducing [<] as a primitive symbol, we get a Peano Arithmetic PA* which admits transfinite induction.

5. If we define PA** as the Arithmetic obtained by adding [c] as a primitive symbol to PA*, along with the denumerable axioms [0<c], [1<c], [2<c], …, then Gentzen's proof can be carried out IF PA** has a sound interpretation.

6. Since the PA** axioms can be interpreted without relativisation as ZF theorems, this last is the issue at the heart of the 'constructivity' debate on whether the axiom of infinity (which postulates the existence of an infinite ZF ordinal corresponding to c) allows ZF to have a sound interpretation.

7. So Gentzen's proof can be termed 'finitary' only if ZF is consistent.

8. However, I argue elsewhere that if PA has a sound algorithmic interpretation over the structure of the natural numbers, then ZF is inconsistent.

What if, in a general theory-of-everything kind of way, some singular human consciousness had used simple Yes (‘I should feel guilty’) or No (‘I do not feel guilty’) answers to answer every moral question that he could remember ever occurring in his life. In this way he could have become completely pure. He would have truly asked himself all the questions that could be asked of him. He would have emotionally evolved back to when he was a child.

What if he then assigned an ‘It doesn’t matter, as life is only made for making mistakes’ label to every decision that he had analysed.

This would not make him God or the devil, but just very still and very exhausted. Anybody can do this, but just for the purpose of this experiment let’s say I have done this. Which I have.

There are no fears in me, and if I died today I could deal with that, because who can know me better than myself? Neither God nor the Devil. I am consciously judging myself on ‘their’ level.

To make this work, despite my many faults, take ME as the ONLY universal constant. In this sense I have killed God and the Devil external to myself. The only place that they could exist is if I chose to believe they are internal.

This is obviously more a matter for a shrink than a mathematician, but that would only be the case depending on what you believed the base operating system of the universe to be: math, physics, morals, or some other concept.

As long as I agreed to do NOTHING, to never move or think a thought, humanity would have something to peg all understanding on. Each person could send a moral choice and I would simply send back only one philosophy: ‘LIFE IS ONLY FOR MAKING MISTAKES’.

People, for the purpose of this experiment, could disbelieve their belief system knowing they could go back to it at any time. It would give them an opportunity to unburden themselves to someone pure. A new Pandora’s box. Once everyone had finished I would simply format my drive, always leaving adequate space for each individual to send any more thoughts that they thought were impure. People would then eventually end up with clear minds and have to be judged only on their physical efforts, either good or bad. It would get rid of a lot of maybes, which would speed lives along.

If we then assume that there is a finite number of topics that can be conceived of (or that, at some point in the future, given a lot of hard work, there will be), then we can see that there will come a time when we, as a people, will NOT have to work again.

Once we reach that point we will only have the option of doing the things we love or doing the things we hate as society will be completely catered for in every possible scenario. People will find their true path in life which will make them infinitely more happy, forever.

In this system there is no point in accounting for time in any context.

If we consider this to be The Grand Unified Theory then we can set the parameters as we please.

These will be our common goals for humanity. As before, everyone would have their say. This could be a computer database that was updated the moment a unique individual thought of a new idea or direction that may or may not have been thought of before.

All that would be required is that every person on the planet have a mobile phone or access to the web, and a self-organising weighting algorithm based on an initial set of parameters that someone has set to push the operation in a random direction.

I have chosen these as I think the subjects will cross-breed information (word of mouth first) at the present-day optimum rate to get us to our finishing point: complete and finished mastery of all possible information.

Surely mastery of information in this way will lead us to the bottom of a fractal? What if one end of the universe’s fractal was me? What could we start to build with that concept?

As parameters, we can assume that we live only in 3 dimensions. We can place this box around The Milky Way galaxy as this gives us plenty of scope for all kinds of discoveries.

In this new system we can make time obsolete, as it only makes us contemplate things that cannot be solved because, up to now, no one has been willing to stand around for long enough. It has been holding us back.

All watches should be banned so that we find a natural rhythm with each other, those in our immediate area and then further afield.

An old clockwork watch in this system should only be used when you think of an event that you need to go to. It is a compass, a modern-day direction of thought.

A digital clock can be adapted to let you know where you are in relation to this new Milky Way boxed system (x, y, z).

With these two types of clocks used in combination, we can safely start taking small steps around the box, using the digital clock to see exactly where you are and the old watch to direct you as your thoughts come to you.

We can even assign as many atomic clocks as are needed to individually track stars, each and every star in the Milky Way galaxy.

I suppose a good idea would be to assume that I was inside the centre of the super-massive black hole at the centre of the galaxy. That would stop us from being so Earth-centric.

We could even go as far as to say that we are each an individual star and that we project all our energies onto the super-massive black hole at the centre of the galaxy.

You can assume that I have stabilized the black hole into a non-rotating perfect cube: six outputs/faces in which we all see ‘the universe and earth, friends’ etc. This acts like a black-hole mirror finish. Once you look, it is very hard to look away.

The six-face cube should make the maths easier to run, as you could construct the inner of the black hole with solid beams of condensed light (1 unit long) arranged into right-angled triangles with set squares in for strength.

Some people would naturally say that the universe is essentially unknowable, as the more things we ‘discover’, the more things there are to combine with and therefore talk about. This is extremely fractal in both directions. There can be no true answers because there is no grounding point, nothing for the people who are interested to start building their own personal concepts, theories, and designs on.

Is it possible that, with just one man becoming conscious of a highly improbable possibility, all of the universe’s fractals might collapse into one wave function that would answer all of humanity’s questions in a day? Could this be possible?

Answers to why we are here? To what the point of life really is?

Is it possible that this insignificant possibility could become an escalating one that would at some point reach 100%?

Could it be that at that point the math would figure itself out so naturally that we would barely need to think about it? We would instantly understand quantum theory and all.