We all know the game where a card of a predefined size, say 3x5 cm, is given to every contender, and whoever writes the biggest (positive) integer on his card wins. Naive answers are easily defeated by iteration of fast-growing functions; those are defeated by induction, and these by transfinite induction. However, if a system of axioms is fixed in advance, then we cannot pursue this strategy forever: for example, if we are only willing to accept Peano's Axioms (PA), then $f_{\alpha}(n)$ (where $\alpha$ is an ordinal number and $f$ is defined à la Ackermann) is computable (in the sense that the axioms ensure that a program to compute it exists and will terminate in finite time) when $\alpha < \epsilon_0$, but not when $\alpha = \epsilon_0$.
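For the finite stages of this hierarchy the definition is easy to make concrete (a toy Python sketch; the function name and the restriction to finite indices are mine, since the ordinal stages at and beyond $\omega$ cannot be enumerated this way):

```python
def f(k, n):
    """Fast-growing hierarchy at finite levels:
    f_0(n) = n + 1, and f_{k+1}(n) = f_k iterated n times starting from n."""
    if k == 0:
        return n + 1
    result = n
    for _ in range(n):
        result = f(k - 1, result)
    return result
```

Already $f_1(n) = 2n$ and $f_2(n) = n\cdot 2^n$, and $f_3$ grows like an iterated exponential tower; $f_\omega(n) = f_n(n)$ diagonalizes past all primitive recursive functions and grows roughly like the Ackermann function, while the stages below $\epsilon_0$ are exactly the ones PA can still prove total.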

This problem suggests two related questions to me:

It is known that the Ackermann function is well-defined inside PA, and that other functions which grow much faster, like the one in the strong version of the finite Ramsey Theorem of Paris-Harrington, or Goodstein's function whenever it grows fast (I think), or $f_{\epsilon_0}(9)$, cannot be defined everywhere just by application of PA, because they "grow too fast for PA". Is there a rigorous definition of what it means for a function to grow too fast for PA (or any other arithmetical axiom system)? Can we establish in any sense a "limit" for this process? For example, can we find a "threshold function" F, depending on the axioms, such that if f dominates F then f is not computable and if F dominates f then it is? (I'm thinking of something along the lines of the p-series, which converges whenever p>1 and diverges whenever p<=1.)
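Since Goodstein's function is mentioned above: the sequence itself is simple to compute for tiny seeds (a hedged Python sketch; the helper names are mine). PA proves each individual small instance below reaches 0, but cannot prove that this happens for every starting value:

```python
def bump(n, b):
    """Write n in hereditary base-b notation and replace every b by b+1."""
    if n == 0:
        return 0
    total, exp = 0, 0
    while n > 0:
        digit = n % b
        # each term digit * b**exp becomes digit * (b+1)**bump(exp)
        total += digit * (b + 1) ** bump(exp, b)
        n //= b
        exp += 1
    return total

def goodstein(m):
    """Goodstein sequence starting at m: bump the base, subtract 1, repeat.
    WARNING: do not try goodstein(4) -- it terminates, but only after
    astronomically many steps."""
    seq, b = [m], 2
    while m > 0:
        m = bump(m, b) - 1
        b += 1
        seq.append(m)
    return seq
```

For example, `goodstein(3)` passes through 3, 3, 3, 2, 1 before hitting 0, while already the seed 4 produces a sequence whose length dwarfs anything PA-provably-total functions can reach from such a small input.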

Building on the exposition above, Spencer observes that among experts this game is no fun and reduces to claims of legitimacy (over the validity of the axioms they are supposed to use): if we allow only a fixed number of characters for describing our number, and our axiom system is also fixed in advance, then there in fact IS a largest number describable in that system (and thus competitors would come to a draw). However, what happens if we consider the following metagame? Instead of fixing the axiom system beforehand, we allow every contender to (secretly) choose his own system of axioms for arithmetic, in the hope that his will allow faster-growing computable functions than those of the others. In doing this, the contender, while trying to extract more and more power from the axioms, runs the risk of actually ending up with an inconsistent system! Whoever gets the biggest (computable) number in a consistent axiom system wins.
Is this game interesting, or is it "flawed" too? In addition, inconsistency may be proved within the axiom system itself, but its consistency would have to be proven in a more powerful framework. Which one would you select, and why? What about the metametagame of leaving the choice of those frameworks to the players as well? Is that still interesting?

This reminds me of du Bois-Reymond's construction of a series intermediate between converging and diverging. Hausdorff debunked this idea by exhibiting a Hausdorff gap.
–
Yuval Filmus, Jul 22 '10 at 4:07


I will write down "your number + 1 if it is well defined, otherwise 1". I'm not quite sure what happens if my opponent writes down the same thing.
–
Richard Borcherds, Jul 22 '10 at 4:33

6 Answers

My first remark is that if you allow players to pick their own theories, but only allow consistent theories to win, then you will not be able to compute the winner of the contest. The reason is that the consistency of a theory is not in general a computable question.

To see this, suppose that we had an algorithm that could compute consistency of any given finite theory. Using it, I claim that we can solve the halting problem. Given any Turing machine program $p$ and input $n$, form the theory $T$ asserting the basic facts of arithmetic (a small fragment of PA suffices), plus the assertion that $p$ never halts on input $n$. If $p$ does halt on input $n$, then this theory is inconsistent. And if it doesn't, then the theory is consistent, being true in the standard model. So if we could check consistency, we could solve the halting problem. Since that is impossible, we cannot check consistency in a computable manner.

My second remark is that if all players work in a fixed consistent theory $T$, but play descriptions of numbers that provably (in $T$) define unique numbers, then you still won't be able to compute the winner. For example, perhaps one player plays the definition: the smallest-size proof of a contradiction in some theory $S$, if $S$ is inconsistent, and otherwise $10$; the theory $T$ can prove that this uniquely describes a number, without settling the question of Con(S). But if another player plays $12$, then you will seem to need to decide Con(S) in order to determine the winner, which is impossible by my previous remarks.

Joel Spencer asked us the three-by-five card question on about the first day of a class in logic. This would have been about 1977 at Stony Brook. He also gave this neat proof of incompleteness: number all computer programs that output a single number, ask whether there is a superprogram S that decides "Does program number n output n?", then derive a contradiction because S also has a number and... I forget what happens next, but it was short and convincing.
–
Will Jagy, Jul 22 '10 at 4:20

Thank you, Joel. I did not realize quite how good a teacher Spencer was at the time. He moved to Courant after I left for graduate school, I guess he is still there. Same with Jeff Cheeger.
–
Will Jagy, Jul 22 '10 at 5:17


If there were a program $S$ that determined whether the $n^{\rm th}$ program gives output $n$ on input $n$, then let $T$ be the program that, on input $n$, consults $S$ to check whether program $n$ outputs $n$ on input $n$; if so, it outputs $n+1$, and otherwise it outputs $n$. If $T$ is program number $t$, then $T$ gives output $t+1$ on input $t$ if and only if it gives output $t$, a contradiction. This implies the incompleteness theorem, since it shows that you can't prove all instances of the "otherwise" case, for if you could, then you could build such an $S$.
–
Joel David Hamkins, Jul 22 '10 at 11:52
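The diagonal step in this argument can be replayed mechanically (a toy Python sketch; the names and the framing of $S$ as a Python predicate are mine): given any purported total decider S for "program n outputs n on input n", the program T built from it refutes S at T's own index, whatever S answers there.

```python
def make_T(S):
    """Diagonal program: on input n, output n+1 if S claims that
    program n outputs n on input n, and otherwise output n."""
    def T(n):
        return n + 1 if S(n) else n
    return T

def S_is_refuted_at(S, t):
    """If T is the program with index t, then what T actually does at t
    disagrees with S's verdict at t -- for either possible verdict."""
    T = make_T(S)
    actually_outputs_t = (T(t) == t)
    return actually_outputs_t != S(t)
```

If S answers True at t, then T outputs t+1 (not t); if S answers False, then T outputs t after all. Either way S is wrong at T's index, which is the contradiction in the comment above.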


Both of my remarks should be tempered by the observation that if you really insist on the 3x5 limitation, then there are after all only finitely many things to write, and if each has an answer, then the outcomes will in fact be computable for this reason. What my arguments show is that there is no uniform method (uniform in the size of the card) of determining the winner.
–
Joel David Hamkins, Jul 27 '10 at 1:25

Although one can't find a threshold function with exactly the properties you ask for, there is often something quite close to a threshold function, given as follows:

For an integer n, consider all Turing machines of size n such that the given theory can prove they halt with a proof of size at most n. Then define f(n) to be the largest number of steps that any of these machines take before halting.

This function is computable if the theory satisfies some form of consistency, possibly $\omega$-consistency.
The theory cannot show that this function is computable, and although there are smaller functions that the theory cannot show are computable, they are in some sense not much smaller. So for most practical purposes it is a threshold function.
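Schematically, the definition above looks as follows (a purely illustrative Python sketch; real machine enumeration and proof search are of course not implementable as a lookup table, so the "theory" here is just a toy table listing each machine's size, its actual running time, and the size of its shortest halting proof, with None when the theory proves nothing):

```python
def threshold_f(n, machines):
    """f(n) = max number of steps taken by any machine of size <= n
    whose halting the theory proves with a proof of size <= n.
    `machines` is a toy table of tuples (size, steps, proof_size or None)."""
    best = 0
    for size, steps, proof_size in machines:
        if size <= n and proof_size is not None and proof_size <= n:
            best = max(best, steps)
    return best

# Toy "theory": three machines; the third halts (after a million steps)
# but the theory has no halting proof for it, so it never contributes.
toy = [(1, 5, 2), (2, 40, 3), (2, 10**6, None)]
```

Note how the third machine illustrates the point of the answer: its running time is real, but since the theory cannot certify its halting, f never sees it, and the theory correspondingly cannot prove f total.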

As you suspected, you do need $\omega$-consistency here; mere consistency is insufficient, since if the theory has no $\omega$-model, then it could be that the theory proves the program halts only in nonstandard time. For example, the program that searches for a proof of a contradiction in PA is provably halting in $PA+\neg\text{Con}(PA)$, but it doesn't actually halt. But if the theory is $\omega$-consistent, then when it proves that a program halts, it really does halt.
–
Joel David Hamkins, Jul 26 '10 at 20:07

"Building on the exposition above, Spencer observes that among experts this game is no fun and reduces to claims of legitimacy (over the validity of the axioms they are supposed to use): if we allow only a fixed number of characters for describing our number, and our axiom system is also fixed in advance, then there in fact IS a largest number describable in that system (and thus competitors would come to a draw)."

The problem with this, as I understand it, is that once you allow enough characters, even with a finite (but sufficiently strong) set of axioms, there is no longer a proof that the number you've described is the largest computable one, since you can't prove that all the other descriptions terminate with smaller numbers (i.e., this is effectively the halting problem in disguise). You know that there is a largest number, but you can never be certain that some given number is actually it.

Yes, I know of it, and I also think it is indeed lovely and a very well-written exposition! Sadly, it does not (if I remember correctly) directly address the issue I was thinking about (but I highly recommend it for introducing students to this "more-complex-than-it-seems-at-first-sight" theme!)
–
Jose Brox, Jul 27 '10 at 10:43

"For example, can we find a "threshold function" F, depending on the axioms, such that if f dominates F then f is not computable and if F dominates f then it will be? (I'm thinking about something among the lines of the convergence of the p-series for p>0 whenever p>1 and its divergence whenever p<=1)."

This is a well-studied phenomenon. The answer is no, and it follows from the theory of subrecursive degrees. Typically there will be a dense structure of subrecursive degrees in the threshold region. A possible standard reference for PRA might be:
H. Simmons, "The Ackermann functions are not optimal, but by how much?", Journal of Symbolic Logic 75(1):289–313, 2010.
There is also more extensive work by Lars Kristiansen et al on the subject.
Even some analogy with forcing pops up in this context (joint work with Sy Friedman et al.).