Harvey Friedman asked me for some clarifications about my recent post, and
I thought I should post them in case they worried other people too. Most
of it is pretty technical, but it gets more interesting after the **.
The measure of simplicity of formulas that I had in mind was something
like this: phi is simpler than psi if phi has fewer schematic letters
than psi, or if they have the same number of letters but phi has fewer
symbols than psi. For "has fewer symbols than" one might also substitute
"has fewer quantifiers than" or "is lexicographically before".
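The ordering just described can be sketched as a sort key. This is only an illustration, not anything from the post: formulas are taken as plain strings, and which characters count as schematic letters is an assumed convention.

```python
# Illustrative sketch of the proposed simplicity ordering: compare first by
# number of distinct schematic letters, then by symbol count, then
# lexicographically. Representing formulas as strings, and treating
# A, B, C as the schematic letters, are assumptions made for this sketch.

SCHEMATIC_LETTERS = set("ABC")  # assumed convention for schematic letters

def simplicity_key(phi: str):
    """Return a sort key: phi is simpler than psi iff key(phi) < key(psi)."""
    n_letters = len({c for c in phi if c in SCHEMATIC_LETTERS})
    n_symbols = len(phi.replace(" ", ""))  # symbols, ignoring whitespace
    return (n_letters, n_symbols, phi)

# A formula with two schematic letters counts as more complicated than a
# formula with only one, no matter how long the latter is:
assert simplicity_key("A(x) & B(x)") > simplicity_key("A(A(A(A(x))))")
```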
This does have the consequence that some formulas are more complicated
than infinitely many other formulas. But it seems to me that a formula
with two schematic letters is (at least in some relevant sense) more
complicated than any formula with only one schematic letter. Note that
for almost all ways of substituting schemeless formulas into the schemes,
the formula with two schematic letters will end up having more symbols
than the formula with only one.
On the other hand, for some purposes (e.g. for building a theory up from
Q or from EFA), it would be good to have a measure of simplicity in which
the set of formulas simpler than a given formula was always finite. For
this, I might suggest counting a schematic letter as worth n ordinary
symbols, for some fixed value of n. One might take n = 1000 (say, a value
beyond the limit of human readability).
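This weighted count can be sketched as follows; again the representation of formulas as strings and the choice of which characters are schematic letters are assumptions for illustration only. The point of the weighting is that only finitely many formulas are simpler than any given formula.

```python
# Sketch of the "finite-below" measure: each occurrence of a schematic
# letter is counted as worth N ordinary symbols (N = 1000, following the
# text). The string representation and letter convention are assumptions.

SCHEMATIC_LETTERS = set("ABC")  # assumed convention for schematic letters
N = 1000                        # weight of one schematic letter

def weighted_length(phi: str) -> int:
    """Symbol count with schematic letters counted as N symbols each."""
    return sum(N if c in SCHEMATIC_LETTERS else 1
               for c in phi if not c.isspace())

# A schematic formula is heavier than any short schemeless one:
assert weighted_length("A(x) -> A(s(x))") > weighted_length("x + 0 = x")
```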
A more principled choice of n would be the length in symbols of the
smallest schemeless formula A(x) for which {x: A(x)} is not computable
(or, more precisely: the smallest schemeless formula A(x) in the language
(0, s, plus, times) for which we currently have a proof in ZFC that {x:
A(x)} is not computable by primitive recursive means). If n is at least
50 or so, then the VDWT scheme will turn out simpler than the standard
induction scheme (with four schematic letters) or least number scheme
(with only three).
** So far as I know, all the formulas A(x) for which we currently know
that {x: A(x)} is not computable involve coding arbitrary finite sequences
of integers by integers. (This is certainly true of A(x) = "the xth
Turing machine halts" or "x Godel-codes a provable theorem".) As a result
they are all rather long, and not (to my taste) very number-theoretic.
I think it would be very interesting to find an elegant formula A(x),
number-theoretic at least in the sense of not involving the above coding,
for which we could prove that {x: A(x)} is not computable. The best
suggestion I've seen was Barry Mazur's outlandish (though surprisingly
well-justified) one that "x is the sum of three cubes of integers" might
work. (Note that these integers are allowed to be either positive or
negative, and that even so simple a question as whether thirty is the sum
of three cubes remains open after much computer work.)
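Mazur's candidate predicate is easy to test by brute force over a small box, which is enough to show what the question asks (though nothing like the serious searches, which use far larger ranges and much cleverer algorithms). This is an illustrative sketch only; the function name and search bound are my own.

```python
# Illustrative brute-force check of "x is the sum of three cubes of
# integers". Only the box |a|, |b|, |c| <= B is searched, so a None result
# is inconclusive in general -- except that numbers congruent to 4 or 5
# mod 9 provably have no representation at all, since cubes are 0 or +-1
# mod 9.

def three_cubes_witness(x, B=20):
    """Return (a, b, c) with a**3 + b**3 + c**3 == x, searching the box
    |a|, |b|, |c| <= B; return None if no witness lies in the box."""
    cubes = {a**3: a for a in range(-B, B + 1)}  # cube -> cube root
    for a in range(-B, B + 1):
        for b in range(a, B + 1):
            rest = x - a**3 - b**3
            if rest in cubes:
                return (a, b, cubes[rest])
    return None

w = three_cubes_witness(29)           # 29 has small representations
assert w is not None and sum(t**3 for t in w) == 29
assert three_cubes_witness(4) is None  # 4 = 4 mod 9: no representation
```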
--Matt Frank