Ultrafinitism is (I believe) a philosophy of mathematics that is not only constructive, but also does not admit the existence of arbitrarily large natural numbers. According to Wikipedia, it has been studied primarily by Alexander Esenin-Volpin. Doron Zeilberger has often expressed similar views on his opinions page.

Wikipedia also says that Troelstra said in 1988 that there were no satisfactory foundations for ultrafinitism. Is this still true? Even if so, are there any aspects of ultrafinitism that you can get your hands on coming from a purely classical perspective?

Edit: Neel Krishnaswami in his answer gave a link to a paper by Vladimir Sazonov (non-Springer link) that seems to go a ways towards giving a formal foundation to ultrafinitism.

First, Sazonov references a result of Parikh's which says that Peano Arithmetic can be consistently extended with a set variable $F$ and axioms $0\in F$, $1\in F$, $F$ is closed under $+$ and $\times$, and $N\notin F$, where $N$ is an exponential tower of $2^{1000}$ twos.
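To get a feel for how fast the tower in Parikh's bound grows, here is a small sketch in Python (the function name `tower` is mine, purely illustrative):

```python
# Iterated exponentials: tower(k) is a tower of k twos, 2^2^...^2.
def tower(k):
    result = 1
    for _ in range(k):
        result = 2 ** result
    return result

print(tower(1))  # 2
print(tower(2))  # 4
print(tower(3))  # 16
print(tower(4))  # 65536
# tower(5) = 2^65536 already has 19729 decimal digits;
# Parikh's N is tower(2^1000), far beyond anything computable.
print(len(str(tower(5))))  # 19729
```

Even `tower(6)` is hopelessly out of reach of physical computation, which is exactly the point of treating $N$ as infeasible.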

Then, he gives his own theory, wherein there is no cut rule and proofs that are too long are disallowed, and shows that the axiom $\forall x\ \log \log x < 10$ is consistent.

Winking emoticons aside, it's probably good to have people like Nelson and Zeilberger around, to test received opinions and keep people on their toes.
–
Todd Trimble♦Oct 30 '10 at 12:53

25

I hope everyone has heard the story told by Harvey Friedman: "I have seen some ultrafinitists go so far as to challenge the existence of 2^100 as a natural number, in the sense of there being a series of 'points' of that length. There is the obvious 'draw the line' objection, asking where in 2^1, 2^2, 2^3, … , 2^100 do we stop having 'Platonistic reality'? Here this … is totally innocent, in that it can easily be replaced by 100 items (names) separated by commas. I raised just this objection with the (extreme) ultrafinitist Yesenin-Volpin during a lecture of his." (cont...)
–
Todd Trimble♦Oct 30 '10 at 22:37

42

"He asked me to be more specific. I then proceeded to start with 2^1 and asked him whether this is 'real' or something to that effect. He virtually immediately said yes. Then I asked about 2^2, and he again said yes, but with a perceptible delay. Then 2^3, and yes, but with more delay. This continued for a couple of more times, till it was obvious how he was handling this objection. Sure, he was prepared to always answer yes, but he was going to take 2^100 times as long to answer yes to 2^100 than he would to answer 2^1. There is no way that I could get very far with this."
–
Todd Trimble♦Oct 30 '10 at 22:38

3

I thought astronomers have seen quasars (whatever those are) with their telescopes whose distance from here is about 2^100 millimeters.
–
Zsbán AmbrusNov 1 '10 at 10:41

8 Answers
8

Wikipedia also says that Troelstra said in 1988 that there were no satisfactory foundations for ultrafinitism. Is this still true? Even if so, are there any aspects of ultrafinitism that you can get your hands on coming from a purely classical perspective?

There are no foundations for ultrafinitism as satisfactory for it as (say) intuitionistic logic is for constructivism. The reason is that the question of what logic is appropriate for ultrafinitism is still an open one, for not one but several different reasons.

First, from a traditional perspective -- whether classical or intuitionistic -- classical logic is the appropriate logic for finite collections (finite in the decidable sense; merely Kuratowski-finite collections need not behave classically). The idea is that a finite collection is surveyable: we can enumerate and inspect each element of any finite collection in finite time. (For example, the elementary topos of finite sets is Boolean.) However, this is not faithful to the ultra-intuitionist idea that a sufficiently large collection is impractical to survey.

So it shouldn't be surprising that more-or-less ultrafinitist logics arise from complexity theory, which identifies "practical" with "polynomial time". I know two strands of work on this. The first is Buss's work on $S^1_2$, a weakening of Peano arithmetic whose induction principle recurses on the binary representation of a number:

$$\varphi(0) \wedge \forall x\,\bigl(\varphi(\lfloor x/2 \rfloor) \rightarrow \varphi(x)\bigr) \rightarrow \forall x\,\varphi(x)$$

Any proof of a $\forall\exists$ statement in this theory is realized by a polynomial-time computable function. There is a line of work on bounded set theories, based on Buss's logic, with which I am not very familiar.
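A toy illustration of why this induction principle is "polynomial" (my sketch, not Buss's formalism): recursing from $\lfloor x/2 \rfloor$ to $x$ reaches $x$ in about $\log_2 x$ steps, polynomially many in the length of $x$'s binary representation, whereas ordinary induction takes $x$ steps.

```python
def induction_steps_ordinary(x):
    # Ordinary induction descends x -> x-1 -> ... -> 0: x steps.
    steps = 0
    while x > 0:
        x -= 1
        steps += 1
    return steps

def induction_steps_pind(x):
    # Polynomial induction descends x -> floor(x/2) -> ... -> 0:
    # about log2(x) steps, polynomial in the bit-length of x.
    steps = 0
    while x > 0:
        x //= 2
        steps += 1
    return steps

print(induction_steps_ordinary(1000))  # 1000
print(induction_steps_pind(1000))      # 10, since 2^9 < 1000 < 2^10
```

So induction along the halving relation costs only as much as the *written size* of the number, which is the feasible quantity.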

The second is a descendant of Bellantoni and Cook's work on programming languages for polynomial time, and Girard's work on linear logic. The Curry-Howard correspondence takes functional languages, and maps them to logical systems, with types going to propositions, terms going to proofs, and evaluation going to proof normalization. So the complexity of a functional program corresponds in some sense to the practicality of cut-elimination for a logic.

IIRC, Girard subsequently showed that for a suitable version of affine logic (light affine logic), cut-elimination takes polynomial time. Similarly, you can build set theories on top of affine logic. For example, Kazushige Terui has since described a set theory, Light Affine Set Theory, whose ambient logic is light affine logic, and in which the provably total functions are exactly the polytime functions. (Note that this means that for Peano numerals, multiplication is total but exponentiation is not --- so Peano and binary numerals are not isomorphic!)
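A toy illustration of the size argument behind that last remark (my sketch, not from Terui's paper): in unary ("Peano") notation the numeral for $n$ literally has size $n$, so multiplication has polynomially sized output but exponentiation does not.

```python
# Unary ("Peano") numerals: the numeral for n is just n strokes,
# so its size as a string equals the number it denotes.
def unary(n):
    return "|" * n

m, n = 200, 300
# Multiplication: the output numeral has size m*n -- polynomial
# in the sizes of the input numerals.
assert len(unary(m * n)) == len(unary(m)) * len(unary(n))

# Exponentiation: the output numeral for 2^k has size 2^k,
# exponential in the size of a binary input, so no polytime
# function can even write it down.
for k in (10, 20):
    print(k, len(unary(2 ** k)))
# 10 1024
# 20 1048576
```

This is why a polytime-bounded theory can treat unary multiplication as total while refusing to totalize exponentiation.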

These proof-theoretic questions arise because part of what makes the ultra-intuitionist conception of the numerals coherent is precisely the denial of large proofs. If you deny that large integers exist, then a proof of their existence that is itself larger than the biggest number you accept doesn't count! I enjoyed Vladimir Sazonov's paper "On Feasible Numbers", which studies this connection explicitly.

I should add that I am not a specialist in this area, and what I've written is just the fruits of my interest in the subject -- I have almost certainly overlooked important work, for which I apologize.

Taking cut away is really brutal, though -- it's mathematics with no lemmas allowed! This is where linear logic shines: it lets you bound the size of the expansion from the abbreviation power of lemmas.
–
Neel KrishnaswamiOct 30 '10 at 18:21

I don't understand the "weaker induction principle" you mention. It seems that you can use it to prove that for all x: x=0.
–
Sune JakobsenOct 31 '10 at 8:51

1

@Sune: This is a flooring division, so $x/2$ gives the largest integer less than or equal to $\frac{x}{2}$. For example, $3/2 = 1$. In particular, this means that in general we only have $2 \times \lfloor x/2 \rfloor \leq x$.
–
Neel KrishnaswamiOct 31 '10 at 10:12

4

A clarification: Buss’s polynomial induction schema is equivalent to usual induction. It only gets weak when you severely restrict the class of formulas allowed in the schema, and this works for usual induction too. When the schemata are restricted to a class of formulas not closed under bounded quantification, the two schemata are no longer equivalent, hence both are used to axiomatize different theories of bounded arithmetic (such as $S^1_2$). However, what makes these theories weak is the restriction on formula complexity, not the form of the induction schema.
–
Emil JeřábekJun 7 '11 at 14:51

I've been interested in this question for some time. I haven't put any serious thought into it, so all I can offer is a further question rather than an answer. (I'm interested in the answers that have already been given though.) My question is this. Is there a system of logic that will allow us to prove only statements that have physical meaning? I don't have a formal definition of "physically meaningful" so instead let me try to illustrate what I mean by an example or two.

Consider first the statement that the square root of 2 is irrational. What would be its physical meaning? A naive suggestion would be that if you drew an enormous grid of squares of side length one centimetre and then measured the distance between (0,0) and (n,n) for some n, then the result would never be an integer number of centimetres. But this isn't physically meaningful according to my nonexistent definition because you can't measure to infinite accuracy. However, the more finitistic statement that the square root of 2 can't be well approximated by rationals has at least some meaning: it tells us that if n isn't too large then there will be an appreciable difference between the distance from (0,0) to (n,n) and the nearest integer.
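A numeric sketch of this "physically meaningful" version (my framing; the crude bound $1/(3n)$ follows from $|2n^2 - m^2| \geq 1$ for integers $m$):

```python
from math import sqrt

# The diagonal of an n-by-n grid of unit squares has length n*sqrt(2);
# its distance to the nearest integer is at least roughly 1/(3n),
# an appreciable, in-principle-measurable gap for moderate n.
def gap(n):
    d = n * sqrt(2)
    return abs(d - round(d))

for n in (10, 100, 1000):
    print(n, gap(n))

# The scaled gap n*gap(n) stays bounded away from zero; its infimum,
# approached along continued-fraction convergents of sqrt(2), is
# 1/(2*sqrt(2)) ~ 0.354 > 1/3.
assert all(n * gap(n) > 1 / 3 for n in range(1, 10000))
```

So the finitistic statement has genuine quantitative content for every feasible grid size, with no appeal to infinite precision.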

As a second example, take the statement that the sum of the first n positive integers is n(n+1)/2. If n is too huge, then there is no hope of arranging a huge triangular array and counting how many points are in it. So one can't check this result experimentally once n is above a certain threshold (though there might be ingenious ways of checking it that are better than the obvious method). This shows that we can't apply unconstrained induction, but there could be a principle that said something like, "If you keep on going for as long as is practical, then the result will always hold."
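The "obvious method" of checking can be sketched for feasible n, together with the pairing trick $(1+n) + (2+(n-1)) + \cdots$ (a sketch; the function names are mine):

```python
# Checking sum(1..n) = n(n+1)/2 "experimentally" for feasible n --
# the enumeration stops being practicable once n is too large.
def sum_by_counting(n):
    return sum(range(1, n + 1))   # simulate counting the array

def sum_by_formula(n):
    return n * (n + 1) // 2

for n in (0, 1, 10, 1000, 10**6):
    assert sum_by_counting(n) == sum_by_formula(n)

# The pairing argument, for even n: n/2 pairs each summing to n+1.
n = 1000
pairs = sum(k + (n + 1 - k) for k in range(1, n // 2 + 1))
assert pairs == sum_by_formula(n)
print(sum_by_formula(1000))  # 500500
```

The experiment is only evidence up to whatever threshold of n is practical; the question in the text is what logical status the statement has beyond it.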

One attitude one might take is that this would be to normal classical mathematics as the use of epsilons and deltas is to the mathematics of infinities and infinitesimals. One could try to argue that statements that appear to be about arbitrarily large integers or arbitrarily small real numbers (or indeed any real numbers to an arbitrary accuracy) are really idealizations that are a convenient way of talking about very large integers, very small real numbers and very accurate measurements.

If I try to develop this kind of idea I rapidly run into difficulties. For example, what is the status of the argument that proves that the sum of the first n integers is what it is because you can pair them off in a nice way? In general, if we have a classical proof that something will be the case for every n, what do we gain from saying (in some other system) that the conclusion of the proof holds only for every "feasible" n? Why not just say that the classical result is valid, and that this implies that all its "feasible manifestations" are valid?

Rather than continue with these amateur thoughts, I'd just like to ask whether similar ideas are out there in a better form. Incidentally, I'm not too fond of Zeilberger's proposal because he has a very arbitrary cutoff for the highest integer -- I'd prefer something that gets fuzzier as you get larger.

Edit: on looking at the Sazonov paper, I see that many of these thoughts are in the introduction, so that is probably a pretty good answer to my question. I'll see whether I find what he does satisfactory.

In this system of logic, should the Poincare recurrence theorem be provable? In some way it contradicts the second law of thermodynamics...
–
Łukasz GrabowskiNov 4 '10 at 18:16

1

If I am not making a mistake, the mathematical philosophy you are referring to is called actualism. I also think finite model theory is relevant here; they use relational languages (no function symbols like + and .) to avoid forcing the models to be infinite.
–
KavehNov 9 '10 at 23:51

Here is an ultrafinitist manifesto I co-wrote a few years ago:

http://arxiv.org/pdf/cs/0611100v1

It is absolutely not a finished paper (there are a few inaccuracies, and many parts are sloppy), but it contains a brief history of ultrafinitistic ideas from the early Greeks all the way to the present, a number of references, as well as a sketch of a programme towards a model theory and a proof theory of ultrafinitistic mathematics.

You may also want to google the FOM list on "ultrafinitism", there are a few posts by Podnieks, Sazonov, myself, and a few others pro and contra ultrafinitism.

I've always thought that assuming a formalist position (i.e., mathematics is merely the manipulation of symbols) easily allows for an ultrafinitist position. The formalist may easily grant that $b=10^{10^{10^{10}}}$ is a formal number, in the sense that it is permissible in the grammar,

(e.g., $\log_{10}(10^{10^{10^{10}}})>10^{9^8}$ is TRUE)

without it being an ontological number (case in point: there is no string of characters which is $10^{10^{10}}$ characters long, the length of the would-be decimal representation of $b$).
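One can make the "no such string" point quantitative with a short sketch (plain Python; the helper name is mine):

```python
# We can compute the LENGTH of b's would-be decimal representation
# without writing b down: 10^k has exactly k+1 decimal digits.
def digits_of_power_of_ten(k):
    return k + 1

assert digits_of_power_of_ten(3) == len(str(10 ** 3))  # 1000 has 4 digits

# For b = 10^(10^(10^10)), the digit count is 10^(10^10) + 1 -- itself
# a number whose own decimal representation has 10^10 + 1 digits, far
# more characters than could ever physically be written.
print(digits_of_power_of_ten(10 ** 10))  # 10000000001
```

So the formal system happily reasons *about* the length of the string while the string itself has no ontological standing, which is exactly the formalist's distinction.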

Similarly, axioms which fool us into thinking they are about infinity are happily read as finite strings, from which point they may be perfectly acceptable.

From this point of view, the entities which might otherwise be numbers, but will be rejected here, are not those which are expressible in a few characters or even a few pages of characters, but those for which no human will ever come close to expressing, calculating with, etc.

I am not advocating formalism here, but it seems to make ultrafinitism philosophically defensible. As I have put it, it also makes ultrafinitism inconsequential, except as a philosophical point.

To manipulate symbols you need to accept something like finitism. Remember that Hilbert himself was fine with finitism; the goal of his formalism was reducing the higher stuff to finitism.
–
KavehSep 28 '11 at 11:20

One could take ETCS - which is a finite, first order axiomatisation of the category of sets - and remove the axiom that guarantees the existence of a natural numbers object. Then in this set up one can prove the existence of sets of finite cardinality, but not the existence of a set with cardinality $\geq \aleph_0$. Moreover one could weaken this to a finitist version of Palmgren's constructive, predicative version of ETCS, which would be a well-pointed $\Pi$-pretopos with enough projectives.

This latter version minus function sets (the '$\Pi$' in $\Pi$-pretopos) would perhaps be closer to Nelson's idea, because at one point he expresses doubts about the finiteness of $n^m := \{1,\ldots,n\}^{\{1,\ldots,m\}}$ for large $n$ and $m$. EDIT: I should say that in a formal setting this would translate to the unprovability of the statement "$n^m$ is finite", which would be the case in a model of a "finite set theory" without function sets.

Or one can just work with Nelson's arithmetic, which is the most ultrafinitist thing I know. For example, exponentiation is not a total function in his theory.

Still a minor error: you identify the "$\Pi$" in "$\Pi$-pretopos" as power sets, but it's actually function sets (exponentials). Palmgren's theory, being (weakly) predicative, already lacks power sets (and it's possible to have function sets without power sets because of the intuitionistic logic).
–
Toby BartelsSep 9 '11 at 22:24

Hm, ok. Would it be ok if I just changed it to 'latter version minus function sets (the '$\Pi$'..'
–
David RobertsSep 9 '11 at 23:14

There is an argument against Nelson's predicative arithmetic which says, roughly, that the assumption that exponentiation is not total --- in some sense the whole reason for starting predicative arithmetic --- implies the inconsistency of predicative arithmetic.

The question in the title suggests that there already is a formal foundation (call it $\mathcal F$) for finitism. Maybe ZFC minus Axiom of Infinity would be such a foundation.

It might then make sense to argue philosophically that $\mathcal F$ is really a formal foundation for ultra-finitism, after all.

According to Wittgenstein, the meaning of a word or phrase is encapsulated by its use. Now consider a natural number $n_0$ that is so large that no use, or even mention, of this specific number could possibly be made by humans, either directly or indirectly. Then $n_0$ does not really have any meaning. And so what ZFC-Inf really "means", is ultra-finitism.

This is not totally unreasonable, but my understanding is that we are easily able to express numbers which an ultrafinitist would view with disdain, such as $10^{10^{10}}$ (recalling that a googol exceeds our current estimates of the number of fermions in the universe).
–
Niel de BeaudrapOct 30 '10 at 13:33

2

ZFC-Infinity can prove the totality of $<\epsilon_0$-recursive functions. This goes a long way beyond even the Ackermann function, which is already very, very far from ultrafinitism. Proper ultrafinitists don't believe in factorial!
–
Daniel MehkeriNov 1 '10 at 1:17

@Niel, Daniel: Thanks for the feedback. Actually if we identify meaning with use-ability then possibly $10^{10^{10}}$ and $n!$ are much less used/useful than $10^{10}$ and $n\cdot m$. Some kind of decreasing meaningfulness is necessary to avoid Berry's paradox (let $n_0$ be the least meaningless number...)
–
Bjørn Kjos-HanssenNov 1 '10 at 2:53

1

@Bjørn: Maybe, but there are other ways out of the "first non-feasible number" problem. It does not follow from mathematical induction that all inhabited subsets of the natural numbers have a first element - that requires classical logic, so non-constructive, so presumably, a fortiori, non-ultrafinitist. Plus an ultrafinitist can also deny full mathematical induction (which is the essence of bounded arithmetic for example).
–
Daniel MehkeriNov 1 '10 at 16:53

@Daniel: that's an interesting point, too. Well, the OP wrote "are there any aspects of ultrafinitism that you can get your hands on coming from a purely classical perspective?" so I guess classical logic is allowed here?
–
Bjørn Kjos-HanssenNov 1 '10 at 17:28

With sequences one can make the usual recursive definitions of addition, multiplication, and exponentiation, and then towers of powers. The theory will not, of course, be able to prove any of them total.
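The recursive definitions in question can be sketched in Python (an informal illustration only; the formal theory cannot prove these functions total):

```python
# Peano-style recursive definitions built from successor.
def succ(n):
    return n + 1

def add(a, b):
    return a if b == 0 else succ(add(a, b - 1))

def mul(a, b):
    return 0 if b == 0 else add(mul(a, b - 1), a)

def exp(a, b):
    return 1 if b == 0 else mul(exp(a, b - 1), a)

def tower(a, b):  # iterated exponentiation, a tower of b copies of a
    return 1 if b == 0 else exp(a, tower(a, b - 1))

assert add(3, 4) == 7 and mul(3, 4) == 12 and exp(2, 10) == 1024
assert tower(2, 3) == 16  # 2^(2^2)
# tower(2, 5) is already infeasible here: the recursion depth tracks
# the size of the intermediate numbers themselves.
```

Even in this toy form, the blow-up in recursion depth mirrors the point in the answer: each definition is unproblematic as a *definition*, while totality is exactly what fails.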

So the ultrafinitist who has any particular idea which numbers are permissible and which are not can simply add in the axioms he wants, such as

E/ "the product of 100 and 100 exist" and

F/ "a tower of 10 powers of 2 does not exist".

IMHO, these assumptions are not of any mathematical interest, since the system without Axiom 3 is capable of proving many mathematical theorems (Quadratic Reciprocity...), and adding any axioms such as E or F only adds trivial capabilities to prove additional theorems. So it is better, mathematically at least (and IMHO philosophically), to be agnostic about the successor axiom, rather than an atheist or a theist.