Today, we begin a historical journey from actual infinity to potential infinity and back again. We will part ways with ultrafilters for this first leg; they will get a needed rest before making a spectacular reappearance on the return trip.

In a previous post, we discussed the use of infinitesimals by Galileo and some of his contemporaries and the controversy that they caused in the scientific and religious circles of Europe during the 17th century. At the end of the 17th century, the infinitesimal took an even larger role in mathematics as one of the central players in the calculus of Newton and Leibniz. To illustrate how infinitesimals were used, let us consider the derivative, one of the fundamental concepts of calculus.

Suppose $f$ is a function. The derivative of $f$ (which exists, provided $f$ is a sufficiently well-behaved function) measures the rate of change of the value of $f$ with respect to changes in its input. For example, suppose the variable $x$ denotes time and $f(x)$ denotes the position of a car along a road at time $x$. Then the derivative of $f$ at a time $x$ (denoted, in Leibniz's notation, by $\frac{\mathrm{d}f}{\mathrm{d}x}$) measures the velocity of the car at time $x$.

If $f$ is a linear function (so its graph is a straight line), then the derivative of $f$ is simply the slope of that line. In our example, this would correspond to a situation in which the car was traveling at a constant velocity. But what if $f$ is a more complicated function? In this case, a fruitful approach is to try to approximate the graph of $f$ by straight lines. (This is a common tactic in mathematics: given a complicated structure that you don’t know how to deal with, try to approximate it with simpler things that you do know how to deal with.)

In practice, this might look as follows. To find the derivative of $f$ at time $x$, choose a time $y$ close to (but different from) $x$, find the line connecting the points $(x, f(x))$ and $(y, f(y))$, and find the slope of this line. Doing the algebra, this slope turns out to be:

$$\frac{f(y) - f(x)}{y - x}.$$

If we rewrite $y$ as $x + h$, then this becomes:

$$\frac{f(x+h) - f(x)}{h}.$$
The situation is illustrated in the following picture.

The derivative of f at x is approximated by the slope of the line connecting (x, f(x)) and (x+h, f(x+h)).
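In code, the secant-line slope from the picture can be computed directly; here is a minimal, illustrative Python sketch (the names `secant_slope`, `f`, `x`, and `h` are my own, chosen to match the notation above):

```python
def secant_slope(f, x, h):
    # Slope of the line connecting (x, f(x)) and (x + h, f(x + h)).
    return (f(x + h) - f(x)) / h

# Example: for f(t) = t**2, the secant slope at x = 1 with h = 0.5
# is ((1.5)**2 - 1**2) / 0.5 = 2.5, approximating f'(1) = 2.
print(secant_slope(lambda t: t ** 2, 1.0, 0.5))  # 2.5
```

Shrinking `h` toward zero drives the secant slope toward the tangent slope, which is exactly the tension the next paragraph explores.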

The smaller $h$ becomes, the closer the slope of the line between $(x, f(x))$ and $(x+h, f(x+h))$ gets to the actual derivative of $f$ at $x$ (which is the slope of the line tangent to the curve at the point $(x, f(x))$). Therefore, one might think, in order to find the true derivative of $f$, we just have to take $h$ to be really small. How small is this? Well, it certainly needs to be smaller in magnitude than every positive real number since, as the picture above illustrates, taking $h$ to be a non-zero real number just gives an approximation (though an arbitrarily good one) to the actual derivative. However, it cannot be equal to $0$, since, if $h = 0$, then

$$\frac{f(x+h) - f(x)}{h} = \frac{f(x) - f(x)}{0} = \frac{0}{0},$$
which is of course undefined. The obvious answer, to those who accept the use of such things, at least, is to let $h$ be a non-zero infinitesimal. This is essentially the approach taken by Newton and Leibniz. (I am of course simplifying things, but this is the general idea.)
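One modern descendant of calculating with an infinitesimal increment is automatic differentiation via dual numbers, in which a formal $\epsilon$ satisfies $\epsilon^2 = 0$. The following Python sketch is purely illustrative (it is not the 17th-century method of fluxions, and all names here are my own):

```python
from dataclasses import dataclass

@dataclass
class Dual:
    # Represents a + b*eps, where eps is an "infinitesimal" with eps**2 = 0.
    real: float
    eps: float = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0.
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    __rmul__ = __mul__

def derivative(f, x):
    # Evaluate f at x + eps; the eps-coefficient of the result is f'(x).
    return f(Dual(x, 1.0)).eps

print(derivative(lambda t: t * t + 3 * t, 2.0))  # 2*2 + 3 = 7.0
```

The point of the sketch: evaluating a polynomial at $x + \epsilon$ and reading off the coefficient of $\epsilon$ produces the exact derivative, with no division by a vanishing quantity ever taking place.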

The formulation of calculus by Newton and Leibniz was a great success and led to many further advances in science and mathematics, but the controversy surrounding the use of infinitesimals did not go away. As we mentioned in the earlier post, indiscriminate use of infinitesimals often led to paradox. There were also attacks on ontological grounds. Opponents of the use of infinitesimals claimed that their proponents wanted things both ways: on the one hand, infinitesimals should have positive magnitude to avoid undefined expressions such as $\frac{0}{0}$; on the other hand, this positive magnitude should be smaller than any real positive magnitude. These two demands seemed contradictory. In addition, there is the observation that, if $\epsilon$ is a positive infinitesimal, then $\frac{1}{\epsilon}$ would be a positive infinite number, clashing with the orthodoxy of the time that actual infinity should not be present in mathematics.

These objections essentially amounted to an appeal to what came to be known as the Archimedean property as applied to the set of real numbers. Briefly, the Archimedean property states that there are no infinite or infinitesimal elements in a given structure. More precisely (in the context of the real numbers, though the formulation is the same for any ordered field, for example), the Archimedean property states that, if $a$ and $b$ are any two positive numbers, then there is a natural number $n$ such that $na > b$. The Archimedean property appeared in Euclid’s Elements and was given its name by the mathematician Otto Stolz in the 1880s. Stolz did much work on extensions of the real numbers that do not satisfy the Archimedean property. Interestingly and rather surprisingly, Cantor called this work an “abomination” and published an attempted sketch of a proof of the non-existence of infinitesimals.

An illustration of the Archimedean property, which essentially states that any positive magnitude can be covered by finitely many copies of any other. Here, 4 copies of A are sufficient to cover B.
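For standard positive reals, the witnessing $n$ can always be found by simply piling up copies of $a$ until they exceed $b$; a minimal sketch (the function name is my own):

```python
def archimedean_n(a, b):
    """Return a natural number n with n * a > b, for positive reals a, b.

    Such an n always exists in the standard reals -- that is precisely
    the Archimedean property."""
    n = 1
    while n * a <= b:
        n += 1
    return n

print(archimedean_n(1.0, 3.5))  # 4: four copies of a magnitude 1 cover 3.5
```

If $a$ were a positive infinitesimal and $b = 1$, this loop would never terminate, which is one way to see that infinitesimals violate the property.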

The standard real numbers, $\mathbb{R}$, satisfy the Archimedean property. However, if $\mathbb{R}$ is extended to include infinitesimals, then the Archimedean property fails. To see this, let $a$ be a positive infinitesimal, and let $b = 1$. Since $a$ is an infinitesimal, we must have $a < \frac{1}{n}$ for every natural number $n$, so $na < 1 = b$ for all natural numbers $n$, and thus $a$ and $b$ witness the failure of the Archimedean property.

One of the most prominent opponents of infinitesimals was Bishop Berkeley, who, in 1734, published The Analyst, an attack on the foundations of infinitesimal calculus with the wonderful subtitle, A DISCOURSE Addressed to an Infidel MATHEMATICIAN. WHEREIN It is examined whether the Object, Principles, and Inferences of the modern Analysis are more distinctly conceived, or more evidently deduced, than Religious Mysteries and Points of Faith. In the following memorable passage, Berkeley calls into question the nature and existence of infinitesimals (in what follows, ‘fluxions’ refers to a definition of Newton closely related to the derivative):

And what are these Fluxions? The Velocities of evanescent Increments? And what are these same evanescent Increments? They are neither finite Quantities nor Quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?

Bishop Berkeley

Newton, Leibniz, and like-minded mathematicians of course defended their use of infinitesimals. Leibniz explicitly justified their use in his Law of Continuity, expressed in a manuscript in 1701:

In any supposed continuous transition, ending in any terminus, it is permissible to institute a general reasoning, in which the final terminus may also be included.

And more plainly in a letter from 1702:

The rules of the finite are found to succeed in the infinite.

The debate over infinitesimals was undoubtedly a good thing for mathematics, as it inspired mathematicians to work to put calculus on rigorous foundations. This was largely done in the 19th century, as the actual infinities of infinitesimals were replaced by the potential infinities of limits, and the definition of the derivative settled into the form familiar to students of calculus ever since:

$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.$$
In calculating such a limit, one considers the behavior of $\frac{f(x+h) - f(x)}{h}$ as $h$ becomes arbitrarily close, but not equal, to $0$. Importantly, these values of $h$ are all standard real numbers and not infinitesimal. Infinitesimals themselves came to be seen as something like a useful but foundationally suspect fiction, a heuristic that helped lead to the great achievements of calculus but which had properly been discarded upon the rigorous founding of calculus in the language of limits.
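The limit can be watched at work numerically: as $h$ runs through ever-smaller standard real values, the difference quotient settles down toward the derivative without ever equaling it. A minimal sketch (names are my own):

```python
def diff_quotient(f, x, h):
    # The quotient (f(x + h) - f(x)) / h for a standard real h != 0.
    return (f(x + h) - f(x)) / h

f = lambda t: t ** 3  # f'(t) = 3 * t**2, so f'(2) = 12
for h in (1.0, 0.1, 0.01, 0.001):
    print(h, diff_quotient(f, 2.0, h))
# The printed values approach 12 as h -> 0, but for this f no single
# non-zero real h gives 12 exactly (the quotient is 12 + 6h + h**2).
```

This is the potential-infinity picture: a process of approximation that can be carried as far as desired, with no completed infinitesimal step ever taken.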

This would be a natural ending point for our story. Indeed, this is how the story went for about 100 years, and calculus as taught in most schools today is hardly changed from its 19th century formulation. However, surprises awaited in the mid-20th century, when the use of infinitesimals in calculus was finally put on a rigorous foundation and Newton, Leibniz, and other proponents were, in a sense, vindicated. We’ll have that story in our next installment.