By Sebastian Hayes

If a certain quantity ‘has a limit’, one thing that is certain (if the statement is true) is that this limit cannot be exceeded: all the different definitions of ‘limit’ concur on this. However, the question of whether an increasing or decreasing quantity actually attains this limit (supposing it has one) is another matter altogether. In real life the question tends to be academic; the final destination (‘limit’) of an air trip to Paris is not, as it happens, Paris itself, since the Charles de Gaulle Airport is strictly speaking outside the city limits. On the other hand, the City airport of London is well and truly in London. But who cares about such finicky issues?

However, in mathematics the question of limits arose quite early on and the controversy is still going on today. The fundamental concept of Greek mathematics was ratio which originally had a clear-cut arithmetic meaning. If two quantities A and B were in the ratio of 4 to 7, this meant that there was some common unit which duplicated four times gave us A and duplicated seven times gave us B. The unit was obvious enough if we were comparing quantities of eggs but it was assumed that quantities of flour or even water could be compared in this way even though the ‘unit’ was not immediately obvious and might need to be defined e.g. in terms of cupfuls or basinfuls. Note, however, that the quantities being compared were of the same kind ─ it was not obvious that unlike quantities could be meaningfully compared in such a way.
Originally Greek mathematics seems to have been arithmetic rather than geometrical ─ although Pythagoras is remembered today for the geometric theorem named after him, his greatest achievement at the time was probably his intuition that sounds could be compared numerically, hence the terms ‘fifths’ and ‘fourths’ we still use today. But very soon geometry turned up a seemingly insurmountable problem. Clearly, the diagonal of a unit square was ‘of the same kind’ as the side ─ both were line segments composed of atoms ─ but apparently the diagonal did not share a common unit with the side! Today, we say that the diagonal is an ‘irrational’ number, namely √2, but the Greeks didn’t put it like this: they said the diagonal and the side were ‘incommensurable’, i.e. lacked a common unit of measurement.

Once the geometry of circles and triangles took off, there was the serious question of the relation of an arc of a circle to the radius. Curved lines were seemingly ‘incommensurable’ with straight lines, but one could hardly do much geometry, let alone engineering, without comparing the two. A way of proceeding was found by extending the meaning of ratio to figures of slightly different types.

“The question, ‘What is the area of a circle?’ would have had no meaning to the Greek geometers. But the query, ‘What is the ratio of the areas of two circles?’ would have been a legitimate one, and the answer would have been expressed geometrically: ‘the same as that of squares constructed on the diameters of the circles.’” Boyer, The History of the Calculus, p. 32

This way of proceeding was reasonable enough but it meant that Greek mathematics in its official form (Euclid, Archimedes &c.) tended to be extremely long-winded. Archimedes does not actually give us the formula for the volume of a sphere; he simply states that “judging from the fact that any circle is equal [in area] to a triangle with base equal to the circumference and height equal to the radius of the circle, I apprehended that, in like manner, any sphere is equal to a cone with base equal to the surface of the sphere and height equal to the radius”. Note the phrase ‘in like manner’.
Because of what seems to modern authors to be squeamishness concerning irrationals, the Greek method of exhaustion, though very similar in approach to the Integral Calculus, is not identical with it. Archimedes and others used the method of inscribed and circumscribed polygons to pin down the area of a circle and thus get out a numerical value for π. But the Greeks did not view the area of the circle as the ‘limit’ of this process because this would imply that the areas of two dissimilar figures, circle and polygon, could ‘ultimately’ be the same.

Newton found himself up against exactly the same problem when he came to write the Principia but, since his main concern was working out the orbits of heavenly bodies (rather than just doing fancy pure mathematics), he needed a more decisive approach than was offered by Greek mathematics. Immediately after the Axioms, Newton has a Section entirely devoted to the question of limits and he kicks off with lemma I:

“Quantities, and the ratios of quantities, which in any finite time converge continually to equality, and before the end of that time approach nearer to each other than by any given difference, become ultimately equal.”

Such a statement would have horrified a Greek or indeed a modern mathematician but Newton needs it, or believes he does, in order to get out various results concerning orbits.
The modern approach to limits is almost the exact opposite of Newton’s though also quite different from the Greek attitude. Modern analysis typically concerns itself with ‘infinite’ sequences which converge to a limit (or fail to do so), while neatly avoiding the vexed question of whether the sequence actually attains this limit. In this way it is possible to deal with functions that are not even defined at a particular point but which nonetheless ‘have a limit’ as they approach this point. For example the function (x² – 9)/(x – 3) is not defined at the point x = 3 because division by zero is not allowed. Nonetheless, such a function does have a limiting value (namely 6) as x → 3 either from below or above, just so long as x is not exactly 3. Similarly, the hyperbola y = 1/x, although to all intents and purposes it is zero for very large x, does not actually attain the limit zero.
But how do we know that zero is the limit for 1/x? Because of the way a limit is defined in modern mathematics. The precise ε, δ definition is rather finicky but the basic idea is that of a challenge between two persons. Person A claims that such and such a function has a limit l. Person B says, “In that case, for any non-zero margin I name, you must give me an x (or other independent variable) such that f(x) for your value of x is closer to the limit l than that margin. Moreover, you must show that all subsequent x’s will also fall within this margin.”

In some cases, it is very easy to pick up the challenge. For example, if I claim that the limit to 1/x as x increases without bound is zero, my challenger will say, “I want a margin of error smaller than 1/10⁴.” That is easy enough since I only have to choose a number > 10⁴ and 1/x and all subsequent values will differ from zero by less than 1/10⁴. Moreover, whatever number my challenger gives me, I can produce an x that lies inside the margin. Therefore I win. But it is often not at all obvious whether certain algebraic expressions do have limits or not, and even when there are good reasons to believe they do have limits, we are unable to say exactly what this limit is.
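The challenge game for 1/x can be sketched in a few lines of code (an illustrative sketch of my own, not part of the original argument; the function name respond_to_challenge is invented for the purpose):

```python
import math

def respond_to_challenge(epsilon):
    """Given any margin epsilon > 0, exhibit an N such that
    |1/x - 0| < epsilon for every x greater than N."""
    return math.ceil(1 / epsilon)

# The challenger demands a margin of error smaller than 1/10^4 ...
N = respond_to_challenge(1 / 10**4)
# ... and every subsequent x does indeed fall inside that margin.
assert all(abs(1 / x) < 1 / 10**4 for x in range(N + 1, N + 10000))
```

Whatever margin the challenger names, the same one-line response wins the game, which is all the modern definition requires.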

Newton is inconsistent or, if you like, opportunistic in his use of limits. The trouble with Calculus is, of course, that it uses limits either implicitly or explicitly all the time: the derivative is itself a limit since, taking y = x² as an example, it is the ratio

dy/dx = ((x + dx)² – x²)/dx = 2x + dx

The limiting value of the R.H.S. is obviously 2x since we can make it as close to 2x as required by simply diminishing dx. It is tempting to simply set dx at zero and have done with it but this gets us into trouble on the L.H.S. since one is not allowed to divide by zero. What one wants to do is simultaneously to let dx go to zero on one side but not on the other. But this is hardly consistent.
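The predicament is easy to exhibit numerically (a small sketch of my own, using y = x² whose difference ratio is 2x + dx):

```python
def difference_quotient(x, dx):
    """The ratio ((x + dx)**2 - x**2) / dx, which equals 2x + dx
    exactly -- so long as dx is not zero."""
    return ((x + dx) ** 2 - x ** 2) / dx

# Diminishing dx brings the quotient as close to 2x as required,
# but dx itself may never actually be set to zero.
x = 5.0
for dx in (1.0, 0.1, 0.001):
    print(dx, difference_quotient(x, dx))   # approaches 2x = 10
```

The printed values creep toward 10 as dx shrinks, yet putting dx = 0 raises a division error: the code runs into exactly the inconsistency described above.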

Newton, unlike Leibniz, is aware of the logical difficulty but never quite manages to dispose of it satisfactorily. On one and the same page he says “There is a limit which the velocity at the end of the motion may attain, but not exceed” and a little further on he speaks of “limits to which the ratios of quantities decreasing without limit do converge…but never go beyond, nor in effect attain to till the quantities are diminished ad infinitum…” In effect, as Bishop Berkeley pointed out in Newton’s own time, the idea that a body has an ‘instantaneous velocity’ when at a particular point is nonsensical since when it is actually at such a point it is, by definition, at rest. Modern Calculus, while freeing the subject from glaring inconsistency, has also succeeded in removing it from the domain of physical reality which gave rise to it in the first place.
Newton, who was a natural philosopher first and a pure mathematician second, would most likely have regarded the modern analytic definition of a limit, originally due to Weierstrass, as a fudge. The key proposition concerning limits in the Principia is lemma I in Book I Section I:

“Quantities, and the ratio of quantities, which in any finite time converge continually to equality, and before the end of that time approach nearer to each other than by any given finite difference, become ultimately equal”.
Newton’s ‘proof’ is brief and to the point: “If you deny it, suppose them to be ultimately unequal, and let D be their ultimate difference. Therefore, they cannot approach nearer to equality than by that given difference D; which is contrary to the supposition.”

This is admirably frank: Newton is, as it were, putting his hands above the table and showing that he has nothing in them. Certainly, he needs such a proposition and appeals to it implicitly or explicitly all the time. But the question every enquiring mechanics student wants to pose is: “Is the ‘limit’ of a convergent sequence actually attained?” The modern definition of a limit artfully avoids the problem since the phrase “approaches nearer than any finite quantity” applies equally well to a zero or a non-zero difference (provided we can always exhibit such a difference when challenged). This is mathematically speaking perfectly satisfactory ─ but not physically speaking.

Newton goes on to assert (lemma VII): “I say that the ultimate ratio of the arc, chord, and tangent, any one to any other, is the ratio of equality”.

This stretches credulity a shade too far but follows logically enough if you accept lemma (I).

To my mind, if you are a realist, you either have to take on board Newton’s Lemma with all that it entails or propose a counter-Lemma along the following lines:

“Quantities, and the ratio of quantities, which converge to equality do not ordinarily approach nearer to each other than a particular given difference, and thus do not become ultimately equal.”

This Lemma would itself rely on a ‘finitist’ Axiom such as the following:

“Every quantity such as length, time, force &c. has a minimum value which cannot be further diminished such as, for example, a smallest possible unit of length, the stralda, or the shortest possible interval of time, the ksana.”

This Axiom has its own problems but it enables one to avoid infinite regress and is surely more reasonable than belief in the ‘infinite divisibility of space and time’. Some contemporary physicists suggest that ‘space-time’ is ‘grainy’ ─ though few people have worked out the considerable conceptual and physical consequences of such an approach. SH 18/8/19

To recapitulate, the Number Conservation Principle combines two subsidiary Principles, the Disordering Principle and the Principle of Replacement:

Disordering Principle: The numerical status/cardinal number of a collection is not changed by rearrangement of the collection so long as no object is created or destroyed.

Principle of Replacement: The numerical status/cardinal number of a collection is not changed if each individual object of the collection is replaced by a different individual object.

Taken together they give us Cantor’s characterisation of cardinal number as based on a ‘Double Negation’ ─
“We will call by the name ‘power’ or ‘cardinal number’ of M the general concept which, by means of our active faculty of thought, arises from the aggregate M when we make abstraction of the nature of its various elements m and the order in which they are given.”

For the moment I am not so much concerned about what cardinal number is ─ Cantor says it is a ‘general concept’ ─ as to what one needs to do (or to have) in order to create a number system, either individually or as a society. Cantor gives us a definition rather than a cognitive procedure or strategy, but his ‘definition’ is an extremely enlightening one. He is absolutely right to emphasize the ‘negative’ aspects of the path towards number; as Piaget puts it memorably, “Number results from an ignoring of differential characteristics”. It is almost as if we are invited to throw away as much information as possible and see what we are left with; in particular we must discard all distinction by type such as colour, weight, distance, kinship, gender &c. &c. The reason why ‘primitive’ peoples were so often reluctant to acquire numerical skills from missionaries or explorers was because they, rightly, sensed it would probably weaken their sensitivity to type distinctions, deemed to be more important.
To speak, as I do, of a number conservation principle makes arithmetic sound more like a natural science than a logical system or an exercise of the creative imagination, but this is intentional. Carl Friedrich Gauss is reported as saying, “Mathematics is the queen of the sciences and number theory is the queen of mathematics”. But how can this be since all the sciences typically begin with observation and are subject to incessant and rigorous reality testing, whereas mathematics is supposed to be a self-contained logico-deductive system? But I believe Gauss was right: arithmetic is not just an indispensable aid to physics, it actually is physics, the most basic type of physics. Specific numerical properties of whole numbers, such as primality, divisibility and so forth, do not exist because of abstract rules laid down by 20th century mathematicians: they are given once and for all by Nature. A dishevelled heap of stones of approximately the same size either can be arranged as a rectangle or it cannot, and that is that. If I have a set of bags that will hold 7 pebbles of standard size, no more and no less, a mound of such pebbles can either be bagged up using these bags without any remainder, or it cannot be. The man-made rules for manipulating numbers are so designed as to lead to a system that mimics Nature and so can inform us about features of the physical world including those that are neither obvious nor readily observable. Number theory, as Gauss claimed, qualifies as the ‘queen’ of the sciences because the vast majority of its findings are not ‘approximately true’, ‘statistically true’, like practically all the propositions of physics, but are completely true, 100% true, and even, arguably, ‘true in all possible worlds’1.
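These claims about heaps and bags are directly checkable by exhaustive trial (a minimal sketch of my own; the bag size of 7 follows the text, the function names are invented):

```python
def rectangle_sides(n):
    """All ways n pebbles can be laid out as a proper rectangle,
    i.e. with both sides greater than the unit."""
    return [(a, n // a) for a in range(2, n) if n % a == 0 and n // a > 1]

def bags_without_remainder(n, bag_size=7):
    """Can a mound of n standard pebbles be bagged up in bags
    of bag_size with nothing left over?"""
    return n % bag_size == 0

# A heap of 12 can be arranged as a rectangle; a heap of 13 cannot.
print(rectangle_sides(12))   # [(2, 6), (3, 4), (4, 3), (6, 2)]
print(rectangle_sides(13))   # []
print(bags_without_remainder(21), bags_without_remainder(22))   # True False
```

The point is that the program merely reports what the heap can or cannot do; the property belongs to the collection, not to the code.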
An empirical/imitative theory of arithmetic downplays the role of man-made rules (since they are subsidiary) but, in return, finds itself obliged to appeal to notions such as ‘inherent capacity’ or ‘potential’. A given collection of objects, by its very existence, ‘has the capability’ of being arranged or divided up in a particular manner, whether we know this to be the case or not. A heap of stones, for example, can either be evenly arranged in two matching rows or columns, or it cannot be. And this ‘property’ is surely ‘out there’ in the real world, not in here in my mind. Such a middle course avoids the Scylla of mathematical Platonism and the Charybdis of Formalism. For this so-called ‘inherent capacity’ or ‘potential’ is not something that transcends the world of sense or is ‘eternally true’ but nonetheless goes beyond what is immediately observable2.

One can in point of fact envisage the Number Conservation Principle as a special case of the physical Space/Time Homogeneity Principle:
“The homogeneity of space implies that the same experiment carried out at different places on the earth (for example) gives the same result. The homogeneity of time means that all instants of time are physically equivalent e.g. the laws of buoyancy discovered by Archimedes many centuries ago can be reproduced today by recreating appropriate conditions for the observation. If these seemingly obvious properties had not existed, it would have been meaningless to carry out scientific observations. The absence of homogeneity of space would mean that the laws of physics would be different at different places and the lack of homogeneity of time would, in turn, have implied that the laws discovered today would not be valid tomorrow.” Saxena, Principles of Modern Physics p. 2.2

The Space/Time Homogeneity Principle (STHP) is undoubtedly required for the practice of science as we understand it today3, and the lack of it was precisely what held back ‘primitive’ societies from developing the natural sciences. From a magical perspective time and place are not irrelevant, not neutral, hence the importance given to performing rituals ‘at the right time of day’ (or year) and in the ‘proper place’ (temple, sacred site, altar &c.).
We obviously need STHP in arithmetic as well. A numerical experiment which arranges a collection N as a rectangle with side c implies that N can always be arranged in this way, i.e. if some feature of a collection has been shown to be numerically true once, it is always going to be true. This is a very strong result. And the truth involved is not an empty logical truth like “All bachelors are unmarried males” but is a generalization from experience that is more often than not impossible to doubt4.

The Number Conservation Principle, moreover, does not seem to have any possible exceptions ─ provided only that we are dealing with collections of discrete objects that do not fuse or merge when brought into close proximity. So it is, if anything, more fundamental than the Space/Time Homogeneity Principle. The strictly numerical properties of a heap of pebbles are not changed by putting them in a suitcase and carting them around in a jet plane, even loading them onto an accelerating rocket bound for the Moon. Short of turning to dust, such a group of pebbles retains the same divisibility properties even if it was originally buried in a tomb in Egypt thousands of years ago. This shows how fundamental strictly numerical physical properties are5.
Moreover, the apparently innocuous Number Conservation Principle has certain important consequences both general and specific. The general or ‘metaphysical’ consequence is that, as stated, it allows us to advantageously sidestep both Platonism, which makes too much of mathematics, and Formalism which makes too little. Simply by bringing into existence more and more discrete objects, Nature is manufacturing ‘concrete numbers’ that really do have certain ascertainable properties, such as being prime or rectangular or triangular, even though Nature hasn’t the faintest idea of what it is doing. These ‘potentialities’ are, so I claim, out there in the world, not in my head or in a Platonic Wonderland. The rules we invent pertaining to numbers and their manipulation are thus not free creations of the human mind: they are constrained creations if they are to model accurately what actually is, or can be, the case. And the main reason why consistency is rightly prized in a mathematical model is because Nature actually does seem to be remarkably consistent, at least at the macroscopic level.
It is true that, ultimately, we are probably going to have to take on board certain ‘properties’ without which the whole system (system of Nature, not mathematics) would collapse. Such properties by rights ought to be the most elementary physical facts ascertainable, or if not, properties that clearly underlie the physical facts and point to them, as it were. They should inform us about how discrete objects combine together, form certain admissible configurations and not others for down to earth physical reasons. Basic mathematics should (and does) show us how and why certain fragments can, or cannot, be made to fit neatly together. Euclid begins the first of his three books on Number Theory with the so-called Euclidian Algorithm, an infallible procedure for distinguishing between numbers that are relatively prime (have no common factor except the unit) and those which do have a common factor greater than the unit. And this procedure at one and the same time tells us whether a collection can, or cannot be made into a proper rectangle, i.e. one with a side greater than the unit6.
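Euclid's procedure survives almost verbatim in modern code. The sketch below (my own; Euclid states the algorithm in terms of repeated subtraction, the remainder form used here is equivalent) distinguishes relatively prime pairs from pairs with a common factor greater than the unit:

```python
def euclid_gcd(a, b):
    """Repeatedly see how many times the smaller quantity 'goes into'
    the larger, keeping only the remainder, until nothing is left;
    the last non-zero remainder is the greatest common factor."""
    while b != 0:
        a, b = b, a % b
    return a

def relatively_prime(a, b):
    """True when a and b share no common factor except the unit."""
    return euclid_gcd(a, b) == 1

print(euclid_gcd(35, 21))        # 7
print(relatively_prime(35, 12))  # True
```

Note that the procedure works on bare quantities, with no appeal to any particular base or written notation, which is consistent with the suggestion in note 6 that it is very ancient.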
It would seem that Euclid (or rather his unknown predecessors) were on to something. Even though the distinction between line numbers (= our primes) and rectangular numbers (= our composites) does not at first sight seem particularly significant, primality (or not) has enormous consequences in practice, so much so that it would hardly be going too far to say the distinction is as it were, ‘hard-wired’ into the universe7.
The Number Conservation Principle can be used as a step towards establishing Unique Prime Factorisation. But, first, it is necessary to say more about the alleged ‘numerical properties’ of heaps of pebbles, or any collections of discrete objects. There are two types of such properties, external or exogenous and internal or endogenous properties, to use scientific jargon. The chief exogenous property is the capability a collection has of being joined up to another collection of discrete objects, giving us the operation we call addition. Then there is the capability of a given collection of objects to be copied or replicated as many times as we desire, giving us the operation of multiplication.

The chief and in a sense only endogenous property of a collection of discrete objects is its capability of being divided up into numerically equivalent sub-collections with or without remainder8. Now, the Number Conservation Principle (NCP) states that the divisibility properties of a collection, such as having a possible rectangular side greater than the unit (or not having one), are not destroyed, or miraculously brought into existence, by laying out the items of a collection in a particular way, then disarranging them and then laying them out in a different way.

In other words, if N = ♦♦♦♦♦♦ can be arranged as a rectangle
♦♦♦
♦♦♦
the collection N is not going to lose this ability.
And, alternatively, the collection M = ♦♦♦♦♦♦♦ which, try as I will, I find cannot be laid out as a true rectangle, is not going to miraculously gain this ability after I disarrange it, try again two hours later and, to my surprise, succeed.
Now, suppose we have a rectangle N = p × q where p, q are line numbers (primes) and we reduce this rectangle to a structureless heap, and then try to arrange it again as a rectangle. We have not gained any new numerical properties such as a new possible side different from those we already know about, namely 1, p, q and (p × q). Therefore, if we are told that some non-unitary number labelled t is a possible side of N, we conclude immediately that t must be either p or q (or perhaps N itself). Why are we so sure of this? Because any other outcome would be a violation of the Number Conservation Principle. It is more or less in this manner that Euclid establishes Unique Prime Factorisation with a little help from earlier theorems.
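For small numbers the reasoning can be spelled out exhaustively (an illustrative sketch of my own; p = 5, q = 7 are arbitrary choices):

```python
def possible_sides(n):
    """Every t that can serve as the side of a rectangular
    arrangement of n objects (the strip counts, with side 1)."""
    return [t for t in range(1, n + 1) if n % t == 0]

# For N = p * q with p and q prime, disarranging and rearranging
# the heap can never produce a side other than 1, p, q or N itself.
p, q = 5, 7
assert possible_sides(p * q) == [1, p, q, p * q]
```

Exhaustive search confirms what the NCP asserts in general: no amount of shuffling conjures up a new side for the collection.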

One would, however, like to go a little further than the bare NCP in order to justify including certain cases and excluding others that look at first sight as if they violate the factorisation rule. To this end, I suggest the following Corollary to the Number Conservation Principle :

Corollary: If a rectangular arrangement cannot be transformed into a different rectangular arrangement directly, then no other rectangular arrangement (apart from the strip) is possible.

This distinguishes between collections such as A and B

A = ♦♦♦♦♦♦♦♦♦♦♦♦♦

B = ♦♦♦♦♦♦♦♦♦
♦♦♦♦♦♦♦♦♦
♦♦♦♦♦♦♦♦♦
♦♦♦♦♦♦♦♦♦

since B can be immediately rearranged as

C = ♦♦♦♦♦♦
♦♦♦♦♦♦
♦♦♦♦♦♦
♦♦♦♦♦♦
♦♦♦♦♦♦
♦♦♦♦♦♦
whereas A cannot be re-arranged as a different (proper) rectangle.

Why the difference? Because B has sides that can be broken up into numerically equivalent non-unitary sub-collections while A can only be broken up into units. But why should this matter? Because the ‘sub-division’ of a side, provided it is repeated so many times exactly, can become the side of a new rectangle. There is, then, a sort of two-way equivalence in the ‘capability of being divided up into equal sub-collections’ and the ‘ability to become a side of a rectangle’. This equivalence is natural enough, but should perhaps be introduced as a Postulate. Once accepted, it can be used again and again. SH
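The Corollary and the suggested Postulate can be tried out mechanically (a sketch of my own; the particular shapes chosen for the examples are illustrative):

```python
def direct_rearrangements(rows, cols):
    """Proper rectangles reachable 'directly': split one side into
    equal non-unitary sub-collections and let a sub-collection
    become the side of a new rectangle."""
    n = rows * cols
    shapes = set()
    for side in (rows, cols):
        for d in range(2, side):        # non-unitary proper divisors of a side
            if side % d == 0:
                shapes.add((d, n // d))
    return sorted(shapes)

# A 3 x 5 rectangle (both sides prime) reaches no new rectangle directly ...
assert direct_rearrangements(3, 5) == []
# ... whereas a 4 x 9 rectangle immediately yields 2 x 18 and 3 x 12.
assert direct_rearrangements(4, 9) == [(2, 18), (3, 12)]
```

The design choice mirrors the Postulate exactly: a sub-division of a side is allowed to become a side in its own right, and nothing else counts as a ‘direct’ transformation.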

Notes

1 The claim that “The sum of consecutive odd numbers starting at 1 can be arranged as a square” (e.g. 1 + 3 + 5 = 9 = 3²) is not just ‘statistically true’ but seemingly admits of no possible exceptions. Yet this claim is telling me something about the external world that is far from obvious. Very few physical claims today are made with this sort of confidence. Even such an ‘obvious’ truth as “Heat always flows from a warmer to a cooler body and not vice-versa” is today said to admit of very unlikely but nonetheless possible exceptions. In modern parlance, the flow of heat from a cooler body to a warmer, though highly improbable, is not ‘forbidden by the laws of physics’.

2 It is much to be regretted that Aristotle, who was an empiricist and seems to have known at least as much mathematics as Plato, never had anything like the influence on the evolution of Western mathematics that Plato had.

3 One could argue that the Space/Time Homogeneity Principle is not strictly true: carrying out an experiment such as Foucault’s Pendulum (where the results vary with latitude) near the equator would not give the same results as carrying it out near the north pole. And it is today thought that certain constants, such as Hubble’s Constant or even the Constant of Universal Gravitation, may well have varied over time. However, one understands what is intended: other things being equal, one position in ‘space’ is as good as another, one moment as propitious as another. The Space/Time Homogeneity Principle follows directly from the modern (post-Copernican) belief (or dogma) that there are no ‘special spots’ from which to view the unfolding universe.

4 Descartes’ criterion of ‘inconceivability’, though it has completely gone out of fashion, strikes me as valid despite its subjectivity. The Cartesians rejected Newton’s theory of gravitation because they considered attraction at a distance to be ‘inconceivable’ without a mechanism that Newton was unable to provide. This feature remained a fundamental weakness of Newton’s world-view and it was precisely concern about this point that ultimately led to Einstein’s relativistic schema that replaced it.

5 Only if we have before us a continuous and continuously varying flux in which no discrete elements are ever permitted to emerge even for an instant, would the Number Conservation Principle not apply. And even in this case, it would not be violated: the conditions necessary for its meaningfulness would not exist (since there would be no discrete objects).

6 The Euclidian Algorithm looks to me as if it is extremely ancient, pre-dating the golden age of Greek mathematics and maybe even the Pythagoreans. For note that:
(1) the procedure, which consists in seeing whether such and such a quantity ‘goes into’ another so many times without or with a remainder, makes much more sense envisaged in terms of bundles of beans or pebbles than in terms of the lengths of line segments (as Euclid gives it); and
(2) it is a numerical procedure which is not restricted to any particular ‘base’, thus quite possibly pre-dating the invention of numerical bases, or for that matter written arithmetic.

7 One is sometimes tempted to consider Unique Prime Factorisation not as a theorem (which requires proof) but as a basic ‘given’, i.e. as an axiom.

8 The essential operation of arithmetic is, so I maintain, not addition but division. Arithmetic was, so I would guess, originally developed for the purpose of sharing out a band’s resources, especially food, and doing so equally. Hunter-gatherers seem to have been extremely egalitarian which is one reason, along with their limited amount of possessions, why they didn’t need much arithmetic. Egyptian temple officials, however, were, in the earliest times, paid in beer and bread because money (gold, jewels &c.) was used only for state transactions with the exterior. It is surely no accident that the Egyptians invented fractions and not only that, their scribes were, to judge by the problems covered in extant papyri, literally obsessed with fractions. Furthermore, Egyptian society was hierarchical, so there was an extended need for, and concern with, unequal division which obviously requires a much more complex arithmetic. The curious use of a string of ‘unit fractions’ (fractions with a unit numerator) to express quantities such as 4/7 or 6/19 is perhaps a relic from an even earlier era when everyone was remunerated in the same way.
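A string of unit fractions for quantities such as 4/7 or 6/19 can be produced by the greedy method below (a sketch of my own; this is Fibonacci's much later procedure, offered purely as an illustration and not as the scribes' actual technique):

```python
from fractions import Fraction
from math import ceil

def unit_fractions(p, q):
    """Express p/q (with 0 < p < q) as a sum of distinct unit fractions:
    at each step take the largest unit fraction not exceeding what remains."""
    rest, denominators = Fraction(p, q), []
    while rest > 0:
        d = ceil(1 / rest)             # smallest d with 1/d <= rest
        denominators.append(d)
        rest -= Fraction(1, d)
    return denominators

print(unit_fractions(4, 7))    # [2, 14]  i.e. 4/7 = 1/2 + 1/14
print(unit_fractions(6, 19))   # [4, 16, 304]
```

The greedy method always terminates, though the Egyptian tables often record shorter or more convenient decompositions than the ones it finds.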

Most people do not realize that Euclid devoted three books (VII – IX) of his Elements to Number Theory and established the fundamental theorems and procedures which are still in use today. The misconception that the Elements deals exclusively with geometry is understandable because Euclid’s presentation of ‘numbers’ as continuous line segments makes these books barely distinguishable from the books dealing exclusively with geometry. It would seem that there was an earlier Pythagorean Number Theory which envisaged numbers as discrete entities and even employed actual objects such as pebbles or beans to illustrate basic arithmetic properties. This is why we still talk of ‘square’ numbers, also ‘cubes’, though we have lost the vital distinction between ‘line’ numbers and ‘rectangular’ numbers — line numbers are numbers that can only be arranged in a line, i.e. our primes. The discovery of ‘incommensurables’, geometrical quantities such as the diagonal of a unit square which could not be ‘measured’ exactly by the rational numbers, opened up a rift between geometrical and numerical quantity which has never been satisfactorily bridged to this very day. Partly because of the influence of Plato, geometry and the study of continuous quantity displaced the study of the discrete: from the 4th century BC onwards ‘higher’ mathematics was almost entirely geometrical and was what the future philosopher needed to study, whereas mere calculation was “the affair of craftsmen and tradesmen”, as Plato put it.
Today, of course, algebra has itself taken the place of geometry and mathematics has become something entirely removed from the contagion of the physical world, “a game played according to fixed rules with meaningless pieces of paper” as Hilbert famously described it. Mathematicians today shun the contagion of the real and indeed remind one of the early Christians who found it shameful to be born with a body. This, of course, is why mathematics today only appeals to professionals while lay people generally regard it with fear and loathing. I myself do not think it is either healthy or fruitful to remove oneself entirely from the delights and constraints of the actual world, which is why I have formed the ambitious plan of evolving a concrete number theory which founds arithmetic on the possibility of actual operations with standard objects. It is completely ridiculous to introduce numbers, which we are all familiar with almost from the cradle, via the abstract axioms of fields and rings, axioms which Newton and Euler and Gauss did not know (since they were only elaborated in the late 19th century). Arithmetic and the greater part of mathematics still relies on the natural numbers, i.e. the positive whole numbers + zero, and these do not constitute a group or a ring or a field or any such fancy structure. My plan is not just to return to Euclid but to attempt to recover the even earlier, largely lost, number theory of the Pythagoreans on which Euclid drew, a pre-Platonic number theory rooted in the real.
Nonetheless, some attempt at rigour must be made. My aim is by and large to establish most of the important theorems of Euclid VII – IX without any appeal to geometry but nonetheless using the Euclidean framework of Axioms, Postulates, Definitions and Theorems (or Propositions as Heath calls them), with however a different emphasis. If a theorem or proposition is true, this means that such and such operations can actually be performed or that certain arrangements are actually observable in nature, or could be, given very reasonable expectations or technical extensions of our senses.
Where to start? The best definition of (cardinal) number is that given by Cantor, of all people: “We will call by the name ‘power’ or ‘cardinal number’ of a set M the general concept which, by means of our active faculty of thought, arises from the aggregate M when we make abstraction of the nature of its various elements m and the order in which they are given.” Since my approach is pragmatic, I do not intend to waste time on the sterile question of whether ‘number’ is a human concept (as Cantor implies) or is embedded in nature. What Cantor tells us is what we need to do to elaborate a workable number system, any number system worthy of the name no matter how primitive. As he points out, there are two principles involved: the Disordering Principle and the Principle of Replacement.

Disordering Principle: The numerical status/cardinal number of a collection is not changed by rearrangement so long as no object is created or destroyed.

Principle of Replacement: The numerical status/cardinal number of a collection is not changed if each individual object is replaced by a different individual object.

Together these two principles make up a sort of Number Conservation Principle since whatever ‘cardinal number’ is, this ‘something’ persists throughout all the drastic changes a set of discrete objects may undergo (provided no object is destroyed) ― just as, allegedly, a given amount of mass/energy persists throughout the various complicated interactions between molecules within a closed system.
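Both principles can be checked by nothing more than pairing off against a chosen standard set, never counting. Here is a minimal Python sketch (the function name `pair_off` and the sample objects are my own invention, not part of any formal system):

```python
import random

def pair_off(a, b):
    """Pair the items of two collections one for one, without ever counting.
    Returns 'equal' if the pairing exhausts both collections together."""
    a, b = list(a), list(b)
    while a and b:        # one pass = one pairing: remove one item from each
        a.pop()
        b.pop()
    if not a and not b:
        return "equal"
    return "first smaller" if not a else "first greater"

standard = ["|", "|", "|", "|", "|"]       # a chosen standard set of tally marks
beans = ["bean-a", "bean-b", "bean-c", "bean-d", "bean-e"]
assert pair_off(beans, standard) == "equal"

# Disordering Principle: rearrangement changes nothing
random.shuffle(beans)
assert pair_off(beans, standard) == "equal"

# Principle of Replacement: swap every bean for a quite different object
pebbles = ["pebble-" + bean for bean in beans]
assert pair_off(pebbles, standard) == "equal"
```

The point of the sketch is that `pair_off` never consults a numeral: whatever ‘cardinal number’ is, it survives both the shuffle and the wholesale replacement.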
It is not clear whether these two principles should be viewed as Definitions i.e. they tell you what we mean by the term ‘cardinal number’, or as Postulates since, after all, they are generalisations based on actual or imaginable experiments (the pairing off of random sets of objects with a chosen standard set). They are not ‘logical truths’ and not strictly speaking ‘common notions’ (Heath’s term for axioms).
We also need a third Principle, the Principle of Correspondence. It has a somewhat different status and is more like a true Axiom, i.e. something which we have to take for granted to get started but which is not directly culled from experience.

Principle of Correspondence: Whatever is found to be numerically the case with respect to a particular set A will also be numerically the case for any set B that can be put in one-one correspondence with it. By ‘what is numerically the case’ I mean such a property as divisibility, which clearly has nothing to do with colour, size, weight and so forth. We certainly do assume the Principle of Correspondence all the time, since otherwise we would not gaily use the same rules of arithmetic when dealing with apples, baboons or stars: indeed, without it there would not be a proper science of arithmetic at all, merely ad hoc rules of thumb. But, though the Principle of Correspondence is justified by experience, I am not so sure that it originates there: it is such a sweeping assertion that it is more appropriate to call it an Axiom than anything else. It is certainly not testable in the way a scientific claim is, and not really a definition either.
I pass now to the basic Definitions which in Euclid precede the Postulates and Axioms — as a matter of fact Book VII does not contain any specific postulates or axioms for number theory though, as Heath notes, some are needed.

DEFINITIONS

Object: Continuous solid.

Discrete object: object surrounded by unoccupied space.

Unit: Chosen standard discrete object that cannot be split.

Collection. Plurality of units in close proximity to one another but which do not merge or adhere to one another.

Number: Collection in the above sense, or the unit itself.

Numerical equivalence: If a collection A can be made to cover B completely unit for unit with nothing left over, it is said to be numerically equivalent to B, or we write A =n= B.

Smaller than: If a collection A fails to cover B unit for unit, it is said to be less than B, and we write A < B.

Greater than: If a collection A covers B unit for unit, with at least one unit left over, it is said to be greater than B, and we write A > B.

Joining: If a collection A is pushed up against B so that they form a single collection C, we say that A is joined to B producing C and write (A + B) → C, or C = (A + B).

Subtracting: If, from a non-unitary collection A, a smaller collection B (which may be unitary or multi-unitary) is taken aside leaving a collection C, we say that B has been subtracted from A, and write (A – B) → C and C = (A – B).

Replication: If a flat (two-dimensional) collection A is covered exactly unit for unit by a collection B, then B, once removed, is a replica of A. And if A is a three-dimensional collection with base a and number of layers h, then if B is built up on a numerically equivalent base with equivalent height, B is a replica of A.

Reduction: If a collection B is separated out into so many numerically equal collections, possibly unitary ones, then any of these collections A is called a reduction of B. (Note. The same result would be obtained if we had a collection B laid out with base A and uniform height h, and then we ‘reduce’ it by compression to a single level. But since the units cannot, by definition, be squashed together, this operation cannot strictly speaking be performed.)

Rectangular Collection: Collection that can be arranged as so many numerically equivalent rows or columns.

Linear (or prime) collection: Collection that can only be arranged as a line, i.e. with the unit as side.
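These last two definitions are easy to mechanize. In the Python sketch below (function names are my own), a ‘side’ of a collection of n units is any row-length of an exact rectangular arrangement, i.e. a divisor of n, and a linear collection is one whose only sides are the unit and the whole collection itself:

```python
def possible_sides(n):
    """Row-lengths of every exact rectangular arrangement of n units."""
    return [d for d in range(1, n + 1) if n % d == 0]

def is_linear(n):
    """A linear (prime) collection can only be laid out as a line:
    its only possible sides are the unit and the collection itself."""
    return n > 1 and possible_sides(n) == [1, n]

assert is_linear(7)          # seven pebbles form a line and nothing else
assert not is_linear(12)     # twelve pebbles: 2 rows of 6, 3 rows of 4, ...
assert not is_linear(1)      # the unit itself is not treated as linear here
```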

Common Side: If collections A and B can be arranged as rectangles with a pair of numerically equivalent sides, they are said to have a common side. Alternatively, if A and B can both be fitted exactly between two parallel lines a units apart, where a > 1, they have a common side.

Relatively linear or prime to each other: If collections A and B have no possible side in common apart from the unit, they are said to be relatively prime and we write (A, B) = 1.

Common Multiple: If every possible side of A and B is a possible side of C, then C is said to be a Common Multiple of A and B.

Complete Common Multiple or Product: The complete common multiple of A and B is the rectangle with side A and side B.

Least Common Multiple: Smallest collection which is a Common Multiple of A and B.
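In modern terms a possible side is a divisor, so the Complete Common Multiple is the A-by-B rectangle and the Least Common Multiple is A × B divided by the greatest common side. A Python sketch of these definitions (the function names are mine, and `gcd` is borrowed from the standard library rather than built up from pairing):

```python
from math import gcd

def possible_sides(n):
    """Divisors of n: the sides of every exact rectangle n units can form."""
    return {d for d in range(1, n + 1) if n % d == 0}

def is_common_multiple(c, a, b):
    """Every possible side of a and of b must also be a possible side of c."""
    return (possible_sides(a) | possible_sides(b)) <= possible_sides(c)

a, b = 6, 4
complete = a * b                     # the a-by-b rectangle: always qualifies
least = a * b // gcd(a, b)           # the Least Common Multiple, here 12
assert is_common_multiple(complete, a, b)
assert is_common_multiple(least, a, b)
assert not is_common_multiple(least // 2, a, b)   # 6 lacks the side 4
assert gcd(9, 4) == 1                # (9, 4) = 1: no common side but the unit
```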

Result Equivalence: If, given some arrangement of units B, it is impossible to say which of two or more operations evolved it from an initial arrangement A, then the operations are result-equivalent. And if the final configuration is evolved from two different starting points using the same or different operations, then the two ‘starting-points + operations’ are result-equivalent.

Ratio Equivalence: If a pair of collections (A, B) can be transformed into (C, D) by replication and/or reduction alone, at all times keeping the levels the same, then we say that the ratio of A to B is the same as the ratio of C to D and write A:B = C:D.
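Since replication multiplies both members of a pair by the same factor and reduction divides both by the same factor, two pairs are ratio-equivalent exactly when they reduce to the same lowest-terms pair. A small Python sketch of the test (function names are mine):

```python
from math import gcd

def lowest_terms(a, b):
    """Reduce the pair (a, b) as far as possible: repeated 'reduction'."""
    g = gcd(a, b)
    return (a // g, b // g)

def ratio_equal(a, b, c, d):
    """A:B = C:D when one pair can reach the other by replication/reduction."""
    return lowest_terms(a, b) == lowest_terms(c, d)

assert ratio_equal(4, 6, 10, 15)       # both pairs reduce to (2, 3)
assert not ratio_equal(4, 6, 8, 10)    # (2, 3) against (4, 5)
```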

In the next Post I will detail the basic Postulates required for Concrete Number Theory and establish the first theorems. The more fundamental theorems, which are those most closely related to physical operations, will be singled out and named Basic Assertions. They are ‘proved’ by a combination of appeals to sense-impressions, the more direct the better, combined with some rational argument. But, whereas in ‘normal’ mathematics all arguments are strictly deductive, in Concrete Number Theory inductive arguments are perfectly valid provided (1) a concrete example is given and (2) a possible generalization to cover all, or a multitude of, cases seems to have nothing going against it. Inductive ‘logic’ lacks the finality of deductive logic but it has the advantage that it can never degenerate into complete meaninglessness. One cannot have one’s cake and eat it: the inductive approach does mean that at least you taste a morsel even if you are not entirely sure that you will get hold of the entire cake. SH

In Lecture 7 of the excellent DVD course Great Thinkers, Great Theorems (The Great Courses No. 1471) Professor Dunham of Muhlenberg College discusses Archimedes’ Measurement of a Circle. Archimedes’ first Proposition in effect gives us the formula for the area of a circle ─ though this is not quite how Archimedes puts it. He writes: “The area of any circle is equal to [that of] a right-angled triangle in which one of the sides about the right angle is equal to the radius and the other to the circumference of the circle.” This is ancient Greece, more particularly the ancient Greek ex-colony of Syracuse, and Archimedes had no algebra or modern notation ─ these innovations had to wait for Descartes to be born a millennium and a half later. Historians of mathematics never tire of telling us that the ‘Method of Exhaustion’, which Archimedes inherited from Eudoxus, is an unwieldy and clumsy tool compared to Calculus ─ which, up to a point, it is. Nonetheless, Archimedes’ approach has several advantages over the contemporary textbook one. For a start, Archimedes’ proof makes sense, is intuitively acceptable and thus far more accessible to the intelligent layman, even to the intelligent child ─ the older Victorian style textbooks invariably used simplified ‘exhaustion’ arguments and that’s how I learned my basic geometry. In comparison, the algebraic Calculus proof, though shorter, is tiresome in the extreme and doesn’t seem to have much to do with real circles and areas ─ it makes one think of an immensely capable stage magician performing a series of tricks.

To Archimedes then. First off, Archimedes does not turn the area of a circle into an abstract algebraic function: he compares the circle to something that we can see and draw, namely a right angled triangle with known area ½ base × height, where the height is the radius of a circle (any circle) and the base is the same circle’s circumference straightened out. Area = ½ (r × C) = ½rC.
[Apologies but currently I have problems putting diagrams on this website, see later versions of this post.]
The proof is by double contradiction (as Professor Dunham puts it) and though this kind of proof is often rather artificial, in this case the reasoning is both impeccable and even entertaining. Archimedes says in effect, “Let us suppose my claim is wrong, and that the area of a circle is not equal to that of my triangle”. The triangle’s area might, for example, be less than the area of the circle. We now ‘exhaust’ the circle in the inimitable Greek manner by fitting regular polygons (many sided figures with all sides and matching angles equal) inside the circle. We can start with an equilateral triangle which fits exactly into the circle (how to do this is demonstrated in Euclid Book IV), we then double it to produce a hexagon (which, amazingly, has its six outer sides equal to the radius), double that to make a 12-sider, 24-sider, 48-sider &c. &c. All these doublings can actually be performed, using only straight edge and compass, by following Euclid’s instructions.
Clearly ─ because we can actually see it happening ─ the gap between the total area of the inscribed polygon and the area of the circle gets less each time. Next, very reasonably, Archimedes asks us to accept that, in principle at least, by constructing polygons with an ever growing number of sides we can make the difference between the area of the polygon and the area of the circle as small as we please. (Difficulties with computer graphics stop me giving too many diagrams at this point, see updated versions of this post.)
Now the polygon is made up of identical triangular sections where the ‘base’ is the outer side of each section and the ‘height’ is the length of the perpendicular from the centre to this base, technically known as the apothem. Note that the two other sides of the triangular section are radii and, important point, the radius is always greater than the apothem, since the apothem is the shortest route from the centre to the base. Now, every triangular section of the (regular) polygon is the same, so, if the polygon consists of 6 triangles, the total area will be 6 × (½ base × height) or, to put it another way, 6 bases × ½h. The length ‘6 bases’ is the length of the perimeter of the polygon, the distance you would have to go if you walked all round it. We now increase the number of sides to 12, 24, 48, 96… and so on. At each stage, what we end up computing to give the area of the polygon is the perimeter of the inscribed polygon multiplied by half the height, or ½h × P where P is the perimeter.
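The doubling procedure is easy to mimic numerically. The Python sketch below (modern floating-point arithmetic, unit radius, my own variable names ─ a check on the argument, not Archimedes’ method) starts from the inscribed hexagon, whose side equals the radius, and uses the standard chord-halving relation s → √(2 − √(4 − s²)) for the side of the polygon with twice as many sides:

```python
from math import sqrt, pi

n, s = 6, 1.0                 # inscribed hexagon in a unit circle: side = radius
areas = []
for _ in range(12):
    apothem = sqrt(1 - (s / 2) ** 2)        # perpendicular from centre to a side
    areas.append(0.5 * apothem * (n * s))   # area = ½ apothem × perimeter
    s = sqrt(2 - sqrt(4 - s * s))           # new side after doubling the sides
    n *= 2

assert all(x < y for x, y in zip(areas, areas[1:]))  # the areas only grow...
assert areas[-1] < pi                                # ...but never reach π·1²
assert pi - areas[-1] < 1e-4                         # yet come as close as we please
```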
What about the original right angled triangle? The area of Archimedes’ right angled triangle is ½ base × height = ½ C × r, where C is the circumference of the circle and r is the radius. Archimedes has supposed, for the sake of argument, that the area of the right angled triangle is less than the area of the circle. Now, since the area of the inscribed polygon can be made as close to the area of the circle as we wish, it should eventually be a better fit than the right angled triangle. So, according to this supposition, we would have the inequality: Area of triangle < Area of n-gon (for large n) < Area of circle. However, this can’t be right, because the height of the triangle is r, the radius of the circle, while the height of any triangular section of any inscribed n-gon is the apothem, which is always less than the radius. Also, the circumference of the circle, the base of the triangle, is clearly greater than the perimeter of any inscribed n-gon ─ because the length of a curve drawn between two points is always greater than that of the straight line joining them. So the area of every inscribed n-gon falls short of the area of the triangle.
We thus have a contradiction so we must reject the hypothesis that the area of the triangle < area of the circle.
Suppose, then, the area of the triangle is greater than the area of the circle. This time Archimedes constructs a series of regular polygons that fit neatly around the circle, i.e. the bases of each triangular section lie outside, not inside, the circle. The number of sides is again increased without limit, making the difference between the area of the circumscribed polygon and the area of the circle as small as we wish ─ so that eventually the polygon’s area should fall below that of the triangle. But this time the height of each triangular section is the full length of the radius, while the perimeter is undeniably greater than the circumference of the circle, which is the base of the triangle. The polygon’s area, ½ r × perimeter, is therefore always greater than the triangle’s area, ½ r × C, and can never fall below it. Again a contradiction, and the second hypothesis must be rejected.
Archimedes now delivers the coup de grâce. Since the area of the right angled triangle is neither smaller than the area of the circle nor greater than the area of the circle, the only possibility left is that it is exactly equal to the area of the circle. Which is what he wanted to prove.
But what is the area of the right angled triangle in modern terms?
It is ½ radius × Circumference, which in our terms (not his) comes to ½r (2πr) = πr² (since the ½ and the 2 cancel). Archimedes does not write π which, though a Greek letter, was not given its modern mathematical meaning until the 18th century of our era (by Euler). But neither does he think π. For the Greeks, curved and straight lines (along with certain pairs of straight lines) were incommensurable ─ they could not strictly be compared because they lacked a basic common ‘measure’. Archimedes restricts himself to saying that the ratio of the circumference of a circle to the diameter (our π) lies between two limits: it is less than (3 + 1/7) and greater than (3 + 10/71). This is not the same thing as equating this ratio of two lengths to the irrational number we know today as π ─ since the Greeks did not use irrational numbers, which for them were not true numbers at all. In effect, modern mathematics does (or at any rate appears to do) what the later Greeks knew to be impossible ─ it ‘squares the circle’, a figure that cannot be squared.
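Archimedes reached those limits from inscribed and circumscribed 96-gons. Using modern trigonometry (which he did not have ─ this is a verification, not a reconstruction of his method), the semi-perimeters of the two 96-gons for a unit radius fall neatly inside his published bounds:

```python
from math import sin, tan, pi

n = 96
inscribed = n * sin(pi / n)        # semi-perimeter of the inscribed 96-gon
circumscribed = n * tan(pi / n)    # semi-perimeter of the circumscribed 96-gon
lower, upper = 3 + 10 / 71, 3 + 1 / 7

# His bounds bracket both polygons, which in turn bracket the circle's ratio
assert lower < inscribed < pi < circumscribed < upper
```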
The only point in Archimedes’ excellent piece of reasoning that is somewhat questionable is the following. By making the circumference the base of his triangle, Archimedes is in effect straightening it out, something that is in practice impossible to do without falsifying its true length, if only by a little. Even when using a piece of string, the molecules are inevitably going to be more extended when measuring the circumference and more compressed when measuring a straight line. This incidentally is very much the sort of objection that already in ancient Greek times certain sophists and epicureans made against the new-fangled discipline of higher geometry so enthusiastically promoted by the rival school of Plato and his followers.

The Calculus derivation

The modern derivation of the formula for the area of a circle goes like this. Armed with a co-ordinate system (which Archimedes did not possess) we draw a circle centred at the origin and assess the ‘area under the curve’ ─ or rather the area of the first quadrant, which is all we need (because we eventually multiply it by 4). By Pythagoras, r² = x² + y², where r is the radius of the circle, so y = √(r² − x²). We want the area between the positive x axis and the upper part of the curve, i.e. the limits of the integral are x = 0 and x = r. To work out this definite integral you have to reverse the standard derivative of arcsin or sin⁻¹, a formula that is impossible to remember. The integral turns out to be
[½x√(r² − x²) + ½r² sin⁻¹(x/r)] taken between the limits x = r and x = 0.
This expression is guaranteed to make the non-mathematical reader give up in disgust and go to the pub instead; and, although I have on occasion taught Calculus to school age kids, I dislike the inverse trigonometrical functions and had to look up the answer in a book, then differentiate the result to see if I got back to √(r² − x²). More to the point, this hodgepodge seems to have nothing at all to do with circles or areas of anything at all, though, if you feed x = 0 into the expression and take the result away from what you get with x = r, you (eventually) end up with ¼πr², which, since this is a quadrant, makes the total area four times this, or πr².
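For readers who want to confirm that the hodgepodge really does collapse to ¼πr², here is the evaluation in Python (the antiderivative is the standard textbook one; the function name `F` is mine):

```python
from math import sqrt, asin, pi

def F(x, r):
    """Antiderivative of √(r² − x²): ½x√(r² − x²) + ½r²·arcsin(x/r)."""
    return 0.5 * x * sqrt(r * r - x * x) + 0.5 * r * r * asin(x / r)

r = 3.0
quadrant = F(r, r) - F(0.0, r)                    # area of one quadrant
assert abs(quadrant - pi * r * r / 4) < 1e-12     # the promised ¼πr²
assert abs(4 * quadrant - pi * r * r) < 1e-12     # four quadrants: πr²

# Sanity check: differentiating F numerically gives back √(r² − x²)
x, h = 1.2, 1e-6
assert abs((F(x + h, r) - F(x - h, r)) / (2 * h) - sqrt(r * r - x * x)) < 1e-8
```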
But how do I know this Calculus result is right? Only by accepting various assumptions which underpin Calculus, assumptions which were, in the days of ‘infinitesimals’, extremely dubious as Bishop Berkeley pointed out at the time and have only been made rigorous by disconnecting Calculus completely from physical reality. The Greeks (correctly) argued that curved lines and straight lines are different species and that areas bounded by curves can be approximated to any desired degree of precision but in general never the twain shall meet. The fact is that the circle cannot be squared and Calculus is being extremely helpful but disingenuous when it gives the area as exactly πr2. Like all irrationals, the ‘number’ π is not a real quantity but a numerical limit that real quantities can approach but never attain. There is no such thing as an object whose length is π, and you cannot perform any action a π number of times.
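Approximating to any desired degree of precision is, in fact, exactly what a modern numerical ‘exhaustion’ does, without ever pretending to attain the limit. A plain midpoint sum in Python (a rough sketch, not an optimized quadrature; the names are mine) recovers the area to several decimal places:

```python
from math import sqrt, pi

def quadrant_area(r, strips):
    """Midpoint Riemann sum for the area under y = √(r² − x²), 0 ≤ x ≤ r."""
    h = r / strips
    total = 0.0
    for i in range(strips):
        x = (i + 0.5) * h              # midpoint of the i-th strip
        total += sqrt(r * r - x * x) * h
    return total

r = 1.0
approx = 4 * quadrant_area(r, 100_000)   # four quadrants approximate πr²
assert abs(approx - pi * r * r) < 1e-5   # as close as patience allows...
assert approx != pi                      # ...but the limit is never attained
```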
The ancient Method of Exhaustion rests on assumptions about the physical world very different from those of Calculus ─ and much more sensible ones. Nevertheless, we needed Calculus in the bad old days because the labour of attaining a desired level of precision was horrendous (even after the invention of logarithms, which reduced it considerably). Calculus, in its trendy post-Weierstrass version now obligatory in colleges of higher education, is admittedly an ingenious piece of pure mathematics which, by fancy footwork, effectively sidesteps issues about ‘infinitesimals’. But it is a piece of logic rather than mathematics. And anyway the writing is on the wall already for Calculus. Now that we have supercomputers, the trend is not to bother looking for analytic solutions but simply to slog it out numerically. Numerical Analysis, the art of progressive approximation, is rapidly becoming more important than Functional Analysis. In any case, the vast majority of differential equations (equations dealing with change and motion) are insoluble analytically, so there is all too often no alternative to approximate methods. Practically speaking, we no longer need all this wretched philosophical lumber of ‘infinitesimals’ and infinite processes: the eminently sensible ancient Greek Method of Exhaustion, which successfully avoided actual infinity altogether, is, against all the odds, making a triumphant come-back. Traditional Calculus is a mighty cultural achievement and a necessary bridge to the present and what lies ahead, but one day it will probably be put in the same class as those impressive looking species that eventually went extinct because they were unable to cope with real-life conditions. SH

“A mental creation that evolved to study objects in the world”1 ─ this sounds to me like a pretty good description of mathematics ─ or at any rate of mathematics prior to the late 19th century. It highlights the two most important features of mathematics:

(1) that it is a (human) creation and, (2) that it is not a free creation ─ it cannot be if its aim is to study ‘objects in the world’.

(2) in effect means that mathematics is, or rather was, subject to serious constraints. What sort of constraints? If we examine the two earliest branches of mathematics, arithmetic and geometry, we see that the principal constraints are discreteness and distancing. Discreteness because arithmetic sees the world as made up of lots of little bits that can be counted ─ if everything was mixed up with everything else like the ingredients in a cake arithmetic would not give sensible results (and would not be needed). Secondly, experience tells us that whatever is happening here is not simultaneously happening over there, i.e. that objects and events are separated by something, thus geometry, metric spaces, causality, anything that depends on spatial separation.

Viewed thus, the two earliest and (arguably) still the two most important branches of mathematics depend either on separateness or separability: in the first case, that of arithmetic, the focus is on the objects themselves; in the second, the focus is more on what lies between and around the objects. Practically all pre-Renaissance mathematics was in effect the study of one or the other.

No hint of movement so far ─ as we know, the Greeks, though they initiated both statics and hydrostatics, baulked at the creation of dynamics, the study of objects in motion and, by implication, of the unseen forces that give rise to motion.

Before being represented by a numeral, all entities and collections of entities must be stripped of all characteristics such as shape, size, colour, substance, purpose, origin &c. in imagination, if not in fact. All internal group properties such as rank, proximity, natural affinity and so on must also be destroyed. To become numbers, all groups of entities must be both DEPERSONALIZED and DESTRUCTURED. In effect,

DISTINCTION BY NUMBER CAN ONLY BE ACHIEVED BY ABOLISHING DISTINCTION BY TYPE.

This explains the surprisingly rudimentary nature of number concepts amongst hunting/food-gathering peoples, and their stubborn resistance to the introduction of more advanced number systems and number concepts by missionaries. This resistance is not to be attributed to a lack of intelligence since the complex language structures, imaginative myths, pictorial sense and elaborate rituals of such peoples show that they were capable of first-rate cultural achievements. No, the reason for this resistance to number is deep and essentially well-founded since for such peoples it would have been a rash move to automatically prefer distinction by number to distinction by type.

For classification according to type is absolutely essential to the hunter/food-gatherer: he or she must make quick-fire radical distinctions between plants that are comestible or poisonous, animals that are harmless or dangerous, strangers that are hostile or friendly &c. &c. ─ and errors can easily lead to the death of the individual and even the extinction of the tribe. But counting objects is of little utility: what is the point of attributing a number sign to objects that are in front of you every day of your life? Do you know how many suits of clothes or dresses you own? How many rooms there are in your house or flat? Arithmetic only becomes significant when it is essential to know when to sow or reap, when trade is extensive and, above all, when a state official needs to assess a whole country’s resources. It was the Assyrians and the Babylonians who developed arithmetic, just as it was the founder of the short-lived Ch’in Dynasty, Ch’in Shih Huang Ti (he of the terracotta warriors), who standardized weights and measures throughout his vast Empire nearly two thousand years before Napoleon imposed the metric system on his citizens. SH 17/1/19

Nature is not deliberately mathematical or even numerate: if certain numbers keep coming up — and few do systematically — there is generally some physical or biological reason for them 1.
In this sense it is perfectly true that numbers, or at any rate number systems, are human creations, but they are firmly based on features of the natural world that really do exist objectively. One might say, to paraphrase Guy Debord, “Number has always existed but not always in its numerical form” 2.
So how do we develop a number system? What are the minimal requirements?
Two, and as far as I can see only two, abilities are necessary to develop a number system:

(1) The ability to distinguish between what is singular and what is plural, i.e. to recognize a ‘one’ when you see it;

(2) The ability to carry out a one-one correspondence (pairing off).

All the mathematicians who have developed abstract number systems, for example Zermelo and von Neumann, had these two perceptual/cognitive abilities — otherwise they would have been denied access to higher education and would not even have been able to read a maths book. Animals seem to have (1) but not (2), which is perhaps the reason why they have not developed symbolic number systems (though a more important reason is that they did not feel the need to). Computers are capable of (1) and (2), but only because they have been programmed by human beings.

What is number? One could describe ‘number’ as the ‘property’ that results when we have done away with all other distinctions between sets such as colour, weight, position, shape and so on. This is not much of a definition, but it does emphasize the curious fact that number is more of a negative than a positive property, since it results, as Piaget says, “from an ignoring of differential qualities”.

But, notwithstanding the difficulty of saying what exactly number is, practically speaking there is a perfectly simple and universally applicable test which can decide whether two sets of discrete objects are numerically equivalent or not, i.e. can be validly allocated the same number label. If I can pair each of them off with the same standard set of objects or marks, the two sets are numerically equivalent; if I can’t, they are not. Of course, today if I want to assess the ‘number’ of chairs in a room, say, I associate the collection with a number word, seven or four or six as the case may be, but underlying this is a pairing with a standard set. As a matter of fact I find that, though I use the number words one, two, three… when counting objects, I still find it necessary to use my fingers, either by pointing my finger at the object or pressing it against my side, one press, one object. And the umpire in a cricket match still uses stones or pebbles: one ball bowled, one stone shifted from the right hand to the left. It is not that the finger or stone pairing off is valid because of our ciphered numerals but the reverse: our written or spoken numerals ‘work’ because underlying them is this pairing off of items with those of a standard set.
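The umpire’s method deserves spelling out, because it is a complete tally system with no numerals anywhere inside it. A toy Python version of ‘one ball bowled, one stone shifted’ (the object names are of course invented):

```python
right_hand = ["stone"] * 6        # the standard set: six stones for a six-ball over
left_hand = []
deliveries = []

while right_hand:                 # bowl until the stones run out: the over is done
    deliveries.append("ball bowled")
    left_hand.append(right_hand.pop())

# The deliveries have been paired off, one for one, against the standard set;
# no number word was ever consulted while the 'counting' was going on
assert not right_hand
assert len(left_hand) == len(deliveries)
```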

Now, one could actually derive the Cantor definition of cardinal number — “that which results from abstracting from a set the order of appearance of the elements and their specific character” — from what happens when I apply my test. If I rearrange the objects I am supposed to be counting, does that make any difference to the ‘number’ representing the sum? No. Because if I could pair off the original collection with items from a standard set, such as so many pebbles or marks, I can do the same after rearrangement. Does the actual identity of the objects matter? Apparently not, since if I replace each original item by a completely different item, I can still pair off the resulting set with my standard set (or subset).

We thus arrive, either by reflection or simply by applying the test, at the two basic numerical principles, the Disordering Principle and the Principle of Replacement
Disordering Principle: The numerical status/cardinal number of a collection is not changed by rearrangement so long as no object is created or destroyed.

Principle of Replacement: The numerical status/cardinal number of a collection is not changed if each individual object is replaced by a different individual object.

Together these two principles make up a sort of Number Conservation Principle since whatever ‘cardinal number’ is, this ‘something’ persists throughout all the drastic changes the set undergoes just as, allegedly, a given amount of mass/energy persists throughout the interactions between molecules within a closed system.
These two principles may either be viewed as Definitions i.e. they tell you what we mean by cardinal number, or as Postulates since they are the generalisation of actual experiments (pairing off sets with a chosen standard set). They are not ‘logical truths’ and not strictly speaking axioms.

The Principle of Correspondence has a somewhat different status and is more like a true Axiom, i.e. something which we have to take for granted to get started at all but which is not directly culled from experience.

The Principle of Correspondence

Whatever is found to be numerically the case with respect to a particular set A, will also be numerically the case for any set B that can be put in one-one correspondence with it.

By ‘numerical’ features I mean such things as divisibility, which has nothing to do with colour, size and so forth. We certainly do assume the Principle of Correspondence all the time, since otherwise we would not gaily use the same rules of arithmetic when dealing with apples, baboons or stars: indeed, without it there would not be a proper science of arithmetic at all, merely ad hoc rules of thumb. But, though the Principle of Correspondence is justified by experience, I am not so sure that it originates there: it is such a basic and sweeping assertion that it is more appropriate to call it an Axiom than anything else. Note that physical science uses a similar principle which is today so familiar that we take it for granted, though it is far from ‘obvious’ (and possibly not entirely true), namely that “what is found to be physically the case for a physical body in a particular place and time is the case for a similar body at a completely different place and time”. Newton’s law of gravitation is not just true here on Earth but is assumed to be true everywhere in the universe — a fantastic generalization that many scientists at the time thought unwarranted and arbitrary.
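As a concrete check on the Principle, take divisibility: in the Python sketch below (my own construction, not a formal proof), twelve apples and twelve baboons, being in one-one correspondence, pass and fail exactly the same ‘deal into equal piles’ tests:

```python
def divides_evenly(collection, parts):
    """Can the collection be dealt out into `parts` piles of equal size?"""
    piles = [collection[i::parts] for i in range(parts)]
    return all(len(p) == len(piles[0]) for p in piles)

apples = ["apple"] * 12
baboons = ["baboon-" + str(i) for i in range(12)]   # paired off with the apples

for parts in range(2, 12):
    # Whatever is numerically the case for one set holds for the other
    assert divides_evenly(apples, parts) == divides_evenly(baboons, parts)

assert divides_evenly(apples, 3)        # 12 splits into 3 piles of four...
assert not divides_evenly(apples, 5)    # ...but not into 5 equal piles
```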

These principles do not by any means exhaust the assumptions we implicitly make when we use or apply a Number System: indeed, if we listed all of them we could probably fill a sizeable volume. For example, we continually assume that there is a physical reality ‘out there’ to number in the first place (which solipsists and some Buddhists deny), that there are such things as discrete objects (which philosophic monists, and in some of his writings even Einstein, seem to deny) and so on and so forth. But these ‘axioms’ are best left out of the picture: they underlie most of what we believe and are not specific to numbering and mathematics.

Notes

1 This is (perhaps) not true of the basic constants such as the gravitational constant or the fine structure constant: they seem to be ‘hard-wired’ into the universe, as it were, and there seems to be no special reason why they should have the values they actually do have, unless one accepts the Strong Anthropic Principle. In theory it should be possible to deduce the values of basic constants from a priori principles, but to date attempts to do this, such as Eddington’s derivation of the number N, the number of elementary particles in the universe, have not been very successful, to say the least. One could argue from ‘logical’ considerations that there must be a limiting value to the transmission of electro-magnetic signals, but there is no apparent reason why it should be 3 × 10⁸ m/sec.

2 The quotation I have in mind is, “L’histoire a toujours existé mais pas toujours sous sa forme historique” (‘History has always existed but not always in its historical form’) from La Société du Spectacle by Guy Debord. The phrase sounds wonderful but means very little.