This question does NOT concern the RIGOR, or lack thereof, of the early calculus. Rather the question is of its CONSISTENCY.

George Berkeley wrote in 1734 with reference to the early calculus that such a method is "a most inconsistent way of arguing, and such as would not be allowed in Divinity". This passage is quoted by William Dunham in 2004. Dunham concludes: "Bishop Berkeley had made his point. Although the results of the calculus seemed to be valid ... none of this mattered if the foundations were rotten". See page 72 of http://books.google.co.il/books?id=QnXSqvTiEjYC&source=gbs_navlinks_s

On the other hand, Peter Vickers in 2007 challenged "The ubiquitous assertion that the early calculus of Newton and Leibniz was an inconsistent theory" at http://philsci-archive.pitt.edu/3477/ (soon to appear in book form at Oxford University Press), and concluded that this only holds in a limited sense and "can only be imputed to a small minority of the relevant community".

Was the early calculus consistent as far as most practitioners were concerned, as Vickers contended, or was it a most inconsistent way of arguing, as Berkeley and Dunham maintained?

Note 1: Berkeley claimed that calculus was based on an inconsistency that can be expressed in modern notation as $(dx\not=0)\wedge(dx=0)$. Thus he was using the term "inconsistent" in much the same sense it is used in modern logic.

It's not clear to me that "early calculus" was formal enough to talk about whether or not it was consistent. There were things people did, so talking about the actions of people being consistent or inconsistent makes sense on a behavioral level, but by that standard they probably were fairly consistent.
–
Ryan BudneyMar 19 '13 at 19:20

2

@Ryan Budney: Authors like Boyer, Grabiner, as well as authors of calculus textbooks, routinely claim such "inconsistency" ($dx\not=0$, $dx=0$, Q.E.D.). It could be that the early calculus was not "formal enough", as you suggest, but what do we make of such repeated claims in the literature? To illustrate a concept from the early calculus that was clearly inconsistent, consider Nieuwentijt's idea of an infinitesimal of the form $\frac{1}{\infty}$ that was supposed to be nilpotent. This is inconsistent by any modern standard, unlike Leibnizian calculus, where the question is debatable as per the above.
–
katzMar 19 '13 at 19:30

29

At the root of this question is an implicit and very ahistorical assumption that the word "consistent" was used by Bishop Berkeley in the same sense that it is used by modern logicians.
–
Lee MosherMar 19 '13 at 19:32

2

The question is not clear, as the word "consistent" has multiple meanings. One must separately consider consistency of techniques and consistency of the arguments used to justify techniques. The practitioners of the early calculus viewed it as a set of legitimate methods for obtaining results that agreed with other methods and physical observation. They also treated certain quantities as both zero and non-zero in proofs, which is inconsistent with modern proof practice...
–
Ben BraunMar 19 '13 at 21:59

7

I find the question very interesting, and I am looking forward to reading answers posted by those with expertise in mathematical history.
–
Joel David HamkinsMar 19 '13 at 22:20

9 Answers
9

Coming back to Berkeley's criticism, there is a common denominator of all known workarounds, both the two mainstream ones (Weierstrass and NSA) and exotic ones like the SDG interpretation.

That is, one considers an extension - call it $R^+$ - of the true reals $R$ and a map
$R^+ \to R\cup \{\infty\}$ - call it the valuation map. For instance,

1) $R^+$ consists of all convergent infinite real sequences and the valuation map is the "taking the limit" map

2) $R^+$ is a nonstandard extension of $R$ and the valuation map is the "standard part" map ($\infty$ for infinitely large objects)

3) Nilpotent or any other applicable exotics.

It turns out that the valuation map cannot be a homomorphism; it always lacks something. For instance, the value of a non-0 infinitesimal is 0 and the value of its inverse is $\infty$, but $0\cdot \infty=1$ makes little sense in $R$.
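To make the failure concrete, here is a throwaway Python sketch of construction 1) above: elements of $R^+$ modeled as real sequences, with the valuation map "take the limit" approximated numerically. All names (`val`, `eps`, and so on) are mine, purely for illustration:

```python
# Sketch of construction 1): R^+ as real sequences, valuation = "take the
# limit" (approximated by evaluating at a large index).  Illustrative only.
import math

def val(seq, n=10**9):
    """Approximate valuation: the limit of seq, or math.inf if it blows up."""
    x = seq(n)
    return math.inf if abs(x) > 1e6 else round(x, 6)

eps     = lambda n: 1.0 / n              # a non-zero infinitesimal
inv_eps = lambda n: float(n)             # its inverse, infinitely large
prod    = lambda n: eps(n) * inv_eps(n)  # pointwise product: constantly 1

print(val(eps))      # 0.0 -- the infinitesimal has value 0
print(val(inv_eps))  # inf -- its inverse has value "infinity"
print(val(prod))     # 1.0 -- yet val(eps) * val(inv_eps) = 0 * inf is undefined
```

The last line is the whole point: the valuation of the product is 1, while the valuations of the factors are $0$ and $\infty$, so the map cannot respect multiplication.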

This is, I believe, the only sound way to view the old controversies around infinitesimals. That is, accept that a non-0 infinitesimal is not equal to the real number 0; it just has the value 0. Maybe a devoted scholar of Leibniz, Euler, etc. (although there is not much of an "etc." after Euler!) can find support for this point of view.

Obviously, a modern mathematician would ask for either a concrete mathematically defined model of both $R^+$ and the valuation map - and the two mainstream such models are listed above, with perhaps more yet to come under category 3 - or at least a formulation as a calculus of propositions, with rigorous rules of inference albeit without a fixed interpretation of objects.

Thanks, Vladimir. Indeed, Leibniz provided consistent rules of inference in terms of his Transcendental law of homogeneity ("discard the negligible term"), without of course providing any interpretation of the number system itself (which had to await Hewitt, Los, and Robinson).
–
katzApr 8 '13 at 12:49

1

There can be no such valuation map in Synthetic Differential Geometry, and neither is it the case that the smooth real line is an extension of the "true reals" (whatever those are supposed to be); at least I do not see how that could be, since the smooth real line is not even Cauchy complete.
–
Andrej BauerApr 8 '13 at 12:53

4

This is not the only sound way to view the controversy. Another sensible way is to go intuitionistic. The controversy is then resolved by realizing that (a) not all infinitesimals are zero and (b) given any infinitesimal, it is not the case that it is distinct from zero. There is no valuation map, or a sharp distinction between "standard" reals and infinitesimals, only weird (but sound!) intuitionistic sort of half-existence of infinitesimals.
–
Andrej BauerApr 8 '13 at 12:58

1

>>given any infinitesimal, it is not the case that it is distinct from zero. This is so funny! I mean, you claim in particular that any argument starting with "let e be a small positive infinitesimal number (hence non-0)" is wrong from the beginning? What then about the Euler factorization of sin, which starts from an infinitely large number i and then involves infinitesimals like 1/i - is it "not the case" that 1/i is definitely "distinct from zero"?
–
Vladimir KanoveiApr 8 '13 at 19:29

The question is not precise enough to get a definite answer, but not for the reason most people say in commentaries. The problem does not lie in the ambiguous meaning of "consistent" (which just means
"free of contradictions", which was as clear then as now), but in the meaning of "way of arguing".
What we do have is a corpus of results from the founders of calculus (say Pascal, Descartes, Fermat, Newton, Leibniz), and a corpus of arguments they used to justify them. The corpus of results is certainly a corpus of true results, so is consistent, and was certainly recognized as such even by Berkeley (to my knowledge, the first serious contradiction involving results of calculus
came 150 years after the founding period with Cauchy's theorem that a limit of continuous functions is continuous,
combined with counter-examples from Fourier's theory, so is completely out of our scope).

Now, is the corpus of arguments used by our fathers "consistent"? This question does not make real sense, because "arguments" are not results, and are not "true or false", either individually or in groups. They are, then and now, incomplete developments aimed at convincing one that some results are true. The most one can say is that, however shaky the arguments seem to us, they were used by these founders to prove only true results. In this very weak sense, their arguments were consistent.

Now, is the "way of arguing" of our fathers consistent? Again, the meaning of this question is problematic, because there is no unique way to deduce, from a finite set of examples, what the "way of arguing" of our founding fathers was. What is sure is that a naive reader of their arguments, trying to guess, "by induction" in the sense of natural sciences, what the way of arguing of these people was, and trying to apply this way of arguing to get new results, would easily come across contradictions (even not so naive readers, such as Cauchy,
eventually did so). Actually, it took almost 200 years for mathematicians to find a "consistent
way of arguing" in which the arguments of the founders can be reformulated without too much distortion: it is the $\epsilon,\delta$ approach of Weierstrass and others. It took almost one more century to construct a second consistent approach, which perhaps has the slight advantage over the classical one of reformulating the arguing of the founders of calculus with even less distortion. Yet priority carries great weight in science, and this is the most obvious reason why non-standard analysis has not supplanted the traditional one.

I want to finish by a side, wittgensteinian, remark: we are not in a qualitatively different
situation than our founding fathers were: there is no way to be sure that our current "way of arguing"
is consistent, because there is no way to be sure what our current "way of arguing" exactly is.
By this I am not thinking at all of the problem that since Gödel we doubt that ZF or any other system is consistent, but of the much more basic problem that even with a "certainly consistent set of axioms" (say the axioms of the theory of groups, to fix ideas), we are not really sure what our way of arguing (that is, the logico-formal rules which allow us to transform statements into other statements, from axioms to theorems) is. To be sure, we mathematicians now take great care to begin a treatise by explaining carefully those "formal rules" of reasoning. Yet these formal rules use notions that are not completely clear (such as the notion of "intuitive integer") and skills that we cannot be sure we possess (for example, the capacity to recognize, in a finite expression, all occurrences of a given free variable). What we do is see other people working with those rules, try to do the same by imitation, getting punished if we do it wrong, and after some time we no longer make mistakes -- so we deduce that we understand the rules as the others do.
But there is no way to be really sure of that.

I do not take Joel's answer as an invitation to wallow in a sea of despair. I take it as a realistic description of what our brains do when learning and doing mathematics. There is no sure way of knowing what goes on in our wetware, but we are very good at doing whatever it is that we do.
–
Lee MosherMar 20 '13 at 15:37

3

Even Cauchy's mistake was arguably no mistake; it depends on what Cauchy meant by convergence (of a sequence of functions). When presented with the counterexamples (which he knew well), Cauchy denied that they converged everywhere, because they failed to converge for certain variable quantities. In particular, $\sum_{k=1}^n \frac{\sin(kx)}{k}$ fails to converge (in $n$) when $x$ is the infinitesimal variable $1/n$. This is hard to interpret in either epsilontic analysis or nonstandard analysis, but it's not obviously inconsistent.
–
Toby BartelsMar 21 '13 at 14:30

2

I don't take very seriously the assertion that "absolute rigor is never attained". Within libraries of formalized mathematics as mechanically checked by proof assistants like Coq and Mizar, I would say that absolute rigor is effectively attained in all but a pretty academic sense, or at least we could say that improvements in rigor over what has thus been attained are unlikely. The great 20th century achievement of the formalization of the logical calculus is not to be underestimated.
–
Todd Trimble♦Mar 23 '13 at 14:58

2

Toby, quite interesting post on Cauchy's mistake (or non-mistake). I knew the analysis of this episode by Lakatos (whom I admire enormously, by the way) in Proofs and Refutations, but didn't know he changed his position a few years later on that subject. I will try to read his second text, as well as Cauchy's 1853 "fix of his proof".
–
JoëlMar 23 '13 at 20:52

I found a copy of the relevant passage from Berkeley's works at this web site. I have cut and pasted from that site, and I have reformatted the mathematics; apologies to the good Bishop for any alterations in meaning.

XIV. To make this Point plainer, I shall unfold the reasoning, and propose it in a fuller light to your View. It amounts therefore to this, or may in other Words be thus expressed. I suppose that the Quantity $x$ flows, and by flowing is increased, and its Increment I call $o$, so that by flowing it becomes $x + o$. And as $x$ increaseth, it follows that every Power of $x$ is likewise increased in a due Proportion. Therefore as $x$ becomes $x + o$, $x^n$ will become $(x + o)^n$: that is, according to the Method of infinite Series,
$$x^n + nox^{n-1} + \frac{n^2-n}{2} o^2 x^{n-2} + \text{ etc.}$$
And if from the two augmented Quantities we subduct the Root and the Power respectively, we shall have remaining the two Increments, to wit,
$$o \text{ and } nox^{n-1} + \frac{n^2-n}{2} o^2 x^{n-2} + \text{ etc.}$$
which Increments, being both divided by the common Divisor o, yield the Quotients
$$1 \text{ and } nx^{n-1} + \frac{n^2-n}{2} ox^{n-2} + \text{ etc.}$$
which are therefore Exponents of the Ratio of the Increments. Hitherto I have supposed that $x$ flows, that $x$ hath a real Increment, that $o$ is something. And I have proceeded all along on that Supposition, without which I should not have been able to have made so much as one single Step. From that Supposition it is that I get at the Increment of $x^n$, that I am able to compare it with the Increment of $x$, and that I find the Proportion between the two Increments. I now beg leave to make a new Supposition contrary to the first, i. e. I will suppose that there is no Increment of $x$, or that $o$ is nothing; which second Supposition destroys my first, and is inconsistent with it, and therefore with every thing that supposeth it. I do nevertheless beg leave to retain $nx^{n - 1}$, which is an Expression obtained in virtue of my first Supposition, which necessarily presupposeth such Supposition, and which could not be obtained without it: All which seems a most inconsistent way of arguing, and such as would not be allowed of in Divinity.

It looks to me that Berkeley's argument amounts to an argument raised by every discerning student in a nonrigorous first semester calculus course: "Is the increment zero? or not zero? How can it be both? That's inconsistent!" In which case I would invite the good Bishop to come to my office hours where I would introduce him to $\epsilon$, $\delta$ proofs.

I bet I could even convince the Bishop that Divinity would allow it: "Suppose the Devil gives you any $\epsilon > 0$. This $\epsilon$, although positive, might be very, very, very small, as small as the Devil likes...".
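For what it's worth, the $\epsilon$, $\delta$ resolution is easy to illustrate numerically; a minimal Python sketch (nothing here is from Berkeley or the thread, of course):

```python
# The quotient Berkeley objects to, ((x + o)^n - x^n) / o, approaches
# n * x^(n-1) as o shrinks -- and o is never set to zero along the way.
def quotient(x, n, o):
    return ((x + o)**n - x**n) / o

x, n = 2.0, 3                       # target derivative: 3 * 2^2 = 12
for o in (0.1, 0.01, 0.001):
    print(o, quotient(x, n, o))     # tends to 12 as o -> 0
```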

See my answer for a transcription of the argument into modern synthetic differential analysis.
–
Andrej BauerMar 20 '13 at 15:17

4

@Lee: if you get the Bishop in your office, see if Dr. Johnson is also free to join in the discussion...
–
Yemon ChoiMar 20 '13 at 18:58

@Lee Mosher: Every discerning student would ask such a question, and we generally have answers. But Leibniz arguably already had an answer to Berkeley's question. The answer is given by Leibniz's Transcendental Law of Homogeneity, which allows one to discard $dx$ without setting it equal to zero. What Hewitt, Los, and Robinson showed is that Leibniz can be formalized even without $\epsilon,\delta$. There is a debate going on about this, but the math community can form its own opinion rather than relying on historians, who often operate with outdated conceptual frameworks inadequate to the task.
–
katzApr 3 '13 at 13:14

Contrary to Andrej Bauer’s contention, seventeenth-century calculus looks very little like SDG. Unlike in SDG, the integrals were construed as infinite sums, the intermediate value theorem was assumed to hold for continuous curves and, more to the point, for the most part the infinitesimals that were employed were invertible rather than nilpotent. For a while, the Dutch mathematician, Bernard Nieuwentijt, in his debate with Leibniz, argued in favor of the use of nilpotent infinitesimals, but eventually came to believe that his attack on Leibniz was ill-founded and returned to the then standard use of invertible infinitesimals. Of course, I’m not suggesting that nilpotent infinitesimals were not used—they were from time to time—but only that their use was not the main view. After all, following Leibniz, most mathematicians wanted their infinitesimals to behave like real numbers.

Nilpotent infinitesimals along with invertible infinitesimals were employed by a number of differential geometers in the nineteenth century and entered mainstream mathematics around the turn of the twentieth century (in systems of dual numbers), when geometers such as Hjelmslev and Segre became interested in geometries in which two points need not determine a unique straight line, and Grothendieck (and others) later employed them in algebraic geometry.

I suspect that the misconception that seventeenth-century calculus looks like SDG can be traced in part to John Bell’s wonderful expository writings on SDG. Bell was taken to task for this by the historian-mathematician Detlef Laugwitz in his otherwise very positive review (for Mathematical Reviews MR1646123 (99h:00002)) of the first edition of Bell's A Primer of Infinitesimal Analysis (1998). Moreover, I am not aware of any of the many serious writings on the history of the calculus that supports the view suggested by (my friend) John.

Response to Mikhail Katz:

Mikhail: Fermat’s work was one I had in mind when I said nilpotent infinitesimals were used from time to time. However, his work, which was largely concerned with tangent constructions and lacked generality, predates the work of Newton and Leibniz, never caught on, and is not characteristic of the mainstream approaches to the calculus of the 17th century, which is what I said I was talking about. Moreover, Fermat’s work is notoriously unclear and, by my lights, the similarities with SDG are vague at best.

Many thanks, however, for the reference to Cifoletti’s work, which I will take a look at. I hasten to add, however, that the following passage from the Mathematical Reviews review of the work, which you yourself cite, does not inspire confidence.

“In the second part of the book, the author embarks on an investigation of the link between modern synthetic differential geometry, originally proposed by F. W. Lawvere in 1967 and afterwards largely developed by Lawvere and other mathematicians, and Fermat's mathematics.

In many situations, for the most part informal ones, Lawvere himself and other mathematicians working in this research field expressed their feelings that there had to be some kind of affinity between synthetic differential geometry and seventeenth-century mathematical practice. The author has tried to make explicit these general feelings, but this part of the book is mathematically weak and somewhat naive.

The best example is footnote 29, page 208, where the author claims to have established a direct connection between Fermat and synthetic differential geometry, on the basis of having been able to convince G. Rejes, during a talk she had with him about Fermat's work, to name a particular axiom of one possible formulation of the theory after Fermat.”

I haven't read it, but Bell later published The Continuous and the Infinitesimal in Mathematics and Philosophy where he discusses the history of infinitesimals, perhaps in an attempt to correct the unbalanced account in his Primer of Infinitesimal Analysis.
–
François G. Dorais♦Mar 20 '13 at 16:16

1

I do not believe John goes far toward correcting the misconception in his book "The Continuous and the Infinitesimal in Mathematics and Philosophy." Moreover, he continues to perpetuate the misconception in the Second Edition of his Primer, which came out three years after the just-named book. For my review of John's "The Continuous ...," see The Bulletin of Symbolic Logic 13 (2007), no. 3, pp. 361-363.-Philip Ehrlich
–
Philip EhrlichMar 20 '13 at 17:12

The 1990 book "Fermat's method: its status and diffusion" by Cifoletti (reviewed here: ams.org/mathscinet-getitem?mr=1160157) arguably belongs to "serious writings on the history of the calculus that supports the view" that the Synthetic Differential Geometry (SDG) of Lawvere and others is a plausible formalisation of the 17th century work of Fermat.
–
katzApr 2 '13 at 8:47

I do not know whether the early calculus was consistent, but it surely can be made as consistent as modern mathematics, with practically no modifications of the basic setup. This goes under the name Synthetic differential geometry (SDG). Like Robinson's nonstandard analysis it is a calculus with infinitesimals. SDG should be closer to the 17th century ways of doing things because it works with nilpotent infinitesimals whereas nonstandard analysis does not. I believe the 17th century calculus used nilpotent infinitesimals. Can someone confirm this?

[Edit: many thanks to Lee Mosher for transcribing a piece of Berkeley's text. Here is the same piece of text, as it would be written in SDG in the 21st century.]

We would like to compute the derivative of $f(x) = x^n$ where $n$ is a positive integer.
Let $x \in R$ and let $o$ be any nilpotent infinitesimal of degree 2. Then by the Binomial theorem
$$(x + o)^n = x^n + n o x^{n-1} + \frac{n^2 - n}{2} o^2 x^{n-2} + \text{etc}.$$
Because $o$ is nilpotent of degree 2, we have $o^2 = 0$ and so all terms but the first two equal zero. Thus we get
$$(x + o)^n = x^n + n o x^{n-1}$$
hence
$$(x + o)^n - x^n = n o x^{n-1}$$
or
$$f(x + o) - f(x) = n x^{n-1} o$$
Because $o$ here is an arbitrary infinitesimal (i.e., the equation holds for all $o$ whose square is zero), we may use the Axiom of Microaffinity to conclude that
$$f'(x) o = n x^{n-1} o$$
Now we use the Cancelation Principle to cancel $o$ on both sides, which yields $f'(x) = n x^{n-1}$.
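The nilpotent computation above can be mimicked in ordinary classical mathematics with dual numbers, i.e. pairs $a + b\,o$ with $o^2 = 0$; this little Python class is only an illustrative sketch of that classical trick (the engine behind forward-mode automatic differentiation), not SDG itself:

```python
# Dual numbers: a + b*eps with eps^2 = 0.  Multiplying out (x + eps)^n,
# the eps-coefficient is exactly the derivative n * x^(n-1), and no term
# is ever "discarded" -- eps^2 = 0 kills the higher terms by definition.
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b  # value part + infinitesimal coefficient

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    def __pow__(self, n):
        result = Dual(1.0)
        for _ in range(n):
            result = result * self
        return result

x = Dual(2.0, 1.0)   # x = 2 plus the infinitesimal increment eps
y = x ** 5           # (2 + eps)^5 = 32 + 80*eps
print(y.a, y.b)      # 32.0 80.0 -- f(2) and f'(2) for f(x) = x^5
```

The coefficient of $o$ comes out as the derivative automatically, which is precisely the content of the Microaffinity computation above.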

I must say Berkeley's writing was a great deal more picturesque. The Axiom of Microaffinity and the Cancelation Principle are an axiom and a theorem of SDG, respectively. They circumvent the problem that Berkeley was complaining about, namely that first we pretend that $o$ is not zero (so that we can cancel it on both sides of the equation), but then we pretend it is zero so that all those higher terms disappear. Instead, we can do the following: assume that $o^2 = 0$ (which does not imply that $o = 0$, because we are not assuming classical logic) so that the higher terms disappear, but then use a sort of weak cancelation property of infinitesimals which allows us to cancel them under certain conditions, even though they are not invertible.

Axiom of Microaffinity: For every $f : R \to R$ and $x \in R$ there exists a unique number $f'(x)$, called the derivative of $f$ at $x$, such that for all infinitesimals $o$ we have $f(x + o) - f(x) = f'(x) o$.

On the other hand, SDG proves that there are no infinitesimals which are distinct from zero: $\lnot \exists o \in \Delta,\ o \neq 0$.

But it is precisely what we need to explain all the confusion about infinitesimals. Remember it this way: potentially there are some non-zero ones (we cannot exclude their existence) but they are all potentially zero (they are so small we cannot distinguish them from zero). Just don't ask yourself whether an infinitesimal is zero and all will be fine.

Here $R$ is the "smooth real line", which is an ordered field. Of course, it does not satisfy the Archimedean axiom, as that would force all infinitesimals to be zero. So it is a different kind of animal than the usual $\mathbb{R}$.

John Bell explained all this in his excellent booklet on Synthetic Differential Analysis.

But if they write $(x + dx)^2 = x^2 + 2 x\, dx$ then it follows immediately by basic algebra that $dx^2 = 0$. Why oh why didn't they just follow their noses?
–
Andrej BauerMar 20 '13 at 15:27

Newton, for one, did exactly as I said. "Thus, for example, in the case of the fluent $z = x^n$, Newton first forms $z + \dot{z}o = (x + \dot{x}o)^n$, expands the right-hand side using the binomial theorem, subtracts $z = x^n$, divides through by $o$, neglects all terms still containing $o$, and so obtains $\dot{z} = nx^{n-1}\dot{x}$." (From section 4 of plato.stanford.edu/entries/continuity )
–
François G. Dorais♦Mar 20 '13 at 15:38

However, Leibniz had a different take: "He also assumed that the $n$th power $(dx)^n$ of a first-order differential was of the same order of magnitude as an $n$th-order differential $d^nx$, in the sense that the quotient $d^nx/(dx)^n$ is a finite quantity." (Same source.)
–
François G. Dorais♦Mar 20 '13 at 15:44

1

It might have been more interesting if L'Hôpital had calculated that $dx\,dy + dy\,dx = 0$, then concluded that $dx\,dy = -dy\,dx$ rather than that $dx\,dy = 0$.
–
Toby BartelsMar 20 '13 at 21:44

As noted by Ryan Budney, Lee Mosher, and Ben Braun, the word "consistent" cannot be
used here in the sense of modern mathematical logic, so one cannot investigate this question
rigorously. But one can apply the word "consistent" in its everyday (fuzzy) sense,
meaning "free of contradictions". Then the answer is to some extent a question of opinion.

Myself, I side with the opinion of Peter Vickers: early calculus was consistent. It was no
worse than arguments in most other hard sciences (physics, chemistry), though perhaps not
on the level of rigor of mathematics.

On the other hand, what Berkeley says, "a most inconsistent way of arguing, and such as would not be allowed in Divinity", sounds ridiculous to me. "Divinity" is a pseudo-science which deals with
things that do not exist; thus what is or is not "allowed" in Divinity
is completely a matter of opinion. Divinity cannot be compared with the hard sciences,
while the mathematics of Newton (or Leibniz or Euler) can.

"Divinity" for Berkeley in this sentence has the sense of whatever makes each person's sensations consistent with everyone else's. Call it "matter" if you prefer.
–
JoëlMar 19 '13 at 23:20

6

Divinity may be a study of things that do not exist (although obviously Berkeley himself believed otherwise), but that goes into the premises, which have always been very questionable. But the mode of reasoning was by scholastic logic, which we would regard even today as perfectly rigorous.
–
Toby BartelsMar 20 '13 at 5:01

5

If we identify "Divinity" with theology, then every theologian I know grants that it is not a science in the modern sense of the empirical sciences. But then, neither is mathematics. They will however claim, or some will, that it is a science in the Aristotelian sense. As far as dealing "with the things that do not exist", your opinion is duly noted, but unless you are a Platonist, mathematicians inevitably deal with objects with no extra-mental existence. And since mathematical objects are abstract, not localized in space-time, empirical considerations are of little avail to the mathematician.
–
G. RodriguesMar 20 '13 at 15:22

G. Rodrigues: 1. Yes, mathematics is not a science like physics or geology, but it is on the "opposite side" from theology, in the sense that it is more consistent, and a truth proved mathematically has a somehow higher status than a scientific truth. (In what sense a statement of theology/divinity can be considered true, I don't know, nor what the criterion of truth in theology is :-) 2. Yes, I am a strong Platonist. And frankly speaking it is hard for me to imagine how a working mathematician can be anything else, though I know that some mathematicians profess other views.
–
Alexandre EremenkoMar 21 '13 at 0:31

1

I would question the claim that "the word 'consistent' cannot be used here in the sense of modern mathematical logic". Berkeley claimed that the calculus was based on the inconsistency $(dx\not=0)\wedge(dx=0)$. Arguably this is the same meaning as in modern logic.
–
katzMar 23 '13 at 21:14

I think that the question is sufficiently precise if we think of a realistic meaning of the word "inconsistent". Even nowadays, for non-logicians the adjective "inconsistent" doesn't really mean "containing a contradiction" (that is only the obvious meaning given by modern Mathematical Logic), but rather "not acceptable to a large or important part of the scientific community".

Even nowadays, some of our work in some parts of modern Mathematics is not accepted as sufficiently rigorous by other parts. Such work is hence perceived only as an insufficiently precise "way of arguing". Therefore, these "foreign argumentations" are perceived as potentially inconsistent, and need a different reformulation to be accepted. I know of relationships of this type between some parts of Geometry and Analysis, to mention only one example. It is the same problem occurring in the relationship between (some parts of) Physics and Mathematics, because these two disciplines are really completely different "games": in Physics the most important achievement is the existence of a dialectic between formulas and a part of nature, even if the related Mathematics lacks formal clarity and is hence not accepted by several mathematicians.

Analogously, early calculus was consistent insofar as the community accepted these "ways of arguing" and discovered statements which could be verified as true by a dialogue with other parts of knowledge: Physics and geometrical intuition first of all.

Since in the early calculus formal intuition (in the modern sense of manipulation of symbols, without a reference to intuition) was surely weak, the dialectic between proofs and intuition was surely stronger (I mean statistically, in the distribution of 17th century mathematicians). In my opinion, this is the reason for the discovery of true statements, even if the related proofs are perceived as "weak" nowadays. Once the great triumvirate of Cantor, Dedekind, and Weierstrass decided that it was time to take a step further, the notion of "inconsistent" changed for this important part of the community and hence, sooner or later, for all the others.

Also from the point of view of rules of inference, the consistency of early calculus has to be meant in the sense of dialectic between different parts of knowledge and acceptance by the related scientific community.

Therefore, in this sense, in my opinion early calculus is as consistent as our (and the future) calculus.

I agree with Joel that "we are not in a qualitatively different situation": probably in the near future all proofs will be computer assisted, in the sense that all the missing steps will be checked by a computer (whose software will be verified, once again, by a large part of the community) and we will only need to provide the main steps. Necessarily, articles will change in nature and, I hope, they will be more focused on those ideas and intuitions thanks to which we were able to create the results we are presenting. Therefore, young students in the future will probably read our papers in disgust, saying: "how were they able to understand how all these results were created? These papers seem like phone books: def, lem, thm, cor, def, lem, thm, cor... without any explanation of discovery rules and several missing formal steps!"

Finally, I think that only formally, not conceptually, does this early calculus look similar to NSA or SDG. In my opinion, one of the main reasons for the lack of diffusion of NSA is that its techniques are perceived as "voodoo" by the many modern mathematicians who base their work on the dialogue between formal mathematics and informal intuition. Too frequently the lack of intuition is too strong in both theories. For example, for a person like Cauchy, what is the intuitive meaning of the standard part of the sine of an infinite number (NSA)? For people like Bernoulli, what is the intuitive meaning of properties like $x\le0$ and $x\ge0$ for every infinitesimal, and $\neg\neg\exists h$ such that $h$ is infinitesimal (but not necessarily: there exists an infinitesimal; SDG)? Moreover, as soon as discontinuous functions appeared in the calculus, the natural reaction of almost every working mathematician (of the 17th century and of today) looking at the microaffinity axiom is not to change Logic by switching to the intuitionistic one, but to change this axiom by inserting a restriction on the quantifier "for every $f:R\longrightarrow R$".

The apparently inconsistent argumentation of setting $h\ne0$ and finally $h=0$ can be faithfully formalized using classical calculus rather than these theories of infinitesimals. We can say that $f:R\longrightarrow R$ (here $R$ is the usual Archimedean real field) is differentiable at $x$ if there exists a function $r:R\times R\longrightarrow R$ such that $f(x+h)=f(x)+h\cdot r(x,h)$ and such that $r$ is continuous at $h=0$. It is easy to prove that this function $r$ is unique: for $h\ne0$ it must equal the difference quotient $\frac{f(x+h)-f(x)}{h}$, and continuity then fixes its value at $h=0$. Therefore, we can assume $h\ne0$, freely calculate to discover the unique form of the function $r(x,h)$ for $h\ne0$, and, in the final formula, set $h=0$, because $r$ is clearly continuous for all the functions considered in the early calculus. This is called the Fermat-Reyes method, and it works also for generalized functions like Schwartz distributions (and hence for an isomorphic copy of the space of all continuous functions). Moreover, in my opinion, both Cauchy and Bernoulli would have perfectly understood this method and the related intuition. On the contrary, they would not have been able to understand the intuitive inconsistencies they could easily find in both NSA and SDG.
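The Fermat-Reyes method described above can be sketched in a few lines of Python (an illustrative sketch, not from the original answer; the function name `r` just follows the notation above):

```python
# Fermat-Reyes sketch for f(x) = x**2: find the unique r with
# f(x + h) = f(x) + h * r(x, h).  Assuming h != 0 we may cancel h:
# ((x + h)**2 - x**2) / h = (2*x*h + h**2) / h = 2*x + h.
def r(x, h):
    return 2 * x + h

# The defining identity holds for h != 0:
x, h = 3.0, 0.25
assert (x + h) ** 2 == x ** 2 + h * r(x, h)

# r is a polynomial, hence continuous at h = 0; only now do we set
# h = 0, recovering the derivative f'(3) = 6 with no division by zero.
assert r(3.0, 0.0) == 6.0
```

Note that $h=0$ is substituted only into the already simplified expression for $r$, which is exactly the step Berkeley objected to and which the continuity requirement legitimizes.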

I don't agree with the opening paragraph of this answer. "Consistent" is a perfectly clear English word that means, in this context, "not self-contradictory". If a person is inconsistent, they are not saying or doing the same things over time.
–
arsmathApr 2 '13 at 21:04

I would agree with Alexandre Eremenko's answer. The early calculus in fact was not inconsistent, as elaborated below.

Joël's answer is based on a premise that "the question is not precise enough to get a definite answer", and "does not make real sense, because 'arguments' are not results". This premise is historically incorrect. In fact, in the historical literature the claim of inconsistency of the early calculus is very specific and precise. It is routinely based on Berkeley's analysis of the typical calculations such as that of the derivative of a power, or the derivative of the product of two functions. The alleged inconsistency is presented as follows. Berkeley claims that (1) $dx$ is nonzero at the start of the calculation; (2) $dx$ is assumed to be zero at its conclusion; (3) in any consistent reasoning, $dx$ cannot be simultaneously zero and nonzero; (4) therefore the procedures of the calculus were inconsistent, Q.E.D. In modern notation, this amounts to a claimed inconsistency of the type $(dx\not=0)\wedge(dx=0)$.

Berkeley, however, did not read Leibniz carefully enough. Leibniz explicitly and repeatedly clarifies that he is working with a generalized notion of equality, where expressions equal up to a negligible term are also held to be equal. In modern terminology, this means that Leibniz is working with a binary relation which is not equality on the nose, but rather approximate equality in a suitable sense. It is in this sense that Leibniz writes formulas like $2x+dx=2x$ (note that he did not use our "=" symbol). Leibniz might not have been "rigorous" by modern standards, but he was not inconsistent, either. In fact, Leibniz's procedures were more soundly based than Berkeley's criticisms thereof. Philosopher David Sherry and I presented our analysis last year in the Notices of the AMS at http://www.ams.org/notices/201211/

Felix Klein wrote in 1908 that there were in fact not one but two separate tracks for the development of analysis: (A) the Weierstrassian one, in the context of an Archimedean continuum; and (B) the track exploiting indivisibles and/or infinitesimals. The B-track was eventually popularized by Abraham Robinson.

Everybody is familiar with the great accomplishment of Weierstrass in developing rigorous foundations for analysis, which is beyond dispute. However, historian Carl Boyer (and many others), in describing Cantor, Dedekind, and Weierstrass as "the great triumvirate", adds an anti-infinitesimal spin to their accomplishment. Namely, the traditional historical literature seeks to couple their "rigorous" accomplishment to the elimination of "inconsistent" infinitesimals, as if pursuing the A-track depended on the elimination of the B-track (Dunham's "rotten foundations"). It is the coupling of Weierstrass's accomplishment to an ill-informed critique of infinitesimals (both classical and modern) that constitutes the historical misconception pointed out by Vickers and others.

I have mixed feelings about this entire thread. On the one hand, I am now aware of various papers written by katz et al. I have previously read a lot of Dunham, Grabiner, etc, and it is good to see a counterpoint to their position. However, I feel that the question was posed by katz merely as an excuse to give the above answer (with a personal citation). katz's previously closed post supports this. I find this frustrating, and it dampens my interest in katz's papers. If this is the case, I hope katz will take a different approach in the future. If not, I'll be glad to hear otherwise.
–
Ben BraunMar 22 '13 at 17:02

@Ben Braun: As you point out (with sources), the received historical scholarship views the early calculus as inconsistent. There is an opposing minority view, including the article by Vickers. I sought to formulate my question in a balanced way, as I don't happen to believe that I have a monopoly on historical truth. The question led to a fruitful discussion, as evidenced by the 7 answers given. The issue of (in)consistency of the early calculus is merely the tip of the iceberg; many other issues were raised at...
–
katzMar 23 '13 at 20:11

The completeness of the real numbers implies that there are no infinitesimals. If $\epsilon>0$ is infinitesimal, then $n\epsilon<1$ for all $n\in \mathbb N$, so the sequence $(n\epsilon)$ is bounded and increasing. Yet it can have no least upper bound: if $s$ were one, then $n\epsilon>s-\epsilon$ for some $n$, whence $(n+1)\epsilon>s$. This contradicts completeness.
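The argument can be written out as a short formal derivation (my write-up of the standard reconstruction, not part of the original comment):

```latex
\begin{proof}
Suppose $\epsilon > 0$ satisfies $n\epsilon < 1$ for all $n \in \mathbb{N}$.
The set $S = \{\, n\epsilon : n \in \mathbb{N} \,\}$ is nonempty and bounded
above by $1$, so by completeness $s = \sup S$ exists. Since
$s - \epsilon < s$, the number $s - \epsilon$ is not an upper bound of $S$,
so $n\epsilon > s - \epsilon$ for some $n$. Then $(n+1)\epsilon > s$,
contradicting that $s$ is an upper bound of $S$. Hence no such $\epsilon$
exists in $\mathbb{R}$.
\end{proof}
```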

In the form of Archimedes' axiom, completeness has been a part of mathematics since ancient times. Archimedes himself used it to solve some problems of calculus. I always thought that Berkeley spotted this inconsistency and rightfully complained about it.

From what I read, Leibniz was well aware that if infinitesimals were real then the Archimedean property would fail. It appears that he resolved this by thinking of infinitesimals as variable quantities rather than constant quantities. His followers had a variety of different views. For example, L'Hôpital maintained that two quantities that differ by an infinitesimal amount are indistinguishable. In that case, $n\epsilon$ is not increasing and therefore does not contradict the completeness of the real numbers. Johann Bernoulli, on the other hand, believed that infinitesimals are very real.
–
François G. Dorais♦Mar 20 '13 at 16:01

@Wouter Stekelenburg: The notion that what "Berkeley had in mind" was our "complete Archimedean continuum" is a historical misconception. Just the opposite: Berkeley fiercely opposed the idea of an indefinitely divisible continuum that we take for granted today. In line with his empiricist philosophy, Berkeley postulated an empirical "minimum" M below which nothing meaningful can exist: sort of "if you can't see it, it can't exist". His opposition to a divisible continuum and his opposition to infinitesimals were made of the same cloth.
–
katzMar 21 '13 at 13:10

The main person pushing this view of the continuum today seems to be Doron Zeilberger. It is a sort of ultrafinitism.
–
Toby BartelsMar 23 '13 at 5:28