Why Do We Study Calculus?
or, a brief look at some of the history of mathematics

The question I am asked most often is, "why do we study this?"
(or its variant, "will this be on the exam?"). Indeed, it's not
immediately obvious how some of the stuff we're studying
will be of any use to the students. Though some of them
will eventually use calculus in their
work in physics, chemistry, or economics, almost none
of those people will ever need to prove anything
about calculus. They're willing to
trust the pure mathematicians whose
job it is to certify the reliability of the theorems.
Why, then, do we study epsilons and deltas, and
all the other abstract machinery of proofs?

Well, calculus is not just a vocational training course. In part,
students should study calculus for the same reasons that they study
Darwin, Marx, Voltaire, or Dostoyevsky: These ideas are a basic part
of our culture; these ideas have shaped how we perceive the world and
how we perceive our place in the world. To understand how that is true
of calculus, we must put calculus into a historical perspective; we
must contrast the world before calculus with the world after calculus.
(Probably we should put more history into our calculus courses.
Indeed, there
is a growing movement among mathematics teachers to do precisely that.)

The earliest mathematics was perhaps the arithmetic of commerce:
If 1 cow is worth 3 goats, how much do 4 cows cost? Geometry
grew from the surveying of real estate. And so on; math was
useful and it grew.

The ancient Greeks did a great
deal of clever thinking, but very few experiments; this led to
some errors. For instance, Aristotle observed that a rock falls
faster than a feather, and concluded that heavier objects fall
faster than lighter objects. Aristotle's views persisted for
centuries, until the discovery of air resistance.

The most dramatic part of the story of calculus
comes with astronomy. People studied
and tried to predict things that were out of human reach and
apparently beyond human control. "Fear not this dark and cold,"
they would say;
"warm times will come again. The seasons are a cycle. The time
from the beginning of one planting season to the beginning of
the next planting season is almost 13 cycles of the moon --
almost 13 cycles of the blood of fertility." The gods who lived
in the heavens were cruel and arbitrary -- too much rain or too
little rain could mean famine.

The earth was the center of the universe. Each day, the sun rose
in the east and set in the west. Each night, the constellations
of stars rose in the east and set in the west. The stars were
fixed in position, relative to each other, except for a handful
of "wanderers," or "planets". The motions of these
planets were extremely erratic and complicated.
Astrologers kept careful records of the motions of
the planets, so as to predict their future motions and
(hopefully) their effects on humans.

In 1543 Copernicus published his theory that the
motions of the planets could be explained more simply by
assuming that the planets move around the sun, rather than
around the earth -- and that the earth moves around the
sun too; it is just another planet.
This makes the planets' orbits approximately
circular. The church did not like this idea, which made earth
less important and detracted from the idea of humans as
God's central creation.

During the years 1580-1597, Brahe made many accurate observations of
the planets. Kepler, who became Brahe's assistant in 1600, inherited
those observations, and based on them he refined Copernicus's ideas:
Kepler showed that the movements of the planets are described more
accurately by ellipses, rather than circles. In 1609 and 1619 Kepler
published three "laws" that described, very simply and accurately,
many aspects of planetary motion:

the orbits are ellipses, with the sun at one focus

the velocity of a planet varies in such a way that the
area swept out by the line between planet and sun is increasing
at a constant rate

the square of the orbital period of a planet is proportional
to the cube of the planet's average distance from the sun.
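As a quick check of the third law (my own illustration, using modern
values): if time is measured in Earth years and distance in
astronomical units, the constant of proportionality becomes 1. For
Mars, the period is about 1.88 years and the average distance about
1.524 AU, and indeed

    T^2 = (1.88)^2 \approx 3.53, \qquad a^3 = (1.524)^3 \approx 3.54.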

The few people who understood geometry could see that Kepler
had uncovered some very basic truths. This bore out an earlier
statement of Plato: "God eternally geometrizes."

In 1609 Galileo took a "spyglass" -- a popular toy of the time -- and
used it as a telescope to observe the heavens. He discovered many
celestial bodies that could not be seen with the naked eye. The moons
of Jupiter clearly went around Jupiter; this gave very clear and simple
evidence supporting Copernicus's idea
that not everything goes around the earth. The church
punished Galileo, but his ideas, once released to the world,
could not be halted.

Galileo also began experiments to measure the effects of
gravity; his ideas on this subject would later influence
astronomy too. He realized that Aristotle was wrong --
that heavier objects do not fall faster than light ones.
He established this by making
careful measurements of the
times that it took balls of
different sizes to roll down ramps. There is a story that
Galileo dropped objects of different sizes off the Leaning Tower
of Pisa, but it is not clear that this really happened. However,
we can easily run a "thought-experiment" to see what would happen
in such a drop. If we describe things in the right way, we can
figure out the results:

Drop 3 identical 10-pound weights off the tower; all three will
hit the ground simultaneously. Now try it again, but first
connect two of the three weights with a short piece of
thread; this has no effect, and the three weights still hit the
ground simultaneously. Now try it again, but instead of thread,
use superglue; the three weights will still hit the ground
simultaneously. But once the superglue has dried, we no longer
have three 10-pound weights; rather, we have a 10-pound weight
and a 20-pound weight -- and they hit the ground simultaneously.
So a heavy object cannot fall any faster than a light one.

Some of the most rudimentary ideas of calculus had been around for
centuries, but it took Newton and Leibniz to put the ideas together.
Independently of each other, around the same time, those two men
discovered the Fundamental Theorem of Calculus, which states that
integrals (areas) are the same thing as antiderivatives. Though Newton
and Leibniz generally share credit for "inventing" calculus, Newton
went much further in its applications. A derivative is a rate of
change, and everything in the world changes as time passes, so
derivatives can be very useful.
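In modern notation (which came later; Newton and Leibniz used
different symbols), the theorem says that if F is an antiderivative
of f, that is, F' = f, then

    \int_a^b f(x)\,dx = F(b) - F(a),

so an area under a graph can be computed from any antiderivative.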
"three laws of motion," now known as "Newtonian mechanics"; these laws
became the basis of physics.

If no forces (not even gravity or friction)
are acting on an object, it will continue to move with constant
velocity -- i.e., constant speed and direction. (In particular, if
it is sitting still, it will remain so.)

The force acting on an object is equal to its mass times
its acceleration.

The forces that two objects exert on each other must be
equal in magnitude and opposite in direction.

To explain planetary motion, Newton's basic laws
must be combined with his law of gravitation:

the gravitational attraction between two
bodies is directly proportional to the product
of the masses of the two bodies and inversely
proportional to the square of the distance between them.
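In modern symbols, with m for mass, a for acceleration, and G for the
gravitational constant (a constant first measured long after Newton,
by Cavendish), the second law and the law of gravitation read:

    F = ma, \qquad F = \frac{G\, m_1 m_2}{r^2}.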

Newton's laws
were simpler and more intuitive than Kepler's, but they yielded Kepler's
laws as corollaries, i.e., as logical consequences.
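Here is a sketch of one such derivation, for the simplified case of a
circular orbit of radius r around a sun of mass M (the true elliptical
case requires more work). Equating the gravitational force with the
centripetal force needed to hold the planet in its circle, and writing
the speed in terms of the period T:

    \frac{GMm}{r^2} = \frac{m v^2}{r}, \qquad v = \frac{2\pi r}{T}
    \quad\Longrightarrow\quad T^2 = \frac{4\pi^2}{GM}\, r^3.

The square of the period is proportional to the cube of the distance --
Kepler's third law.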

Newton's universe is sometimes described
as a "clockwork universe," predictable and perhaps even
deterministic. We can predict how billiard balls will move
after a collision. In principle we can predict everything else
in the same fashion; a planet acts a little like a billiard
ball.

(Our everyday experiences are less predictable, because
they involve trillions of trillions of tiny little billiard balls
that we call "atoms". But all the atoms in a planet
stay near each other due to gravity, and combine
to act much like one big billiard ball; thus the planets
are more predictable.)

Suddenly the complicated movements of the heavens were revealed
as consequences of very simple mathematical principles. This gave
humans new confidence in their ability to understand -- and
ultimately, to control -- the world around them. No longer were
they mere subjects of incomprehensible forces. The works of
Kepler and Newton changed not just astronomy, but the way that
people viewed their relation to the universe. A new age
began, commonly known as the "Age of Enlightenment"; philosophers
such as Voltaire and Rousseau wrote about the power of reason
and the dignity of humans. Surely this new viewpoint contributed to:

portable accurate timepieces, developed over the next
couple of centuries, increasing the feasibility of overseas
navigation and hence overseas commerce

the steam engine, developed over the next century,
making possible
the industrial revolution

the overthrow of "divine-right" monarchies, in America (1776)
and France (1789).

Perhaps Newton's greatest discovery, however, was
this fact about knowledge in general, which is mentioned less often:
a partial explanation can be useful and meaningful.
Newton's laws of motion did not fully explain gravity.
Newton described how much gravity there is, with mathematical
precision, but he did not explain what causes gravity. Are
there some sort of "invisible wires" connecting each two objects
in the universe and pulling them toward each other? Apparently
not. How gravity works is understood a little better nowadays,
but Newton had no understanding of it whatsoever. So
when Newton formulated his law of gravity, he was also implicitly
formulating a new principle of epistemology (i.e., of how we know
things): we do not need to have a complete explanation of
something, in order to have useful (predictive) information
about it. That principle revolutionized science and technology.

That principle can be seen in the calculus itself. Newton and
Leibniz knew how to compute correctly the derivatives of most
common functions, but they did not have a precise definition of
"derivative"; they could not actually prove the theorems that
they were using. Their descriptions were intuitions rather than
explanations: they described a derivative as a quotient of two
infinitesimals (i.e., infinitely small but nonzero numbers). This
description didn't really make much sense to mathematicians of
that time; but it was clear that the computational methods of
Newton and Leibniz were getting the right answers, regardless of
their explanations. Over the next couple of hundred years, other
mathematicians -- particularly Cauchy and Weierstrass --
provided better explanations (epsilons and deltas) for those
same computational methods.

It may be interesting to note that, in 1960, logician Abraham Robinson
finally found a way to make sense of infinitesimals. This led to a new
branch of mathematics, called nonstandard analysis. Its
devotees claim that it gives better intuition for calculus,
differential equations, and related subjects; it yields the same kinds
of insights that Newton and Leibniz originally had in mind. Ultimately,
the biggest difference between the infinitesimal approach and
the epsilon-delta approach is in what kind of language you use
to hide the quantifiers:

The numbers epsilon and delta are "ordinary-sized", in the sense that they are not infinitely small. They are moderately small, e.g., numbers like one billionth. We look at what happens when we vary these numbers and make them smaller. In effect, these numbers are changing, so there is motion or action in our description. We can make these numbers smaller than any ordinary positive number that has been chosen in advance.

The approach of Newton, Leibniz, and Robinson involves numbers that do not need to change, because the numbers are infinitesimals -- i.e., they are already smaller than any ordinary positive number. But one of the modern ways to represent an infinitesimal is with a sequence of ordinary numbers that keep getting smaller and smaller as we go farther out in the sequence.
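To see the quantifiers being hidden, here is the modern epsilon-delta
definition of a limit, written out in full (a standard formulation,
not tied to any particular textbook):

    \lim_{x \to a} f(x) = L \quad\text{means}\quad
    \forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x \;\,
    \big( 0 < |x - a| < \delta \,\Rightarrow\, |f(x) - L| < \varepsilon \big).

The compact notation on the left conceals three alternating
quantifiers; the infinitesimal approach conceals them instead inside
the construction of the infinitesimal numbers themselves.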

To a large extent, mathematics -- or any kind of abstract reasoning --
works by selectively suppressing information. We choose a notation or terminology
that hides the information we're not currently concerned with, and focuses our attention
on the aspects that we currently want to vary and study. The epsilon-delta approach
and the infinitesimal approach differ only slightly in how they carry out this suppression.

A college
calculus book based on the infinitesimal approach was published by
Keisler in 1986. However, it did not catch on. I suspect the reason it
didn't catch on was simply that the ideas in it were too unfamiliar to
most of the teachers of calculus. Actually, most of the unfamiliar
ideas were relegated to an appendix; the new material that was really
central to the book was quite small.

Yet another chapter is still unfolding in the
interplay between mathematics and astronomy: we are
working out the shape of the universe. To understand
that question, let us first consider the shape of the planet.
On its surface, the earth looks mostly flat, with a few local
variations such as mountains. But if you went off in one
direction, traveling in what seemed a straight line, sometimes
by foot and sometimes by boat, you'd eventually arrive
back where you started, because the earth is round.
Magellan's expedition confirmed this by sailing around the world,
and astronauts confirmed it with photographs in the
1960's. But
the radius of the earth is large (4000 miles), and so the
curvature of the two-dimensional surface is too slight to be
evident to a casual observer.

In an analogous fashion, our
entire universe, which we perceive as three-dimensional, may have
a slight curvature; this question was raised a couple of hundred
years ago when Gauss and Riemann came to understand non-Euclidean
geometries. If you take off in a rocketship and travel in what
seems a straight line, will you eventually return to where you
began? The curvature of the physical universe is too slight
to be detected by any instruments we have yet devised.
Astronomers hope to detect it, and deduce the shape of the
universe, with more powerful telescopes that
are being built even now.

Human understanding of the universe has gradually increased over the
centuries. One of the most dramatic events was in the late 19th
century, when Georg Cantor "tamed" infinity and took it away from
the theologians, making it a secular concept with its own
arithmetic. We may still have a use for theologians, since we do
not yet fully understand the human spirit; but infinity is
no longer a good metaphor for that which transcends our everyday
experience.

Cantor was studying the convergence of Fourier
series and was led to consider the relative sizes of certain
infinite subsets of the real line. Earlier mathematicians had been
bewildered by the fact that an infinite set could have "the same
number of elements" as some proper subset. For instance, there
is a one-to-one correspondence between the natural numbers

1, 2, 3, 4, 5, ...

and the even natural numbers

2, 4, 6, 8, 10, ...

But this did not stop Cantor. He said that two sets "have the
same cardinality" if there exists a one-to-one correspondence
between them; for instance, the two sets above have
the same cardinality.
He showed that it is possible to arrange the rational
numbers into a table (for simplicity, we'll consider just the
positive rational numbers):

1/1   1/2   1/3   1/4   ...

2/1   2/2   2/3   2/4   ...

3/1   3/2   3/3   3/4   ...

4/1   4/2   4/3   4/4   ...

...   ...   ...   ...   ...

Following along successive diagonals, we obtain a list:

1/1, 1/2, 2/1, 1/3, 2/2, 3/1, 1/4, 2/3, 3/2, 4/1, 1/5, ...

This shows that the set of all ordered pairs of positive integers
is
countable -- i.e., it can be arranged into a list; it has
the
same cardinality as the set of positive integers. Now,
run through
the list, crossing out any fraction that is a repetition of a
previous fraction
(e.g., 2/2 is a repetition of 1/1). This leaves a slightly
"shorter" (but still infinite) list

1/1, 1/2, 2/1, 1/3, 3/1, 1/4, 2/3, 3/2, 4/1, 1/5, ...

containing each positive rational number exactly once. Thus the
set of positive rational numbers is countable. A similar
argument with a slightly more complicated diagram shows that the
set of all rational numbers is also countable.
However, by a different argument
(not given here), Cantor showed that the
real numbers cannot be put into a list -- thus the
real numbers are uncountable. Cantor showed that there are
even bigger sets (e.g., the set of all subsets of the reals); in
fact, there are infinitely many different infinities.
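As a small programming illustration (my own sketch, not part of
Cantor's argument), the diagonal walk through the table is easy to
carry out mechanically. The following Python generator produces each
positive rational exactly once, skipping repetitions just as
described above:

    from fractions import Fraction
    from itertools import islice

    def positive_rationals():
        """Yield every positive rational exactly once, walking the
        successive diagonals of the table (on each diagonal, the sum
        numerator + denominator is constant)."""
        seen = set()
        total = 2                      # num + den along the current diagonal
        while True:
            for num in range(1, total):
                q = Fraction(num, total - num)
                if q not in seen:      # cross out repetitions, e.g. 2/2 = 1/1
                    seen.add(q)
                    yield q
            total += 1

    # First ten values: 1, 1/2, 2, 1/3, 3, 1/4, 2/3, 3/2, 4, 1/5
    print(list(islice(positive_rationals(), 10)))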

As proof techniques improved, gradually mathematics became more
rigorous, more reliable, more certain. Today our standards of rigor
are extremely high, and we perceive mathematics as a collection of
"immortal truths," arrived at by pure reason, not even dependent on
physical observations. We have developed a mathematical language
which permits us to formulate each step in our reasoning with complete
certainty; then the conclusion is certain as well.
However, it must be admitted that modern mathematics
has become detached from the physical world. As Einstein said,

As far as the laws of mathematics refer to reality, they are not
certain; and as far as they are certain, they do not refer to reality.

For instance, use a pencil to draw a line segment on a piece of
paper, perhaps an inch long. Label one end of it "0" and the
other end of it "1," and label a few more points in between. The
line segment represents the interval [0,1], which (at least, in
our minds) has uncountably many members. But in what sense does
that uncountable set exist? There are only finitely many
graphite molecules marking the paper, and there are only finitely
many (or perhaps countably many) atoms in the entire physical
universe in which we live. An uncountable set of points is easy
to imagine mathematically, but it does not exist anywhere in the
physical universe. Is it merely a figment of our imagination?

It may be our imagination, but "merely" is not the right word.
Our purely mental number system has proved useful for practical
purposes in the real world. It has provided our best explanation
so far for numerical quantities. That explanation has made
possible radio, television, and many other technological
achievements -- even a journey from the earth to the moon and
back again. Evidently we are doing something right; mathematics
cannot be dismissed as a mere dream.

The "Age of Enlightenment" may have reached its greatest heights
in the early 20th century, when Hilbert tried to put all of
mathematics on a firm and formal foundation. That age may have
ended in the 1930's, when Gödel showed that Hilbert's program
cannot be carried out; Gödel discovered that even the language of
mathematics has certain inherent limitations. Gödel proved that,
in a sense, some things cannot be proved. Even a mathematician
must accept some things on faith or learn to live with uncertainty.

Some of the ideas developed in this essay are based on the book
Mathematics: The Loss of Certainty, by Morris Kline. I
enjoyed reading that book very much, but I should mention that I
disagreed with its ending. Kline suggests that Gödel's
discovery has led to a general disillusionment with mathematics,
a disillusionment that has spread through our culture (just as
Newton's successes spread earlier). I disagree with Kline's
pessimism. Mathematics may have some limitations, but in our
human experience we seldom bump into those limitations. Gödel's
theorem in no way invalidates Newton, Cantor, or the moon trip.
Mathematics remains a miraculous device for seeing the world more
clearly.
