Analysis, a branch of mathematics that deals with continuous change and with certain general types of processes that have emerged from the study of continuous change, such as limits, differentiation, and integration. Since the discovery of the differential and integral calculus by Isaac Newton and Gottfried Wilhelm Leibniz at the end of the 17th century, analysis has grown into an enormous and central field of mathematical research, with applications throughout the sciences and in areas such as finance, economics, and sociology.

The historical origins of analysis can be found in attempts to calculate spatial quantities such as the length of a curved line or the area enclosed by a curve. These problems can be stated purely as questions of mathematical technique, but they have a far wider importance because they possess a broad variety of interpretations in the physical world. The area inside a curve, for instance, is of direct interest in land measurement: how many acres does an irregularly shaped plot of land contain? But the same technique also determines the mass of a uniform sheet of material bounded by some chosen curve or the quantity of paint needed to cover an irregularly shaped surface. Less obviously, these techniques can be used to find the total distance traveled by a vehicle moving at varying speeds, the depth at which a ship will float when placed in the sea, or the total fuel consumption of a rocket.

Similarly, the mathematical technique for finding a tangent line to a curve at a given point can also be used to calculate the steepness of a curved hill or the angle through which a moving boat must turn to avoid a collision. Less directly, it is related to the extremely important question of the calculation of instantaneous velocity or other instantaneous rates of change, such as the cooling of a warm object in a cold room or the propagation of a disease organism through a human population.

This article begins with a brief introduction to the historical background of analysis and to basic concepts such as number systems, functions, continuity, infinite series, and limits, all of which are necessary for an understanding of analysis. Following this introduction is a full technical review, from calculus to nonstandard analysis, and then the article concludes with a complete history.

Historical background

Bridging the gap between arithmetic and geometry

Mathematics divides phenomena into two broad classes, discrete and continuous, historically corresponding to the division between arithmetic and geometry. Discrete systems can be subdivided only so far, and they can be described in terms of whole numbers 0, 1, 2, 3, …. Continuous systems can be subdivided indefinitely, and their description requires the real numbers, numbers represented by decimal expansions such as 3.14159…, possibly going on forever. Understanding the true nature of such infinite decimals lies at the heart of analysis.

The distinction between discrete mathematics and continuous mathematics is a central issue for mathematical modeling, the art of representing features of the natural world in mathematical form. The universe does not contain or consist of actual mathematical objects, but many aspects of the universe closely resemble mathematical concepts. For example, the number two does not exist as a physical object, but it does describe an important feature of such things as human twins and binary stars. In a similar manner, the real numbers provide satisfactory models for a variety of phenomena, even though no physical quantity can be measured accurately to more than a dozen or so decimal places. It is not the values of infinitely many decimal places that apply to the real world but the deductive structures that they embody and enable.

Analysis came into being because many aspects of the natural world can profitably be considered as being continuous—at least, to an excellent degree of approximation. Again, this is a question of modeling, not of reality. Matter is not truly continuous; if matter is subdivided into sufficiently small pieces, then indivisible components, or atoms, will appear. But atoms are extremely small, and, for most applications, treating matter as though it were a continuum introduces negligible error while greatly simplifying the computations. For example, continuum modeling is standard engineering practice when studying the flow of fluids such as air or water, the bending of elastic materials, the distribution or flow of electric current, and the flow of heat.

Two major steps led to the creation of analysis. The first was the discovery of the surprising relationship, known as the fundamental theorem of calculus, between spatial problems involving the calculation of some total size or value, such as length, area, or volume (integration), and problems involving rates of change, such as slopes of tangents and velocities (differentiation). Credit for the independent discovery, about 1670, of the fundamental theorem of calculus together with the invention of techniques to apply this theorem goes jointly to Gottfried Wilhelm Leibniz and Isaac Newton.

While the utility of calculus in explaining physical phenomena was immediately apparent, its use of infinity in calculations (through the decomposition of curves, geometric bodies, and physical motions into infinitely many small parts) generated widespread unease. In particular, the Anglican bishop George Berkeley published a famous pamphlet, The Analyst; or, A Discourse Addressed to an Infidel Mathematician (1734), pointing out that calculus—at least, as presented by Newton and Leibniz—possessed serious logical flaws. Analysis grew out of the resulting painstakingly close examination of previously loosely defined concepts such as function and limit.

Newton’s and Leibniz’s approach to calculus had been primarily geometric, involving ratios with “almost zero” divisors—Newton’s “fluxions” and Leibniz’s “infinitesimals.” During the 18th century calculus became increasingly algebraic, as mathematicians—most notably the Swiss Leonhard Euler and the Italian-French Joseph-Louis Lagrange—began to generalize the concepts of continuity and limits from geometric curves and bodies to more abstract algebraic functions and began to extend these ideas to complex numbers. Although these developments were not entirely satisfactory from a foundational standpoint, they were fundamental to the eventual refinement of a rigorous basis for calculus by the Frenchman Augustin-Louis Cauchy, the Bohemian Bernhard Bolzano, and above all the German Karl Weierstrass in the 19th century.

Technical preliminaries

Numbers and functions

Number systems

Throughout this article are references to a variety of number systems—that is, collections of mathematical objects (numbers) that can be operated on by some or all of the standard operations of arithmetic: addition, multiplication, subtraction, and division. Such systems have a variety of technical names (e.g., group, ring, field) that are not employed here. This article shall, however, indicate which operations are applicable in the main systems of interest. These main number systems are:

a. The natural numbers N. These numbers are the positive (and zero) whole numbers 0, 1, 2, 3, 4, 5, …. If two such numbers are added or multiplied, the result is again a natural number.

b. The integers Z. These numbers are the positive and negative whole numbers …, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, …. If two such numbers are added, subtracted, or multiplied, the result is again an integer.

c. The rational numbers Q. These numbers are the positive and negative fractions p/q where p and q are integers and q ≠ 0. If two such numbers are added, subtracted, multiplied, or divided (except by 0), the result is again a rational number.

d. The real numbers R. These numbers are the positive and negative infinite decimals (including terminating decimals that can be considered as having an infinite sequence of zeros on the end). If two such numbers are added, subtracted, multiplied, or divided (except by 0), the result is again a real number.

e. The complex numbers C. These numbers are of the form x + iy where x and y are real numbers and i = √(−1). (For further explanation, see the section Complex analysis.) If two such numbers are added, subtracted, multiplied, or divided (except by 0), the result is again a complex number.
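The closure properties listed above can be checked mechanically. As a minimal illustrative sketch (not part of the article’s own apparatus), Python’s standard fractions module models exact rational arithmetic:

```python
from fractions import Fraction

# Rational numbers p/q with q != 0; Fraction keeps them in lowest terms.
p = Fraction(3, 4)
q = Fraction(-2, 5)

# Q is closed under addition, subtraction, multiplication, and
# division (except by 0): each result is again a Fraction.
results = [p + q, p - q, p * q, p / q]
assert all(isinstance(r, Fraction) for r in results)

# By contrast, the integers are not closed under division:
# 3 divided by 2 leaves Z, since 3/2 has denominator 2.
assert Fraction(3, 2).denominator == 2
```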

Functions

In simple terms, a function f is a mathematical rule that assigns to a number x (in some number system and possibly with certain limitations on its value) another number f(x). For example, the function “square” assigns to each number x its square x². Note that it is the general rule, not specific values, that constitutes the function.

The common functions that arise in analysis are usually definable by formulas, such as f(x) = x². They include the trigonometric functions sin (x), cos (x), tan (x), and so on; the logarithmic function log (x); the exponential function exp (x) or eˣ (where e = 2.71828… is a special constant called the base of natural logarithms); and the square root function √x. However, functions need not be defined by single formulas (indeed, by any formula at all). For example, the absolute value function |x| is defined to be x when x ≥ 0 but −x when x < 0 (where ≥ indicates greater than or equal to and < indicates less than).
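For concreteness, here is a minimal Python sketch of the point just made: a function is a rule, which may but need not be given by a single formula. The names square and absolute_value are chosen purely for illustration.

```python
def square(x):
    # Defined by a single formula: assigns to each x its square.
    return x * x

def absolute_value(x):
    # Defined piecewise, not by one formula:
    # |x| = x when x >= 0, and -x when x < 0.
    if x >= 0:
        return x
    return -x
```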

The problem of continuity

The logical difficulties involved in setting up calculus on a sound basis are all related to one central problem, the notion of continuity. This in turn leads to questions about the meaning of quantities that become infinitely large or infinitely small—concepts riddled with logical pitfalls. For example, a circle of radius r has circumference 2πr and area πr², where π is the famous constant 3.14159…. Establishing these two properties is not entirely straightforward, although an adequate approach was developed by the geometers of ancient Greece, especially Eudoxus and Archimedes. It is harder than one might expect to show that the circumference of a circle is proportional to its radius and that its area is proportional to the square of its radius. The really difficult problem, though, is to show that the constant of proportionality for the circumference is precisely twice the constant of proportionality for the area—that is, to show that the constant now called π really is the same in both formulas. This boils down to proving a theorem (first proved by Archimedes) that does not mention π explicitly at all: the area of a circle is the same as that of a rectangle, one of whose sides is equal to the circle’s radius and the other to half the circle’s circumference.

Approximations in geometry

A simple geometric argument shows that such an equality must hold to a high degree of approximation. The idea is to slice the circle like a pie, into a large number of equal pieces, and to reassemble the pieces to form an approximate rectangle (see figure). Then the area of the “rectangle” is closely approximated by its height, which equals the circle’s radius, multiplied by the length of one set of curved sides—which together form one-half of the circle’s circumference. Unfortunately, because of the approximations involved, this argument does not prove the theorem about the area of a circle. Further thought suggests that as the slices get very thin, the error in the approximation becomes very small. But that still does not prove the theorem, for an error, however tiny, remains an error. If it made sense to talk of the slices being infinitesimally thin, however, then the error would disappear altogether, or at least it would become infinitesimal.

Actually, there exist subtle problems with such a construction. It might justifiably be argued that if the slices are infinitesimally thin, then each has zero area; hence, joining them together produces a rectangle with zero total area since 0 + 0 + 0 + ⋯ = 0. Indeed, the very idea of an infinitesimal quantity is paradoxical because the only number that is smaller than every positive number is 0 itself.

The same problem shows up in many different guises. When calculating the length of the circumference of a circle, it is attractive to think of the circle as a regular polygon with infinitely many straight sides, each infinitesimally long. (Indeed, a circle is the limiting case for a regular polygon as the number of its sides increases.) But while this picture makes sense for some purposes—illustrating that the circumference is proportional to the radius—for others it makes no sense at all. For example, the “sides” of the infinitely many-sided polygon must have length 0, which implies that the circumference is 0 + 0 + 0 + ⋯ = 0, clearly nonsense.

Infinite series

Similar paradoxes occur in the manipulation of infinite series, such as 1/2 + 1/4 + 1/8 + ⋯ (1), continuing forever. This particular series is relatively harmless, and its value is precisely 1. To see why this should be so, consider the partial sums formed by stopping after a finite number of terms. The more terms, the closer the partial sum is to 1. It can be made as close to 1 as desired by including enough terms. Moreover, 1 is the only number for which the above statements are true. It therefore makes sense to define the infinite sum to be exactly 1. The figure illustrates this geometric series graphically by repeatedly bisecting a unit square. (Series whose successive terms differ by a common ratio, in this example by 1/2, are known as geometric series.) Not every series is so well behaved, however; consider the series 1 − 1 + 1 − 1 + 1 − 1 + ⋯ (2).
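The behaviour of the partial sums of series (1) can be observed numerically. A small sketch (floating-point arithmetic, so the results are approximations):

```python
def geometric_partial_sum(n):
    # Partial sum of the first n terms of 1/2 + 1/4 + 1/8 + ...
    return sum(1 / 2**k for k in range(1, n + 1))

# The partial sum after n terms is exactly 1 - 1/2**n, so it can be
# made as close to 1 as desired by taking n large enough.
```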

The difference between series (1) and (2) is clear from their partial sums. The partial sums of (1) get closer and closer to a single fixed value—namely, 1. The partial sums of (2) alternate between 0 and 1, so that the series never settles down. A series that does settle down to some definite value, as more and more terms are added, is said to converge, and the value to which it converges is known as the limit of the partial sums; all other series are said to diverge.

The limit of a sequence

All the great mathematicians who contributed to the development of calculus had an intuitive concept of limits, but it was only with the work of the German mathematician Karl Weierstrass that a completely satisfactory formal definition of the limit of a sequence was obtained.

Consider a sequence (an) of real numbers, by which is meant an infinite list a0, a1, a2, …. It is said that an converges to (or approaches) the limit a as n tends to infinity, if the following mathematical statement holds true: For every ε > 0, there exists a whole number N such that |an − a| < ε for all n > N. Intuitively, this statement says that, for any chosen degree of approximation (ε), there is some point in the sequence (N) such that, from that point onward (n > N), every number in the sequence (an) approximates a within an error less than the chosen amount (|an − a| < ε). Stated less formally, when n becomes large enough, an can be made as close to a as desired.

For example, consider the sequence in which an = 1/(n + 1), that is, the sequence 1, 1/2, 1/3, 1/4, 1/5, …, going on forever. Every number in the sequence is greater than zero, but, the farther along the sequence goes, the closer the numbers get to zero. For example, all terms from the 10th onward are less than or equal to 0.1, all terms from the 100th onward are less than or equal to 0.01, and so on. Terms smaller than 0.000000001, for instance, are found from the 1,000,000,000th term onward. In Weierstrass’s terminology, this sequence converges to its limit 0 as n tends to infinity. The difference |an − 0| can be made smaller than any ε by choosing n sufficiently large. In fact, n > 1/ε suffices. So, in Weierstrass’s formal definition, N is taken to be the smallest integer > 1/ε.
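Weierstrass’s definition can be spot-checked for this sequence. The sketch below (function names are illustrative) computes the N of the definition and then verifies the condition over a finite range of indices, which is a numerical check rather than a proof:

```python
import math

def a(n):
    # The sequence a_n = 1/(n + 1): 1, 1/2, 1/3, 1/4, ...
    return 1 / (n + 1)

def smallest_N(eps):
    # The smallest integer N with N > 1/eps; by the argument in the
    # text, n > N then guarantees |a_n - 0| < eps.
    return math.floor(1 / eps) + 1
```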

This example brings out several key features of Weierstrass’s idea. First, it does not involve any mystical notion of infinitesimals; all quantities involved are ordinary real numbers. Second, it is precise; if a sequence possesses a limit, then there is exactly one real number that satisfies the Weierstrass definition. Finally, although the numbers in the sequence tend to the limit 0, they need not actually reach that value.

Continuity of functions

The same basic approach makes it possible to formalize the notion of continuity of a function. Intuitively, a function f(t) approaches a limit L as t approaches a value p if, whatever size of error can be tolerated, f(t) differs from L by less than that error for all t sufficiently close to p. But what exactly is meant by phrases such as “error,” “can be tolerated,” and “sufficiently close”?

Just as for limits of sequences, the formalization of these ideas is achieved by assigning symbols to “tolerable error” (ε) and to “sufficiently close” (δ). Then the definition becomes: A function f(t) approaches a limit L as t approaches a value p if for all ε > 0 there exists δ > 0 such that |f(t) − L| < ε whenever |t − p| < δ. (Note carefully that first the size of the tolerable error must be decided upon; only then can it be determined what it means to be “sufficiently close.”)
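This ε–δ condition can likewise be tested numerically at finitely many sample points. A sketch (a finite spot check on sampled values, not a proof; the function name is illustrative):

```python
def passes_epsilon_delta_check(f, p, L, eps, delta, samples=1000):
    # Check |f(t) - L| < eps at sample points t with 0 < |t - p| < delta,
    # taken symmetrically on both sides of p.
    for k in range(1, samples + 1):
        h = delta * k / (samples + 1)   # 0 < h < delta
        if abs(f(p + h) - L) >= eps or abs(f(p - h) - L) >= eps:
            return False
    return True
```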

Having defined the notion of limit in this context, it is straightforward to define continuity of a function. Continuous functions preserve limits; that is, a function f is continuous at a point p if the limit of f(t) as t approaches p is equal to f(p). And f is continuous if it is continuous at every p for which f(p) is defined. Intuitively, continuity means that small changes in t produce small changes in f(t)—there are no sudden jumps.

Properties of the real numbers

Earlier, the real numbers were described as infinite decimals, although such a description makes no logical sense without the formal concept of a limit. This is because an infinite decimal expansion such as 3.14159… (the value of the constant π) actually corresponds to the sum of an infinite series 3 + 1/10 + 4/100 + 1/1,000 + 5/10,000 + 9/100,000 + ⋯, and the concept of limit is required to give such a sum meaning.
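The finite truncations of this series are simply the ordinary decimal approximations 3, 3.1, 3.14, …. A brief sketch of the correspondence (floating-point, so approximate; the function name is illustrative):

```python
def decimal_partial_sum(digits):
    # Partial sum 3 + 1/10 + 4/100 + ... built from decimal digits;
    # digits[0] is the integer part, the rest are decimal places.
    total = float(digits[0])
    for k, d in enumerate(digits[1:], start=1):
        total += d / 10**k
    return total

# Successive partial sums 3, 3.1, 3.14, 3.141, ... approach pi.
```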

It turns out that the real numbers (unlike, say, the rational numbers) have important properties that correspond to intuitive notions of continuity. For example, consider the function x² − 2. This function takes the value −1 when x = 1 and the value +2 when x = 2. Moreover, it varies continuously with x. It seems intuitively plausible that, if a continuous function is negative at one value of x (here at x = 1) and positive at another value of x (here at x = 2), then it must equal zero for some value of x that lies between these values (here for some value between 1 and 2). This expectation is correct if x is a real number: the expression is zero when x = √2 = 1.41421…. However, it is false if x is restricted to rational values because there is no rational number x for which x² = 2. (The fact that √2 is irrational has been known since the time of the ancient Greeks. See Sidebar: Incommensurables.)
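The sign-change argument is constructive: repeatedly halving the interval and keeping the half on which the function changes sign traps the zero. A minimal bisection sketch (the function name is illustrative), applied to x² − 2 on [1, 2]:

```python
def bisect_root(f, lo, hi, tol=1e-12):
    # Assumes f is continuous and f(lo), f(hi) have opposite signs;
    # halve the interval, keeping the half where the sign change occurs.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid      # sign change lies in [mid, hi]
        else:
            hi = mid      # sign change lies in [lo, mid]
    return (lo + hi) / 2

root = bisect_root(lambda x: x * x - 2, 1.0, 2.0)   # approximates sqrt(2)
```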

In effect, there are gaps in the system of rational numbers. By exploiting those gaps, continuously varying quantities can change sign without passing through zero. The real numbers fill in the gaps by providing additional numbers that are the limits of sequences of approximating rational numbers. Formally, this feature of the real numbers is captured by the concept of completeness.

One awkward aspect of the concept of the limit of a sequence (an) is that it can sometimes be problematic to find what the limit a actually is. However, there is a closely related concept, attributable to the French mathematician Augustin-Louis Cauchy, in which the limit need not be specified. The intuitive idea is simple. Suppose that a sequence (an) converges to some unknown limit a. Given two sufficiently large values of n, say r and s, then both ar and as are very close to a, which in particular means that they are very close to each other. The sequence (an) is said to be a Cauchy sequence if it behaves in this manner. Specifically, (an) is Cauchy if, for every ε > 0, there exists some N such that, whenever r, s > N, |ar − as| < ε. Convergent sequences are always Cauchy, but is every Cauchy sequence convergent? The answer is yes for sequences of real numbers but no for sequences of rational numbers (in the sense that they may not have a rational limit).
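As with the limit definition, the Cauchy condition can be spot-checked over a finite range of indices; this is a numerical illustration, not a proof (the function name is chosen for the sketch):

```python
def looks_cauchy(a, N, eps, max_index):
    # Check |a(r) - a(s)| < eps for all N < r, s <= max_index; over a
    # finite range this is equivalent to max(terms) - min(terms) < eps.
    terms = [a(n) for n in range(N + 1, max_index + 1)]
    return max(terms) - min(terms) < eps

# The convergent sequence 1/(n + 1) passes; the alternating sequence
# 0, 1, 0, 1, ... (the partial sums of series (2)) does not.
```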

A number system is said to be complete if every Cauchy sequence converges. The real numbers are complete; the rational numbers are not. Completeness is one of the key features of the real number system, and it is a major reason why analysis is often carried out within that system.

The real numbers have several other features that are important for analysis. They satisfy various ordering properties associated with the relation less than (<). The simplest of these properties for real numbers x, y, and z are:

a. Trichotomy law. One and only one of the statements x < y, x = y, and x > y is true.

b. Transitive law. If x < y and y < z, then x < z.

c. If x < y, then x + z < y + z for all z.

d. If x < y and z > 0, then xz < yz.

More subtly, the real number system is Archimedean. This means that, if x and y are real numbers and both x, y > 0, then x + x + ⋯ + x > y for some finite sum of x’s. The Archimedean property indicates that the real numbers contain no infinitesimals. Arithmetic, completeness, ordering, and the Archimedean property completely characterize the real number system.
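A sketch of the Archimedean property in action: repeated addition of any positive x eventually exceeds any y (the function name is illustrative).

```python
def archimedean_count(x, y):
    # Least number of copies of x whose sum x + x + ... + x exceeds y.
    # Guaranteed to terminate for real x, y > 0, since the reals
    # contain no infinitesimals.
    total, n = 0.0, 0
    while total <= y:
        total += x
        n += 1
    return n
```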