During the second half of the 19th century, various important mathematical advances led to the study of sets in which any two elements can be added or multiplied together to give a third element of the same set. The elements of the sets concerned could be numbers, functions, or some other objects. As the techniques involved were similar, it seemed reasonable to consider the sets, rather than their elements, to be the objects of primary concern. A definitive treatise, Moderne Algebra, was written in 1930 by the Dutch mathematician Bartel van der Waerden, and the subject has had a deep effect on almost every branch of mathematics.

Basic algebraic structures

In itself a set is not very useful, being little more than a well-defined collection of mathematical objects. However, when a set has one or more operations (such as addition and multiplication) defined for its elements, it becomes very useful. If the operations satisfy familiar arithmetic rules (such as associativity, commutativity, and distributivity) the set will have a particularly “rich” algebraic structure. Sets with the richest algebraic structure are known as fields. Familiar examples of fields are the rational numbers (fractions a/b where a and b are positive or negative whole numbers), the real numbers (rational and irrational numbers), and the complex numbers (numbers of the form a + bi where a and b are real numbers and i² = −1). Each of these is important enough to warrant its own special symbol: Q for the rationals, R for the reals, and C for the complex numbers. The term field in its algebraic sense is quite different from its use in other contexts, such as vector fields in mathematics or magnetic fields in physics. Other languages avoid this conflict in terminology; for example, a field in the algebraic sense is called a corps in French and a Körper in German, both words meaning “body.”

In addition to the fields mentioned above, which all have infinitely many elements, there exist fields having only a finite number of elements (always some power of a prime number), and these are of great importance, particularly for discrete mathematics. In fact, finite fields motivated the early development of abstract algebra. The simplest finite field has only two elements, 0 and 1, where 1 + 1 = 0. This field has applications to coding theory and data communication.
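The arithmetic of this two-element field can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original text: addition modulo 2 is the bitwise XOR operator and multiplication is the bitwise AND operator, which is exactly the arithmetic used for parity checks in coding theory.

```python
# Arithmetic in the two-element field {0, 1}, where 1 + 1 = 0.
# Addition is XOR and multiplication is AND.

def gf2_add(a, b):
    return a ^ b  # addition modulo 2

def gf2_mul(a, b):
    return a & b  # multiplication modulo 2

print(gf2_add(1, 1))  # 0, since 1 + 1 = 0 in this field
print(gf2_mul(1, 1))  # 1

# A simple parity check: sum the bits of a codeword in this field.
bits = [1, 0, 1, 1]
parity = 0
for bit in bits:
    parity = gf2_add(parity, bit)
print(parity)  # 1, the parity bit for the codeword
```

Because every element is its own additive inverse (1 + 1 = 0), subtraction and addition coincide in this field, a fact exploited throughout error-correcting codes.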

The basic rules, or axioms, for addition and multiplication are shown in the table, and a set that satisfies all 10 of these rules is called a field. A set satisfying only axioms 1–7 is called a ring, and if it also satisfies axiom 9 it is called a ring with unity. A ring satisfying the commutative law of multiplication (axiom 8) is known as a commutative ring. When axioms 1–9 hold and there are no proper divisors of zero (i.e., whenever ab = 0 either a = 0 or b = 0), a set is called an integral domain. For example, the set of integers {…, −2, −1, 0, 1, 2, …} is a commutative ring with unity, but it is not a field, because axiom 10 fails. When only axiom 8 fails, a set is known as a division ring or skew field.

Field axioms

axiom 1

Closure: the combination (hereafter indicated by addition or multiplication) of any two elements in the set produces an element in the set.

axiom 2

Addition is commutative: a + b = b + a for any elements in the set.

axiom 3

Addition is associative: a + (b + c) = (a + b) + c for any elements in the set.

axiom 4

Additive identity: there exists an element 0 such that a + 0 = a for every element in the set.

axiom 5

Additive inverse: for each element a in the set, there exists an element −a such that a + (−a) = 0.

axiom 6

Multiplication is associative: a(bc) = (ab)c for any elements in the set.

axiom 7

Distributive law: a(b + c) = ab + ac for any elements in the set.

axiom 8

Multiplication is commutative: ab = ba for any elements in the set.

axiom 9

Multiplicative identity: there exists an element 1 (not equal to 0) such that a1 = a for every element in the set.

axiom 10

Multiplicative inverse: for each element a ≠ 0 in the set, there exists an element a⁻¹ such that aa⁻¹ = 1.
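Axioms like these can be verified mechanically for a small finite structure. The following sketch (an illustration, not part of the original text) checks a few of the axioms by brute force for the integers modulo 5, which form a finite field with five elements, and notes where the integers themselves fall short.

```python
# Brute-force check of selected field axioms for the integers modulo 5,
# a finite field with five elements {0, 1, 2, 3, 4}.

p = 5
elements = range(p)

# Axiom 2: addition is commutative.
assert all((a + b) % p == (b + a) % p for a in elements for b in elements)

# Axiom 5: every element has an additive inverse.
assert all(any((a + b) % p == 0 for b in elements) for a in elements)

# Axiom 8: multiplication is commutative.
assert all((a * b) % p == (b * a) % p for a in elements for b in elements)

# Axiom 10: every nonzero element has a multiplicative inverse.
assert all(any((a * b) % p == 1 for b in elements)
           for a in elements if a != 0)

# By contrast, in the ring of integers axiom 10 fails: no integer b
# satisfies 2 * b = 1, so the integers form a ring but not a field.
print("all checks passed")
```

The same loop with p not a prime (say p = 6) would fail the axiom 10 check, since, for example, 2 has no multiplicative inverse modulo 6; this is why finite fields always have a prime power number of elements.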

The discovery of rings having noncommutative multiplication was an important stimulus in the development of modern algebra. For example, the set of n-by-n matrices is a noncommutative ring, but since there are nonzero matrices without inverses, it is not a division ring. The first example of a noncommutative division ring was the quaternions. These are numbers of the form a + bi + cj + dk, where a, b, c, and d are real numbers and 1, i, j, and k are unit vectors that define a four-dimensional space. Quaternions were invented in 1843 by the Irish mathematician William Rowan Hamilton to extend complex numbers from the two-dimensional plane to three dimensions in order to describe physical processes mathematically. Hamilton defined the following rules for quaternion multiplication: i² = j² = k² = −1, ij = k = −ji, jk = i = −kj, and ki = j = −ik. After struggling for some years to discover consistent rules for working with his higher-dimensional complex numbers, inspiration struck while he was strolling in his hometown of Dublin, and he stopped to inscribe these formulas on a nearby bridge. In working with his quaternions, Hamilton laid the foundations for the algebra of matrices and led the way to more abstract notions of numbers and operations.
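Hamilton's multiplication rules determine the product of any two quaternions. As an illustrative sketch (not part of the original text), the rules can be expanded into an explicit formula on 4-tuples (a, b, c, d) and checked directly, including the failure of commutativity:

```python
# Quaternion multiplication from Hamilton's rules:
# i^2 = j^2 = k^2 = -1, ij = k = -ji, jk = i = -kj, ki = j = -ik.
# A quaternion a + bi + cj + dk is stored as the tuple (a, b, c, d).

def qmul(q, r):
    a1, b1, c1, d1 = q
    a2, b2, c2, d2 = r
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,   # real part
            a1*b2 + b1*a2 + c1*d2 - d1*c2,   # i component
            a1*c2 - b1*d2 + c1*a2 + d1*b2,   # j component
            a1*d2 + b1*c2 - c1*b2 + d1*a2)   # k component

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

print(qmul(i, i))  # (-1, 0, 0, 0): i^2 = -1
print(qmul(i, j))  # (0, 0, 0, 1):  ij = k
print(qmul(j, i))  # (0, 0, 0, -1): ji = -k, so multiplication is not commutative
```

Every nonzero quaternion has a multiplicative inverse (its conjugate divided by its squared length), which is what makes the quaternions a division ring rather than merely a noncommutative ring.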

Rings

In another direction, important progress in number theory by German mathematicians such as Ernst Kummer, Richard Dedekind, and Leopold Kronecker used rings of algebraic integers. (An algebraic integer is a complex number satisfying an algebraic equation of the form xⁿ + a₁xⁿ⁻¹ + … + aₙ = 0 where the coefficients a₁, …, aₙ are integers.) Their work introduced the important concept of an ideal in such rings, so called because it could be represented by “ideal elements” outside the ring concerned. In the late 19th century the German mathematician David Hilbert used ideals to solve an old problem about polynomials (algebraic expressions using many variables x₁, x₂, x₃, …). The problem was to take a finite number of variables and decide which ideals could be generated by at most finitely many polynomials. Hilbert’s method solved the problem and brought an end to further investigation by showing that they all had this property. His abstract “hands off” approach led the German mathematician Paul Gordan to exclaim “Das ist nicht Mathematik, das ist Theologie!” (“That is not mathematics, that is theology!”). The power of modern algebra had arrived.

Rings can arise naturally in solving mathematical problems, as shown in the following example: Which whole numbers can be written as the sum of two squares? In other words, when can a whole number n be written as a² + b²? To solve this problem, it is useful to factor n into prime factors, and it is also useful to factor a² + b² as (a + bi)(a − bi), where i² = −1. The question can then be rephrased in terms of numbers a + bi where a and b are integers. This set of numbers forms a ring, and, by considering factorization in this ring, the original problem can be solved. Rings of this sort are very useful in number theory.
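The question of which whole numbers are sums of two squares can be explored directly by brute force. The following sketch (an illustration, not part of the original text) tests each candidate a and checks whether the remainder n − a² is itself a perfect square:

```python
# Which whole numbers n can be written as a^2 + b^2?
# Equivalently, which n factor as (a + bi)(a - bi) in the ring
# of numbers a + bi with a and b integers (the Gaussian integers)?

from math import isqrt

def is_sum_of_two_squares(n):
    # Try every a with a^2 <= n and check whether n - a^2 is a square.
    return any(isqrt(n - a*a)**2 == n - a*a for a in range(isqrt(n) + 1))

print([n for n in range(1, 21) if is_sum_of_two_squares(n)])
# -> [1, 2, 4, 5, 8, 9, 10, 13, 16, 17, 18, 20]
```

Factorization in the ring of Gaussian integers yields the classical answer behind this pattern: n is a sum of two squares exactly when every prime factor of n of the form 4k + 3 occurs to an even power.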

Rings are used extensively in algebraic geometry. Consider a curve in the plane given by an equation in two variables such as y² = x³ + 1. The curve, shown in the accompanying figure (“A simple algebraic curve”), consists of all points (x, y) that satisfy the equation. For example, (2, 3) and (−1, 0) are points on the curve. Every algebraic function in two variables assigns a value to every point of the curve. For example, xy + 2x assigns the value 10 to the point (2, 3) and −2 to the point (−1, 0). Such functions can be added and multiplied together, and they form a ring that can be used to study the original curve. Functions such as y² and x³ + 1 that agree with each other at every point of the curve are treated as the same function, and this allows the curve to be recovered from the ring. Geometric problems can therefore be transformed into algebraic problems, solved using techniques from modern algebra, and then transformed back into geometric results.
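The evaluations described above can be checked in a few lines. This sketch (an illustration, not part of the original text) verifies that the two sample points lie on the curve, evaluates the function xy + 2x at them, and confirms that y² and x³ + 1 agree at every point of the curve by construction:

```python
# The curve y^2 = x^3 + 1 and functions evaluated on its points.

points = [(2, 3), (-1, 0)]

# Both sample points satisfy the curve's equation.
for (x, y) in points:
    assert y**2 == x**3 + 1

# The function xy + 2x from the text, evaluated at each point.
f = lambda x, y: x*y + 2*x
print(f(2, 3))   # 10
print(f(-1, 0))  # -2

# y^2 and x^3 + 1 agree at every point of the curve, so in the ring of
# functions on the curve they represent the same element.
g = lambda x, y: y**2
h = lambda x, y: x**3 + 1
assert all(g(x, y) == h(x, y) for (x, y) in points)
```

Identifying functions that agree on the curve amounts to working modulo the polynomial y² − x³ − 1, which is how the ring encodes the geometry of the curve.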

The development of these methods for the study of algebraic geometry was one of the major advances in mathematics during the 20th century. Pioneering work in this direction was done in France by the mathematicians André Weil in the 1950s and Alexandre Grothendieck in the 1960s.

In addition to developments in number theory and algebraic geometry, modern algebra has important applications to symmetry by means of group theory. The word group often refers to a group of operations, possibly preserving the symmetry of some object or an arrangement of like objects. In the latter case the operations are called permutations, and one talks of a group of permutations, or simply a permutation group. If α and β are operations, their composite (α followed by β) is usually written αβ, and their composite in the opposite order (β followed by α) is written βα. In general, αβ and βα are not equal. A group can also be defined axiomatically as a set with multiplication that satisfies the axioms for closure, associativity, identity element, and inverses (axioms 1, 6, 9, and 10). In the special case where αβ and βα are equal for all α and β, the group is called commutative, or Abelian; for such Abelian groups, operations are sometimes written α + β instead of αβ, using addition in place of multiplication.
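The failure of commutativity for composed operations is easy to see with small permutations. The following sketch (an illustration, not part of the original text) represents a permutation of {0, 1, 2} as a tuple p where p[x] is the image of x, and composes two permutations in both orders:

```python
# Permutations of {0, 1, 2} as tuples: p[x] is the image of x.
# compose(alpha, beta) is "alpha followed by beta", written αβ in the text.

def compose(alpha, beta):
    return tuple(beta[alpha[x]] for x in range(len(alpha)))

alpha = (1, 0, 2)  # swap 0 and 1
beta  = (0, 2, 1)  # swap 1 and 2

print(compose(alpha, beta))  # (2, 0, 1)
print(compose(beta, alpha))  # (1, 2, 0), so αβ and βα differ in general
```

These six permutations of three objects form the smallest non-Abelian group; for permutations of only one or two objects, αβ and βα always agree and the group is commutative.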

The first application of group theory was by the French mathematician Évariste Galois (1811–32) to settle an old problem concerning algebraic equations. The question was to decide whether a given equation could be solved using radicals (meaning square roots, cube roots, and so on, together with the usual operations of arithmetic). By using the group of all “admissible” permutations of the solutions, now known as the Galois group of the equation, Galois was able to determine whether or not the solutions could be expressed in terms of radicals. His was the first important use of groups, and he was the first to use the term in its modern technical sense. It was many years before his work was fully understood, in part because of its highly innovative character and in part because he was not around to explain his ideas—at the age of 20 he was mortally wounded in a duel. The subject is now known as Galois theory.

Group theory developed first in France and then in other European countries during the second half of the 19th century. One early and essential idea was that many groups, and in particular all finite groups, could be decomposed into simpler groups in an essentially unique way. These simpler groups could not be decomposed further, and so they were called “simple,” although their lack of further decomposition often makes them rather complex. This is rather like decomposing a whole number into a product of prime numbers, or a molecule into atoms.

In 1963 a landmark paper by the American mathematicians Walter Feit and John Thompson showed that if a finite simple group is not merely the group of rotations of a regular polygon, then it must have an even number of elements. This result was immensely important because it showed that such groups had to have some elements x such that x² = 1. Using such elements enabled mathematicians to get a handle on the structure of the whole group. The paper led to an ambitious program for finding all finite simple groups that was completed in the early 1980s. It involved the discovery of several new simple groups, one of which, the “Monster,” cannot operate in fewer than 196,883 dimensions. The Monster still stands as a challenge today because of its intriguing connections with other parts of mathematics.