I often hear the advice, "Read the masters" (i.e., read old, classic texts by great mathematicians). But frankly, I have hardly ever followed it. What I am wondering is, is this a principle that people give lip service to because it sounds good, but which is honored in the breach more than in the observance? If not, which masterworks have you found to be most enlightening?

To keep the question focused, let me lay down some ground rules.

List only papers/books from the 19th century or earlier. I recognize that this is an arbitrary cutoff but I want to draw a line somewhere.

It must be something that you personally have read in its entirety (or almost in its entirety). I'm not really interested in secondhand evidence ("So-and-so says that X is a must-read").

You must have acquired important mathematical insights (not just historical insights) from the paper/book that you feel that you would never have acquired had you restricted your reading to 20th-century and 21st-century literature. It's not enough, for my purposes, that you found the paper/book "interesting" but not really essential. If possible, briefly describe these insights in your response.

[Edit: In response to a comment that suggested that I have set the bar impossibly high, let me violate one of my own ground rules and point to this discussion on the $n$-Category Cafe that gives some secondhand examples. That discussion should also help to clarify what I am asking for more examples of.]

Gauss' Disquisitiones proves quadratic reciprocity by induction on primes. So it is possible to prove theorems about primes by induction, which seems very counterintuitive at first. Such an inductive technique was recently used to great effect in the Khare-Wintenberger proof of Serre's conjecture on the modularity of 2-dimensional mod $p$ Galois representations.
–
Boyarsky Jun 15 '10 at 16:17

I'm a big advocate of "reading the masters", but I usually interpret this as meaning reading papers from, say, 1900 to 1950. But this is largely because of my research interests (i.e., geometric topology -- I've learned a lot by reading Dehn, Nielsen, Alexander, Reidemeister, ...). If I were more interested in things like number theory, then it would make more sense to go back and read pre-1900 sources.
–
Andy Putman Jun 15 '10 at 16:38

I don't agree with your interpretation of the word "master". If, say, I were interested in learning Morse theory, then surely reading Bott (1923-2005) would qualify as "reading a master".
–
Faisal Jun 15 '10 at 16:48

Your rules 1 and 3 set the bar almost impossibly high: valuable mathematical insights contained in a paper at least 110 years old have in all likelihood permeated the mathematical community by now, so requiring that you would never have acquired them by reading more recent sources seems almost contradictory. Anyway, I have always had an even broader understanding of the saying than A. Putman: I understand it as "read the original research material, not the derivative works". And I think this is great advice!
–
Olivier Jun 15 '10 at 17:06

When I was a young student, I read the advice of Abel (in a secondhand source) about reading the masters and not the pupils, and I applied it. However, I did not take this to mean "read the ancients". Rather, I think it applies generally: try to read the original papers in which insights are developed and new points of view are presented, rather than simply reading textbook accounts or later expository presentations. Interpreted this way, I have certainly followed this maxim. However, the masters were all 20th century, with one or two exceptions.
–
Emerton Jun 15 '10 at 23:22

19 Answers

I agree 100% with Igor and Andrew L. on the benefit of reading the creator's version of material that is also available from later expositors. I have gained mathematical insights from reading Euclid, Archimedes, Riemann, Gauss, Hurwitz, and Wirtinger, as well as moderns like Zariski, on topics I already thought I understood.

Just Euclid's use of the word "measures" for "divides" finally made clear to me the elementary argument that the largest number dividing two integers is also the smallest positive length one can measure using both of them. This is clear when thinking of (commensurable) measuring sticks: by translating, it is obvious that the set of lengths one can so measure is equally spaced, hence the smallest of them measures all the others.
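The claim can be checked by brute force: among all integer combinations of the two sticks, the smallest positive length that appears is exactly their greatest common divisor. A minimal sketch (the function name is mine, not Euclid's):

```python
from itertools import product
from math import gcd

def smallest_measurable(a, b, bound=50):
    """Smallest positive length m*a + n*b obtainable by laying off
    the two sticks (integer multiples, in either direction)."""
    combos = (m * a + n * b for m, n in product(range(-bound, bound + 1), repeat=2))
    return min(v for v in combos if v > 0)

# The largest common divisor is also the smallest measurable positive length:
assert smallest_measurable(12, 18) == gcd(12, 18) == 6
assert smallest_measurable(35, 21) == gcd(35, 21) == 7
```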

I was also unaware that Euclid's characterization of a tangent line to a circle was not just that it is perpendicular to the radius, but that it is the only line meeting the circle locally once, such that changing its angle ever so little produces a second intersection, i.e., Newton's definition of a tangent line. It is said Newton read Euclid just before giving his own definition.

I did not realize until reading Archimedes that the "Cavalieri principle" follows just from the definition of the Riemann integral, without needing the fundamental theorem of calculus. I.e., it follows just from the definition of a volume as a limit of approximating slices, and was known to Archimedes. Hence one can conclude all the usual volume formulas for pyramids, cones, spheres, even the bicylinder, just by starting from the decomposition of a cube into three right pyramids, applying Cavalieri to vary the angle of the pyramid, then approximating and using Cavalieri again. It is an embarrassment to me that I had thought the volume of a bicylinder a more difficult calculus problem than that for a sphere, when it follows immediately from comparing horizontal slices of a double square-based pyramid inscribed in a cube. I.e., by Cavalieri and the Pythagorean theorem, the volume of a sphere is the difference between the volumes of a cylinder and an inscribed double cone, and the same argument shows the volume of a bicylinder is the difference between the volumes of a cube and an inscribed double square-based pyramid. This led to an intuitive understanding of the simple relation between the volumes of certain inscribed figures, which I then noticed had been recently studied by Tom Apostol.
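The slice comparison can be written out explicitly (my notation, not Archimedes'): at height $h$ above the center, the slice of a sphere of radius $r$ is a disk of area $$\pi(r^2 - h^2) = \pi r^2 - \pi h^2,$$ the slice of a cylinder minus the slice of the inscribed double cone, so by Cavalieri $$V_{\text{sphere}} = \pi r^2 \cdot 2r - 2\cdot\tfrac{1}{3}\pi r^2\cdot r = \tfrac{4}{3}\pi r^3.$$ Replacing disks by squares, the slice of the bicylinder is a square of area $4(r^2-h^2)$, the slice of a cube minus the slice of the inscribed double square-based pyramid, so $$V_{\text{bicylinder}} = 8r^3 - 2\cdot\tfrac{1}{3}\cdot 4r^2\cdot r = \tfrac{16}{3}r^3.$$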

I realized this summer that this allows a computation of the volume of the 4-dimensional ball. I.e., this ball results from revolving half a 3-ball, hence its volume can be calculated by revolving a cylinder and subtracting the result of revolving a cone. Since Archimedes knew the center of gravity of both those solids, he knew this.
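A hedged reconstruction of that computation, assuming the Pappus-type rule $V_4 = 2\pi\,\bar{x}\,V_3$ for revolving a solid about a hyperplane, where $\bar{x}$ is the distance of its center of gravity from that hyperplane: slice by slice, the half 3-ball of radius $r$ is a cylinder (volume $\pi r^3$, centroid height $r/2$) minus a cone with apex on the plane (volume $\pi r^3/3$, centroid height $3r/4$), so $$V_4 = 2\pi\left(\frac{r}{2}\cdot \pi r^3 - \frac{3r}{4}\cdot\frac{\pi r^3}{3}\right) = \frac{\pi^2 r^4}{2},$$ which is indeed the volume of the 4-ball of radius $r$.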

Having read everywhere that Hurwitz' theorem was that the maximum number of automorphisms of a Riemann surface of genus g is 84(g-1), I had a difficult proof that the maximum number in genus 5 is 192, using Jacobians, Prym varieties, and classifications of representations of planar groups, until Macbeath referred me to Hurwitz' original paper, where a complete list of the possible orders was easily given: 84(g-1), 48(g-1), .... I subsequently explained this easy argument to some famous mathematical figures. Some time later, a more complicated such example, for which Macbeath himself was usually credited, was found also to occur in the 19th-century literature.

Having studied Riemann surfaces all my life, but unable to read German well, I thought I had acquired some grasp of the Riemann Roch theorem; in particular, I thought Riemann had given only an inequality $l(D) \geq \deg(D) + 1 - g$. When the translation from Kendrick Press became available, I learned he had written down a linear map whose kernel computes $l(D)$, with the estimate derived from the fundamental theorem of linear algebra. The full equality also follows, but only if one can compute the cokernel as well. That cokernel, of course, was already shown by him to be what we now call $H^1(D)$. Hence Riemann's original theorem was the so-called "index" version of RR. Since he expressed his map in terms of path integrals, it was natural to evaluate those integrals by residue calculus, as Roch did. This is explained in my answer to "why is Riemann Roch [not precisely] an index problem?" Although there are many fine modern expositions of Riemann Roch, the most insightful perhaps being that in the chapter on Riemann surfaces in Griffiths and Harris, I had not seen how simple it was until reading Riemann.

Perhaps this is only historical knowledge, but reading Riemann one sees that he also knew completely how to prove (index) Riemann Roch for algebraic plane curves, without appealing to the questionable Dirichlet principle, hence the usual impression that a rigorous proof had to await later arguments of Clebsch, Hilbert, or Brill and Noether, is incorrect.

Reading Wirtinger's 19th-century paper on theta functions, even though unfortunately for me it is only available in the original German, I learned that when a smooth Riemann surface acquires a singularity, the elementary holomorphic differential with a nonzero period around the vanishing cycle becomes meromorphic, and that period becomes the residue at the singular point. At last this explains clearly why one defines "dualizing differentials" as one does in algebraic geometry.

Once, as a grad student in Auslander's algebraic geometry class, I vowed to try out Abel's advice and read the master Zariski's paper on the concept of a simple point. I was very discouraged when several hours passed and I had managed only a few pages. Upon returning to class, Auslander began to pepper us with questions about regular local rings. I found out how much I had learned when I answered them all easily, until he literally told me to be quiet, since I obviously knew the subject cold. (To be honest, I did not know the very next question he posed, but I was off the hook.)

In my answer to a question about where to learn sheaf cohomology I have given an example of insight only contained in Serre's original paper.

The sense of wonder and awe one gets upon reading people like Riemann or Euler is also quite wonderful. Any student who has struggled to compute the sums of the even powers of the reciprocals of the natural numbers, $\sum 1/n^{2k}$, will be amazed at Euler's facile accomplishment of this for many values of $k$. Calculus students estimating $\pi$ by the usual series to 3 or 4 places will also be impressed by his scores of correct digits. On the other hand, anyone using a modern computer can detect an actual error in his expansion of $\pi$ (in the 214th place, perhaps; I forget exactly), but an error which was already noticed long ago.
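Euler's closed form for the first of these sums, $\sum_{n \ge 1} 1/n^2 = \pi^2/6$, is trivial to check numerically today; a minimal sketch:

```python
import math

# Partial sums of Euler's series 1/1^2 + 1/2^2 + ... approach pi^2/6;
# the tail beyond N is roughly 1/N, so N = 10^5 gives about 4 decimal places.
partial = sum(1 / n**2 for n in range(1, 100001))
assert abs(partial - math.pi**2 / 6) < 1e-4
```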

As you can see, these are elementary examples, hence from a fairly naive and uneducated person, myself, who has not at all plumbed the depths of many original papers. But these few forays have definitely convinced me there is a benefit that cannot be gained elsewhere, as these exposures can bring the understanding of ordinary mortals closer to that of more knowledgeable persons, at least in a narrow vein. So while it might be thought that only the strongest mathematicians can attempt these papers, my advice would be that reading such masters may be even more helpful to us average students.

As a remark on criterion 2 of the original question, I find it is not at all necessary to read all of a paper by a master to get some insight. One word in Euclid enlightened me, and before the translation came out, I had already gained most of my understanding of Riemann's argument for RR just from reading the headings of the paragraphs. I learned a proof of RR for plane curves from reading only the introduction to a paper of Fulton. A single sentence of Archimedes, that a sphere is a cone with vertex at the center and base equal to the surface, makes it clear that the volume is 1/3 the radius times the surface area. Moreover, this shows the same ratio holds for a bicylinder, whereas the area of a bicylinder is considered so difficult that we do not even ask it of calculus students. So one should not be discouraged by the difficulty of reading all of a master's paper, although of course it wouldn't hurt.

A remark on the definition of master, versus creator. There are cases where a later master re-examines an earlier work and adds to it, and in these cases it seems valuable to read both versions. In addition to the examples given above of Newton generalizing Euclid and Mumford using Hilbert, perhaps Mumford's demonstration of the power of Grothendieck's Riemann Roch theorem in calculating invariants of the moduli space of curves is relevant.

A related question occurs in many cases since the classical arguments of the "ancients" are preserved only in classical texts such as Van der Waerden in algebra, and newer books have found slicker methods to avoid them. E.g., the method of Lagrange resolvents is useful in Galois theory for proving that an extension of prime degree in characteristic zero is radical. There are faster, less precise methods of showing this, such as Artin/Dedekind's method of independence of characters, but the older method is useful when trying to use Galois theory to actually write down solution formulas for cubics and quartics. Thus today we often have an intermediate choice between reading modern expositions which reproduce the methods of the creators, or ones that avoid them, sometimes losing information. (This is discussed in the math 843-2 algebra notes on my web page, where, being a novice, I give all competing methods of proof.)

Other items, inspired by perusing some modern books, include the fact that Riemann proves "Lebesgue's criterion" for Riemann integrability on the page after he gives his definition of the integral, years before Lebesgue's birth. In the section on surface topology Riemann also uses the "Steinitz exchange" argument to prove invariance of the cardinality of a minimal set of homology generators, some 17 years before Steinitz' birth.
–
roy smith Jan 15 '11 at 15:32

I just read Euler's explanation of "Cardano's" cubic formula and it looked easy for the first time. $x = a+b$ is always a solution of the special cubic $x^3 = 3abx + (a^3+b^3)$. But every cubic $x^3 = fx + g$ has this form, since $f = 3ab$ and $g = a^3+b^3$ determine the sum and product of $a^3$ and $b^3$, hence determine $t = a^3$ by solving the quadratic $t^2 - gt + f^3/27 = 0$. Then taking any cube root of $a^3$ gives $a$, and then $b = f/(3a)$. This also shows why you always need complex numbers to get all three roots this way, even when they are real, since you need all three cube roots of $a^3$.
–
roy smith May 31 '11 at 23:02
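That recipe is easy to check mechanically. A minimal sketch in Python (the function name is mine), taking a real root of the quadratic in $t = a^3$ and assuming a nonnegative discriminant:

```python
import math

def cardano_real_root(f, g):
    """One real root of x^3 = f*x + g via t^2 - g*t + f^3/27 = 0, t = a^3.
    Sketch only: assumes the discriminant is nonnegative and a != 0."""
    t = (g + math.sqrt(g * g - 4 * f**3 / 27)) / 2
    a = math.copysign(abs(t) ** (1 / 3), t)  # real cube root of t
    b = f / (3 * a)
    return a + b

# x^3 = 6x + 9 gives t^2 - 9t + 8 = 0, so a^3 = 8, a = 2, b = 1, x = 3.
x = cardano_real_root(6, 9)
assert abs(x - 3) < 1e-9
```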

Dear Roy, I am pleased to see that you mention reading Zariski. Reading parts of his papers (including the one on simple points of varieties) was something I also did when trying to learn algebraic geometry. (Incidentally, his report on coherent sheaf cohomology from the 1950s remains one of the best summaries of the subject that I know of.) Best wishes, Matthew
–
Emerton Oct 1 '11 at 3:51

Dear Matthew, Thank you. One testimony to the timeless value of Zariski's work is that apparently you were not yet born when I got the same boost from that paper on simple points. I did not then understand well the sheaf theory report. Thank you for the tip to reconsider it!
–
roy smith Oct 2 '11 at 3:53

In algebraic number theory, the existence of a Frobenius element
at any prime $p$ in a Galois extension $K/{\mathbf Q}$ is crucial. That is, for any
prime ideal $\mathfrak p$ lying over $p$ in $K$ there is some
$\sigma \in {\rm Gal}(K/{\mathbf Q})$ that looks like the $p$-th power map mod $\mathfrak p$:
$$
\sigma(\alpha) \equiv \alpha^p \bmod \mathfrak p
$$
for all $\alpha$ in the integers of $K$. (This can be jazzed up to the relative case, but I'll keep the base field as $\mathbf Q$ for simplicity here.)

In any modern reference I have seen which shows the existence of $\sigma$, the decomposition field is first introduced in order to reduce to the case where the base field is the decomposition field. But if you look at the original proof by Frobenius (1896), it is different, using multivariable polynomials in an interesting way, with no decomposition field at all. The argument fits in one page; see http://www.math.uconn.edu/~kconrad/blurbs/gradnumthy/frobeniuspf.pdf, where I consider a fairly general setup using the method of Frobenius. This nice proof by Frobenius has been completely forgotten, even though it handles the general case. (Frobenius himself worked with base field ${\mathbf Q}$.)

What is the mathematical insight here? That you can prove this theorem without having to mention decomposition fields (which also makes it easier for students new to the subject to follow the proof). I found this essential when teaching a course on algebraic number theory since it meant I did not have to introduce decomposition fields in the lectures at all; they could safely be left to homework assignments, if I so chose. The proof is also a nice illustration of the usefulness of multivariable polynomials, especially considering that a lot of basic algebraic number theory only requires polynomials in one variable.

Another "old school" proof of this result is in Hilbert's Zahlbericht, which has been translated into English.
–
Noah Snyder Jun 15 '10 at 22:41

To add detail to Noah's comment, the proof in the English translation of the Zahlbericht is within the proof of Theorem 69 on pp. 82--83; the notation s_1,...,s_M used there refers to all the elements of the Galois group (M is the degree of the Galois extension of Q). The Zahlbericht's overarching influence is probably the reason nobody remembers the proof by Frobenius.
–
KConrad Jun 15 '10 at 23:13

"Read the masters" should not be taken as blanket advice, because some
masters are much easier to read, or more congenial to modern mathematicians,
than others. Some 19th century works that I have learned from are:

Dirichlet/Dedekind. Dirichlet's Lectures on Number Theory, edited and
supplemented by Dedekind, are very clear and inspiring. They cover everything
from the basics up to Dirichlet's own breakthroughs on class numbers and
primes in arithmetic progressions.

Dedekind's Theory of Algebraic Integers. Dedekind wrote this because
he was disappointed with the initial response to his theory of ideals. He goes
to great pains to motivate the theory from the problem of unique prime
factorization (using the now standard example of $\mathbb{Z}[\sqrt{-5}]$).

Poincare's papers on automorphic functions. Whether or not you want to
know about automorphic functions, these papers are a great introduction to
hyperbolic geometry, fuchsian groups, and Kleinian groups. Like Dedekind,
Poincare writes very clearly and simply.

Disclaimer. These are all books that I translated, so naturally I think they
are good. If you can read them in the original language they are probably
even better.

I should add that I came to these books after being disappointed with
certain 20th
century books, which seemed to me too terse, unmotivated, and abstract.
If you haven't had this experience, then you probably won't enjoy 19th century
books.

Could you provide a reference to one, maybe two specific papers of Poincaré you would consider introductory (w.r.t. automorphic forms)?
–
Konrad Voelkel Jun 16 '10 at 10:28

Konrad, the best two to start with are Poincare's first and second papers in Acta Mathematica, Volume I (1882), pp. 1-62 and pp. 193-294. In my translation, Poincare's Papers on Fuchsian Functions, they are entitled "Theory of Fuchsian Groups" (which contains the basic geometry and group theory) and "On Fuchsian Functions" (which introduces automorphic forms and functions).
–
John Stillwell Jun 16 '10 at 11:16

There is more than one reason to read "masters". One such reason is field-specific and can be phrased as "read the latest work right before a scientific revolution" (standard example is the large body of work by Cayley, Sylvester, Gordan, etc., in the pre-Hilbert classical invariant theory). Often such results are more powerful in very specific cases of interest.

Another practical reason to read "masters" is to avoid embarrassment. Lots of (mostly minor) results are not mentioned in later treatises, so a number of people rediscover these results, either because they are too lazy to read or because they simply assume that the "masters" couldn't possibly have been smart enough to figure out these results back then... When going through the references while writing this survey, I read all 80 pages of J. J. Sylvester, A constructive theory of partitions, arranged in three acts, an interact and an exodion, Amer. J. Math. 5 (1882), 251–330. As a result, I discovered that a number of recent results were already proved there, sometimes by leaders in the field (let me not name them here -- see the survey).

Riemann's original paper, Über die Anzahl der Primzahlen unter einer gegebenen Grösse (On the Number of Primes Less Than a Given Magnitude), 1859, is definitely a masterwork well worth reading. In just 8 or so pages, he shows how useful the zeta function is for questions about the primes, proves the functional equation and the explicit formula, and makes several deep and far-reaching conjectures (all since proven, except one infamous example).

This is the paper which (arguably) began the extremely fruitful method of applying complex analysis to number theoretic questions. It lacks details in some places, but it contains a lot of invaluable motivation and exposition.

It certainly helped me to understand why complex analysis is so useful, and how one might discover these connections for oneself.

I interpret "read the masters" as advice to learn a theory from those who created it. Often they are mathematicians of higher caliber than those who follow, and hence they offer unique insights missed by later expositions, either because those insights were never understood, or because they came to be considered common knowledge, the "new normal". Examples I have in mind are Thurston's "Notes", Wall's and Browder's books on surgery theory, and Gromov's "Hyperbolic groups".

I certainly have read a lot of classics, and have learned a lot (mostly about historical developments) even from Euclid's Elements. When it comes to research mathematics, I'd at least like to mention that Weil got the idea for the Weil conjectures from reading Gauss's articles on biquadratic residues.

As an example more in line with the question, let me add that, in a letter to Goldbach dated April 15, 1749, Euler mentions that he has found, after quite some effort, a parametrized solution of the equation $xyz(x+y+z) = a$. Elkies found a way of deriving Euler's solution, and also found a simpler one, using methods from modern algebraic geometry. How Euler found his solution is still open.

While I feel it's certainly worthwhile to read the masters (by which I mean the initial works in which their founders created entire fields of mathematics), my reasoning is somewhat different than most. Reading the masters is really more for conceptual depth than actual mathematical enlightenment.

There's a myth surrounding Abel's dictum that the unreadability of masters like Gauss is a measure of their nearly inhuman brilliance. This is a fallacy. The reason the masters are so difficult to read is that we are catching them with their pants down in the act of creation, i.e., they are groping towards the right notation and terminology but aren't quite there yet. For example, it's pretty clear that Riemann, in his doctoral lecture, was trying to explain the need for higher-dimensional spaces beyond the familiar three-dimensional one ("multiply extended quantities") which preserved all the familiar properties of the usual Euclidean spaces, i.e., Kleinian transformations and calculus in local neighborhoods. The problem was that without either linear algebra or the fundamentals of topology, it was next to impossible to express this idea clearly and precisely. He just ends up babbling on about what's needed. But all the same, Riemann recognized what was needed, even if expressing it correctly was beyond his means.

A more recent and readily available example will clarify this further: One of my favorite books is Hassler Whitney's Geometric Integration Theory. I have friends in differential geometry who tell me it's a dinosaur, that his proof of the de Rham theorem is incredibly coarse and tedious. Yes, it is — but it has the advantage of being a DIRECT proof from the construction of simplexes on the boundary of an embedded manifold. I love the book because although Whitney's ideas were old fashioned, they were incredibly powerful IDEAS that allow us to tackle the subject concretely and with an amazing amount of insight. THAT'S what we get from reading the masters — their insight and depth of understanding that allows us to see beyond the machinery into why things are defined as they are.

This post is entirely due to Andrew L. All I did was minor edits (mostly adding in spaces, a couple paragraph breaks).
–
Charles Staats Jun 16 '10 at 23:24

Thanks, Charles. I have proper spacing when I type it, but for some reason it doesn't carry over when posted. I have no idea what I'm doing wrong yet.
–
Andrew L Jun 17 '10 at 0:58

Spivak's "Comprehensive Introduction to Differential Geometry" has reproductions of a few of Riemann's papers, and I think (and he may even say) that this is the reason he included them. Obviously, the book contains all the technical details one needs to learn basic differential geometry, but Riemann's discussions can be intuitively enlightening, even though they may look technically archaic now. Though you do say the masters are more "difficult" to read -- I would argue this isn't always the case; sometimes they can be much easier, particularly when careful rigor can obscure what's going on.
–
jeremy Jun 17 '10 at 1:44

@jeremy I agree -- and I cite as good examples Banach's original treatise on functional analysis, Kuratowski's 2-volume treatise on point-set topology, and Carathéodory's and Hahn's treatises on real variables.
–
Andrew L Jun 17 '10 at 3:16

I am no big shot in reading mathematical papers, but there are a few examples where I think "Reading the masters" is really worthwhile:

I read the theory of irrationals from Hardy's Pure Mathematics and found mention of Dedekind's original and concise version, "Stetigkeit und irrationale Zahlen". I read the English translation of this paper and I must say it is better than anything we can find in modern books on real analysis, simply because it is easier to comprehend and appreciate. I find this approach to the irrationals the simplest, requiring the least mathematical machinery: you could teach it to a 15-year-old with just a working knowledge of the rationals.

Next I read a lot of proofs of Jacobi's triple product identity from Wikipedia and other sources, but none of them matches the simplicity of the one given in Jacobi's Fundamenta Nova. He just multiplies the factors on one side and gets the infinite series on the other, plain simple multiplication, like 2 × 3 = 6. In fact, his whole theory of elliptic functions as presented in the Fundamenta Nova is based on integral transformation theory and is much easier to introduce than the modern modular-form approach.
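The identity Jacobi obtains this way can be verified numerically, in the form $\prod_{m\ge 1}(1-q^{2m})(1+q^{2m-1}z)(1+q^{2m-1}z^{-1}) = \sum_{n\in\mathbb{Z}} q^{n^2} z^n$; a small sketch (the truncation levels are mine):

```python
def triple_product_lhs(q, z, terms=60):
    """Truncated product side of Jacobi's triple product identity."""
    p = 1.0
    for m in range(1, terms + 1):
        p *= (1 - q**(2 * m)) * (1 + q**(2 * m - 1) * z) * (1 + q**(2 * m - 1) / z)
    return p

def triple_product_rhs(q, z, terms=60):
    """Truncated series side: sum of q^(n^2) * z^n over n in [-terms, terms]."""
    return sum(q**(n * n) * z**n for n in range(-terms, terms + 1))

# For |q| < 1 and z != 0 the two sides agree:
assert abs(triple_product_lhs(0.3, 1.5) - triple_product_rhs(0.3, 1.5)) < 1e-10
```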

Another example is Lambert's proof of the irrationality of $\pi$, based on the continued fraction expansion of $\tan(x)$. The real gem is not the irrationality of $\pi$ but the beautiful formula for $\tan(x)$ (much more beautiful than the Taylor series for $\sin(x)$ and $\cos(x)$). Compared to his proof, Ivan Niven's proof of the irrationality of $\pi$ is quite short and simple to grasp, but highly non-obvious.
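Lambert's formula is $$\tan x = \cfrac{x}{1 - \cfrac{x^2}{3 - \cfrac{x^2}{5 - \cdots}}},$$ and its rapid convergence is easy to see numerically; a minimal sketch:

```python
import math

def tan_cf(x, depth=20):
    """Evaluate Lambert's continued fraction for tan(x),
    truncated at the given depth, from the bottom up."""
    frac = 0.0
    for k in range(depth, 0, -1):
        frac = x * x / (2 * k + 1 - frac)
    return x / (1 - frac)

# Even a shallow truncation matches tan(x) to machine precision:
assert abs(tan_cf(1.0) - math.tan(1.0)) < 1e-12
assert abs(tan_cf(0.5) - math.tan(0.5)) < 1e-12
```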

Best of all these examples is the theory of periods in the Disquisitiones Arithmeticae of Gauss. The entire proof of the constructibility of regular polygons based on the theory of periods is not to be found in modern texts, at least not in a form accessible to first-year undergraduates. This theory and its application to the construction of polygons is exciting and awe-inspiring, with no parallel in modern papers.

In my view, modern authors have made it a habit of working in terms so abstract that only a postgraduate student of mathematics can understand the papers. There are a few exceptions, certainly, and these are the ones I read online, but by and large, books and papers on mathematics are becoming increasingly inaccessible to anyone other than a mathematics researcher.

Once I needed a good exposition of Newton's polygon (for my lectures). It is hard to find in modern books. I asked a great expert in my department, Shreeram Abhyankar (an expert in both algebraic geometry and the classical literature): Where can one find a good exposition of Newton's polygon, with examples? His answer was: in the Mathematical Papers of Newton. Then he added: and in Chrystal's Algebra (1879), a British high school algebra textbook. I read both, was very satisfied, and used this in my lectures.

Although your rule #1 would bar the following, I don't think anybody would disagree with me if I call Grothendieck a master. I cannot say I have read the whole of EGA, but I did go through most of volume 1 and a big chunk of volume 2, and got a lot out of it. The clarity of exposition is superb. If only it had examples...

@Alberto: there is an example buried in there! Had you waded through the dense super-general blow-ups in the final section of EGA II (the one badly written section), you'd have encountered an example in Remark 8.3.9: group schemes as examples of representable functors, with emphasis on the example GL_1 (so really killing a fly with a sledgehammer).
–
Boyarsky Jun 16 '10 at 0:22

@Pencil: some parts of EGA (I, II, some parts of III) are much more readable than Hartshorne, in my opinion. Later on the level of generality makes it difficult to follow without losing sight of the applicability of the results to "everyday" situations.
–
Alberto García-Raboso Jun 16 '10 at 9:33

@Mariano: are you mixing up the crazy section 8 of EGA I (on "Chevalley schemes", the local ring business) with section 7.4 of EGA II (on algebraic curves, for which the only explicit curve is the projective line)? I think all explicit examples in EGA are only there as counterexamples to weakening hypotheses of results. For example see IV$_2$, 4.5.12(ii), 5.6.11; IV$_3$, 6.15.2 (actual equations in 3 unknowns), 14.1.5, 15.2.4; IV$_4$, 18.7.7. @Alberto: the trick to IV is reading backwards from interesting theorems.
–
BCnrd Jun 17 '10 at 4:59

In regard to Grothendieck's Tohoku, I still recall, after several decades, his advice that "il est prudent" (it is prudent) to assume that Hom(A,B) and Hom(X,Y) are disjoint unless A=X and B=Y!
–
roy smith Jan 17 '11 at 1:04

Whether or not Alfredo Capelli's papers about the Capelli identity fit the rubric of "old, classic texts by great mathematicians" is open to debate. What is true is that they offer a very clear and surprisingly modern perspective on the "first fundamental theorem of invariant theory for the general linear group" (the terminology is due to Hermann Weyl, who used Capelli's method in his book "Classical Groups", but in an oblique and essentially incomprehensible way). In particular, Capelli introduced the universal enveloping algebra of $\mathfrak{gl}_n$ and its center, and computed the action of the special central elements that he constructed on the polynomial algebra over matrices, deriving the Gordan–Capelli decomposition (or $(GL_n, GL_m)$-duality). Roger Howe, beginning in the late 1980s, had produced the only faithful modern account of Capelli's approach that I was aware of at the time I read Capelli's papers. Of course, since it was Roger who introduced me to this area, I read his 20th-century exposition first!

P.S. From the wording of the question, I get an impression that the rules have been rigged in order to confirm the favored hypothesis. I have several more worthy examples of "read the masters", but I feel as if I would need to argue the case more than I care to.

He did, absolutely, and on the top of that, he proved the Harish-Chandra isomorphism (also for $\mathfrak{gl}_n$), although almost certainly not Kostant's theorem that $U(\mathfrak{gl}_n)$ is free over its center. What really shocked me is that the work of Capelli on the universal enveloping algebra is not even mentioned in either Hawkins' or Borel's books on the history of Lie theory.
–
Victor ProtsakJul 9 '10 at 4:26

Boole, George (1854), An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities, Macmillan. Reprinted with corrections, Dover Publications, New York, NY, 1958.

I have read Gauss's Disquisitiones Arithmeticae (tr. A. A. Clarke, S.J.) and Euclid's Elements (tr. & commentary by Heath), but both well after engaging with the fields they covered. To an extent the subjects had moved on, and later insights had provided better definitions, postulates, generalisations, primitive concepts and theorems. But it did become clear in the reading what had motivated further development. The sheer ingenuity required by Euclid (reread over Christmas) to do arithmetic is a joy to behold, even though we would do it very differently now, and rather tedious on a second reading. Some of the foundational subtleties are also glossed over, or rather taken for granted, in more modern treatments. With both there is also a kind of pedagogical simplicity by which you arrive suddenly and without apparent effort at a significant result; I guess this is what constitutes true mastery.

Animadversion 1

One of the reasons to “Read the Masters!” is that you almost always learn how different their actual intellectual contexts, motivations, and reasoning were from what you tend to find in the reports of $2$nd, $3$rd, and $n$th hand sources.

In the case of Aristotle, one of the first shocks — that I still distinctly remember — was discovering that he was a far less binary, dichotomous, or dualistic thinker than all my previous readings and teachers had told me. This has a bearing that goes far beyond the purely historical interest to the substantive issue of how deductive reasoning proper relates to what was later described as "inductive" and "abductive" inference.

Animadversion 2

I call it “mathematics” when I see hints of form that inform and rule the appearances in view. The test of a “practically essential” source, ancient or modern, is much like the test of a chemical catalyst — it is not that we'd never get the desired product by any other reaction pathway, but that we'd be highly unlikely to get it anywhere near as easily in our lifetime. It is very often the forms that permeate our current airs of knowledge that we, like the proverbial fish in water, can hardly see for all their pervasion.

Animadversion 3

Another reason to study our mathematical organon in embryo is that it makes it easier to see the early integuments and initial embeddings of topics that grow detached and remote from each other as they develop. By way of example, here's a draft of an essay I started on the precursors of category theory.

What mathematical insights were gained from Aristotle not readily found in more modern sources (in accordance with rule #3 above)?
–
BoyarskyJun 15 '10 at 16:46

Re: Boyarsky –– I added my response above as it was too long to fit here.
–
Jon AwbreyJun 15 '10 at 17:22

6

I would classify this interest as philosophical (or maybe psychological or sociological), but not mathematical.
–
Alexander WooJun 15 '10 at 21:48

3

Re: Alexander Woo –– The request was, "You must have acquired important mathematical insights (not just historical insights) from the [source] that you feel that you would never have acquired had you restricted your reading to 20th-century and 21st-century literature". The word "feel" asks for a personal probability, and I gave my own best estimate. I have noticed that different folks use different charts for mapping the territories of logic, mathematics, philosophy, and science. Until we find an atlas for the manifold each of us is probably stuck with using his own coordinate system.
–
Jon AwbreyJun 15 '10 at 23:54

I added some remarks above on the test of mathematical substance and the "never would have got it any other way" criterion for an essential source.
–
Jon AwbreyJun 16 '10 at 13:30