Jeremy Avigad and Erich Reck, in their remarkable historical paper "Clarifying the nature of the infinite: the development of metamathematics and proof theory", claim that one of the factors in the rise of abstract mathematics in the late 19th century (as opposed to concrete mathematics or hard analysis) was the fact that, by using more abstract notions, one can avoid a great deal of calculation and still obtain the same results. Let me quote them:

"The gradual rise of the opposing viewpoint, with its emphasis on conceptual
reasoning and abstract characterization, is elegantly chronicled by Stein[110],
as part and parcel of what he refers to as the “second birth” of mathematics.
The following quote, from Dedekind, makes the diﬀerence of opinion very clear:

A theory based upon calculation would, as it seems to me, not oﬀer
the highest degree of perfection; it is preferable, as in the modern
theory of functions, to seek to draw the demonstrations no longer
from calculations, but directly from the characteristic fundamental
concepts, and to construct the theory in such a way that it will, on
the contrary, be in a position to predict the results of the calculation
(for example, the decomposable forms of a degree).

In other words, from the Cantor-Dedekind point of view, abstract conceptual
investigation is to be preferred over calculation."

So, my question is: do you know some concrete examples, from concrete fields, of avoiding masses of calculation by the use of abstract notions? (The term "calculation" here means any type of routine technicality.) I can't remember where I read it, but one can find some examples in category theory and topos theory (I'm not sure).

There's the puzzle of the bird darting back and forth between two oncoming trains, and asking how far the bird traveled up to the moment of impact. Unless clearer examples are given, I submit that alternative and simpler calculations may be examples of what you ask (rate*time vs summing an infinite series). Gerhard "Ask Me About System Design" Paseman, 2010.07.01
–
Gerhard Paseman, Jul 1 '10 at 18:59
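To spell out the contrast Gerhard mentions, with made-up numbers of my own: if the trains start 100 miles apart, each travels at 50 mph, and the bird flies at 75 mph, the simple route is
$$t = \frac{100}{50+50} = 1 \text{ hour}, \qquad d_{\text{bird}} = 75 \cdot 1 = 75 \text{ miles},$$
whereas the brute-force route sums the geometric series of the bird's individual legs, which of course converges to the same 75 miles.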


Galois wrote, before Dedekind: "Since the beginning of the [19th] century, computational procedures have become so complicated that any progress by those means has become impossible, without the elegance which modern mathematicians have brought to bear on their research, and by means of which the spirit comprehends quickly and in one step a great many computations. [...] Classify [operations] according to their complexities rather than their appearances! This, I believe, is the mission of future mathematicians. This is the road on which I am embarking in this work."
–
KConrad, Jul 1 '10 at 19:46

While I entirely agree that there are some excellent examples where abstract thinking is vastly superior to local, computational thinking, I also get the feeling that this has been pushed "too far". More precisely, abstraction has so taken over mainstream mathematics that "computational thinking" is becoming a rarer skill (and resurfaced in Computer Science instead?). I applaud the deeper understanding gained by abstract thinking, but deplore the lack of 'intuition' gained by a definite facility with computation. I wish the 'balance point' between these was less skewed.
–
Jacques Carette, Jul 1 '10 at 21:43


@Jacques: Well, I didn't ask for comparing these two approaches. I did ask about some nice sides of abstract approach.
–
Sergei Tropanets, Jul 1 '10 at 23:57

15 Answers

Hilbert's first work on invariant functions led him to his famous finiteness theorem, demonstrated in 1888. Twenty years earlier, Paul Gordan had proved the finiteness of generators for the invariants of binary forms using a complex computational approach. Attempts to generalize his method to forms in more than two variables failed because of the enormous difficulty of the calculations involved. Hilbert realized that it was necessary to take a completely different path. As a result, he proved his basis theorem, showing the existence of a finite set of generators for the invariants of quantics in any number of variables, but in an abstract form. That is, while demonstrating the existence of such a set, the proof was not constructive (it did not display "an object"); it was a pure existence proof, relying on the use of the law of excluded middle in an infinite extension.

It is strange that so many know the Hilbert vs Gordan story but so few know what is in the 1868 paper by Gordan. It is not a computational approach but a very deep analysis of the graph theoretical structure of the invariants of binary forms.
–
Abdelmalek Abdesselam, Jul 9 '10 at 3:40

My favorite theorem, the Atiyah-Singer index theorem, seems to have the desired property. The theorem states that the Fredholm index of the Dirac operator on a compact spin manifold $M$ is equal to the $\hat{A}$ genus. There are two essentially different types of proofs: a global, conceptual argument based on little or no calculation; and a detailed local proof involving working with explicit solutions to PDE's. There are many variations and elaborations on the two approaches; here is a basic overview.
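For reference, in formula form the statement reads
$$\operatorname{ind}(D) = \int_M \hat{A}(TM),$$
i.e. the analytic left-hand side equals the purely topological right-hand side.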

Global Proof:
One considers the notion of an index map $K(T^*M) \to \mathbb{Z}$ from the K-theory of the tangent bundle of $M$ to the integers, uniquely characterized by a few key axioms. One then constructs two maps which satisfy the axioms (and hence are equal): an analytic index map built using functional analysis and a topological index map built out of an embedding of $M$ into $\mathbb{R}^n$ and the Thom isomorphism in K-theory. The symbol of an elliptic operator $D$ gives rise to an element of $K(T^*M)$; its analytic index is simply the Fredholm index of $D$, while in the case where $D$ is the Dirac operator the topological index can be identified with the $\hat{A}$ genus (upon taking Chern characters).

Local Proof:
One first proves that the Fredholm index of $D$ is given by $Tr_s(e^{-t D^2})$, the supertrace of the solution operator for the heat equation for $D$. A standard iterative method for solving the heat equation yields an asymptotic expansion for the smoothing kernel $k_t$ of the heat operator, so since the Fredholm index is independent of $t$ one is led to try to calculate the constant term in the asymptotic expansion of $tr_s(k_t)$. The strategy (as simplified by Getzler) is to develop a symbol calculus for $D$ which rescales away everything but the constant term. One shows that the appropriate symbol satisfies a certain explicit differential equation (the "quantum mechanical harmonic oscillator") which one can explicitly solve (Mehler's formula). The $\hat{A}$ class manifests itself, as if by magic.

I think the Atiyah-Singer index theorem is a particularly great example of what you are referring to because it is very difficult to see why the explicit calculations accomplish the same thing as the global, conceptual arguments. At least, nobody has explained it to me to my satisfaction. For example, Bott periodicity plays an essential role in the construction of both the analytic and topological index maps, but if it makes an appearance in the local proof then it is heavily disguised.

Claim: There are two canonical bialgebra structures (the “additive” and “multiplicative” structures) on $k[x]$, and one of them (the additive one) in fact makes it a Hopf algebra.

Proof 1: (Calculation.) Write down the formulas; check the axioms! This isn't an especially long calculation, but it's a bit tedious; while seeing the formulas is nice, checking the axioms isn't (to my taste) especially enlightening.
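For concreteness, the formulas in question are the standard ones, determined on the generator $x$ and extended as algebra maps:
$$\Delta_+(x) = x \otimes 1 + 1 \otimes x, \quad \varepsilon_+(x) = 0, \quad S(x) = -x \qquad \text{(additive; Hopf)},$$
$$\Delta_\times(x) = x \otimes x, \quad \varepsilon_\times(x) = 1 \qquad \text{(multiplicative)}.$$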

Proof 2: (Abstract.) “Bialgebra” = “comonoid in ($k$-Alg,$\otimes$)”. We know $k[x]$ is the free $k$-algebra on one generator, so there's a natural isomorphism $\mathrm{Hom}(k[x],A) \cong A$, for any $k$-algebra $A$. So $\mathrm{Hom}(k[x],A)$ is naturally an algebra — so it has two natural monoid structures, + and $\cdot$, and under + it's moreover a group. By Yoneda, these must correspond to two comonoid structures on $k[x]$, and the one corresponding to + must be Hopf!

Now, what I really like about this proof is that it still connects closely to the computations. By the way that the Yoneda lemma works, you can read off what the two coalgebra structures actually are; but now you don't have to check the axioms, since you already know they hold! Also, you now know there'll be a “co-distributive law” connecting the two, which you might never have thought of just from the first approach… And also, this gives a way of looking for bialgebra structures on other algebras: look at what they classify/represent!

This shows up, I think, a lot of the power of abstract approaches. They put formulas and calculations into a bigger picture; they can help you do interesting calculations, while letting you skip tedious ones; and they can suggest calculations you might not have thought of doing otherwise. But (as you can probably guess from that) I love calculation too: I wouldn't want either without the other. If abstract nonsense is the garden, concrete computations are the flowers.

To my taste, this is actually the best example yet.
–
Jacques Carette, Jul 2 '10 at 12:25


If you do the "computational" proof nicely enough, it is in fact exactly the same as the other one: it would be silly, rather than "computational", to prove, say, coassociativity by actually checking that the required identity holds for all elements of $k[x]$.
–
Mariano Suárez-Alvarez♦, Jul 15 '10 at 16:20

One striking example that comes to mind is Nathan Jacobson's proof that rings satisfying the identity $X^m = X$ are commutative. This is model-theoretic and proceeds by a certain type of factorization which reduces the problem to the (subdirectly) irreducible factors of the variety. These turn out to be certain finite fields, which are commutative, as desired. By (Birkhoff) completeness there must also exist a purely equational proof (in the language of rings), but even for small $m$ this is notoriously difficult, e.g. $m = 3$ is often posed as a difficult exercise. It is only recently that such a general non-model-theoretic equational proof was discovered by John Lawrence (as Stan Burris informed me). I don't know if it has been published yet, but see their earlier work [1].

So here, by "higher-order" conceptual structural reasoning, one is able to escape the confines of first-order equational logic and give a more conceptual proof than the brute-force equational proofs: arguments so devoid of intuition that they can be discovered by an automatic theorem prover.
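For a feel of the contrast, the very easiest case $m=2$ (Boolean rings) still has a short equational proof; already for $m=3$ nothing this painless is available. From $x^2 = x$ for all $x$:
$$x+x=(x+x)^2=x^2+x^2+x^2+x^2=4x \ \Rightarrow\ 2x=0,$$
$$x+y=(x+y)^2=x^2+xy+yx+y^2=x+xy+yx+y \ \Rightarrow\ xy+yx=0,$$
hence $xy=-yx=yx$.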

Jacobson's Theorem says that the subset $S$ of $\mathbb Z[X]$ formed by the polynomials $X^n-X$, $n > 1$, has the following property. If, for every $a$ in any given ring $A$, there is an $f$ in $S$ such that $f(a)=0$, then $A$ is commutative. Is $S\cup-S$ maximal for this property?
–
Pierre-Yves Gaillard, Jul 2 '10 at 16:31

The first proof I ever saw of the orthogonality relations for characters of finite groups was computational: it did a lot of matrix computations and manipulations of sums, which I didn't like at all. There is a much more conceptual proof which begins by observing that Schur's lemma is equivalent to the claim that

$$\dim \text{Hom}(A, B) = \delta_{AB}$$

for irreducible representations $A, B$, where $\text{Hom}$ denotes the set of $G$-module homomorphisms. One then observes that $\textbf{Hom}(A, B) = A^{*} \otimes B$ is itself a $G$-module and $\text{Hom}$ is precisely the submodule consisting of the copies of the trivial representation. Finally, the projection from $\textbf{Hom}$ to $\text{Hom}$ can be written

$$v \mapsto \frac{1}{|G|} \sum_{g \in G} gv$$

and the trace of a projection is the dimension of its image.
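Spelling out that last step: the trace of $g$ acting on $\textbf{Hom}(A,B) = A^* \otimes B$ is $\overline{\chi_A(g)}\,\chi_B(g)$, so taking the trace of the projection yields the orthogonality relations in their familiar concrete form:
$$\frac{1}{|G|}\sum_{g\in G}\overline{\chi_A(g)}\,\chi_B(g) = \dim\text{Hom}(A,B) = \delta_{AB}.$$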

I particularly like this proof because the statement of the orthogonality relations is concrete and not abstract, but this proof shows exactly where the abstract content (Schur's lemma, Maschke's theorem) is made concrete (the trace computation). It also highlights the value of viewing the category of $G$-modules as an algebraic object in and of itself: a symmetric monoidal category with duals.

In addition, this interpretation of Schur's lemma suggests that $\text{Hom}(A, B)$ behaves like a categorification of the inner product in a Hilbert space, where the contravariant/covariant distinction between $A, B$ corresponds to the conjugate-linear/linear distinction between the first and second entries of an inner product. This leads to 2-Hilbert spaces and is a basic motivation for the term "adjoint" in category theory, as explained for example by John Baez here. It is also related to quantum mechanics, where one thinks of the inner product as describing the amplitude for a transition between two states to occur, and of $\text{Hom}(A, B)$ as describing the ways those transitions occur. John Baez explains related ideas here.

Arguments by mathematical induction seem to provide an entire class of examples of the phenomenon, where computation is replaced by a higher level of reasoning.

With induction, one uses a comparatively abstract understanding of how a property propagates from smaller instances to larger instances, in order to arrive at a fuller understanding of the property in particular cases, without need for explicit calculation. Thus, one can see that a particular finite graph or group or whatever kind of structure has a property, not by calculating it in that instance, but by an abstract inductive argument, on size or degree or rank or whatever. A complex graph-theoretic calculation is avoided by understanding what happens in general when a point is deleted.

And there are, of course, extremely concrete elementary instances. We all know, for example, how to use
induction to prove that $1+2+\cdots+n=n(n+1)/2$. Thus, the
comparatively abstract inductive argument predicts definite
values for concrete sums $1+2+\cdots+105$. Similarly, we often understand the iterates of a function $f^n(x)$ without calculating them, or the powers of a matrix $A^n$, or the successive derivatives of a function, all without calculation, by understanding the inductive relationship in effect at each step.
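For instance, taking $n=105$, the formula instantly gives
$$1+2+\cdots+105=\frac{105\cdot 106}{2}=5565,$$
with no need to carry out the hundred-odd additions.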

Surely mathematics is covered with dozens or hundreds of similar examples, of every degree of complexity and every level of abstraction.

Some of the prettiest examples of Dedekind's structuralism arise from revisiting proofs in elementary number theory from a highbrow viewpoint, e.g. by reformulating them after noticing hidden structure (ideals, modules, etc). A striking example is the generalization and unification of elementary irrationality proofs of $n$th roots by way of Dedekind's notion of conductor ideal. This gem seems to be little known (even to some number theorists, e.g. Estermann and Niven). Since I've already explained this at length elsewhere, I'll simply link to it [1].

At first glance the various "elementary" proofs seem to be magically pulled out of a hat, since the crucial structure of the conductor ideal is obfuscated by the descent "calculations" of various lemmas (which have all been inlined rather than abstracted out). However, once one abstracts out the hidden innate structure, the proof becomes a striking one-liner: simply remark that in a PID a conductor ideal is principal, hence cancelable, so PIDs are integrally closed. Here, the complexity of the calculations verifying the descent (induction) etc. is abstracted out and tidily encapsulated once and for all in the lemma that Euclidean domains are PIDs. Following Dedekind's ground-breaking insight, we recognize in many number-theoretical contexts the innate structure of an ideal, and we exploit that structure whenever possible. For much further detail and discussion see all of my posts in the thread [1] (click on the thread's title/subject at the top of the frame to see a threaded view in the Google Groups usenet web interface).
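To spell out that one-liner in the most familiar case (this is just my own unpacking; see the linked posts for the full story): suppose $w\in\mathbb{Q}$ is integral over $\mathbb{Z}$, and let $C=\{c\in\mathbb{Z} : c\,\mathbb{Z}[w]\subseteq\mathbb{Z}\}$ be the conductor. Then $C$ is a nonzero ideal of $\mathbb{Z}$ (if $w=a/b$ satisfies a monic equation of degree $n$, then $b^{n-1}\in C$), and $wC\subseteq C$ since $w\,\mathbb{Z}[w]\subseteq\mathbb{Z}[w]$. Writing $C=(c)$ with $c\neq 0$, we get $wc=mc$ for some $m\in\mathbb{Z}$, and cancelling $c$ gives $w=m\in\mathbb{Z}$. Applied to roots of $x^n-d$, this is the irrationality of $n$th roots of integers that are not perfect $n$th powers.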

When I teach such topics I emphasize that one should always look for "hidden ideals" and other obfuscated innate structure. Alas, too many students cannot resist the urge to dive in and "calculate" before pursuing conceptual investigations. It was such methodological principles that led Dedekind to discover almost all of the fundamental algebraic structures. Nowadays we often take for granted such structural abstractions and methodology. But it was certainly a nontrivial task to discover these in the rarefied mathematical atmosphere of Dedekind's day (and it remains so even nowadays for students first learning such topics). Emmy Noether wasn't joking when she said "it's all already in Dedekind". It deserves emphasis that this remark also remains true for methodological principles.

I don't think the proof of irrationality by descent is non-conceptual, since it is illustrating descent itself as a technique which is worthwhile in other settings, not all of which have conductors lying around.
–
KConrad, Jul 15 '10 at 23:09


But here I'm concerned with the teaching of number theory, not induction. As such, deliberately obfuscating key innate structure such as an ideal is poor pedagogy. Do you think it would be pedagogically wise to present only the Lindemann-Zermelo proof of unique factorization in a number theory course?
–
Bill Dubuque, Jul 16 '10 at 2:25

A beautiful classical example from Functional Analysis is the Hausdorff moment problem: characterize the sequences
$m:=(m_0,m_1,\dots)$ of real numbers that are moments of some positive, finite Borel measure on the unit interval $I:=[0,1]$:
$$m_k=\int_I x^kd\mu(x).$$
A necessary condition follows immediately from $\int_I x^j(1-x)^k \, d\mu(x)\geq 0$, and is expressed by saying that $m$ has to be a "completely monotone" sequence, that is,
$$(I-S)^k m\ge0,$$
where $S$ is the shift operator acting on sequences (in other words, the $k$-th discrete difference of $m$ has the sign of $(-1)^k$: $m$ is positive, decreasing, convex, ...). The nontrivial fact is that this condition is also sufficient, thus characterizing the sequences of moments. Moreover, the measure is then unique.

I'll quote two proofs, both very nice. The first is close to the original one by Hausdorff; the second is a consequence of Choquet's theorem.

Proof I, with computation (skipped). Bernstein polynomials give a sequence of linear positive operators strongly convergent to the identity
$$B_n:C^0(I)\to C^0(I).$$
Therefore the transpose operators $$B_n^*: C^0(I)^* \to C^0(I)^*$$ give a sequence of operators weakly convergent to the identity. If you write down what $B_n^*(\mu)$ is for a Radon measure $\mu\in C^0(I)^*$, you'll observe that it is a linear combination of Dirac measures located at the points $\{k/n\}_{0\leq k\leq n}$, with coefficients depending only on the moments of $\mu$. This gives a uniqueness result and a heuristic argument: if $m$ is the sequence of moments of some measure $\mu$, then $\mu$ can be reconstructed from its moments as a weak* limit of the discrete measures $\mu_n:=B_n^*(\mu)$. This observation leads to a constructive solution of the problem. Indeed, given a completely monotone sequence $m$, consider the corresponding sequence of measures $\mu_n$ suggested by the expression of $B_n^*(\mu)$ in terms of the $m_k$. Thanks to the assumption of complete monotonicity, these turn out to be positive measures, and with some more computation one shows that they converge weakly* to a measure $\mu$ whose moment sequence is $m$.
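To make "coefficients depending only on the moments" concrete (just unwinding the definition of the Bernstein operators, so consider it a sketch):
$$\mu_n = B_n^*(\mu) = \sum_{k=0}^{n}\binom{n}{k}\big[(I-S)^{\,n-k}m\big]_k\,\delta_{k/n},$$
since $\int_I \binom{n}{k}x^k(1-x)^{n-k}\,d\mu(x) = \binom{n}{k}\sum_{j=0}^{n-k}\binom{n-k}{j}(-1)^j m_{k+j} = \binom{n}{k}\big[(I-S)^{n-k}m\big]_k$. Complete monotonicity of $m$ is exactly what makes all these coefficients nonnegative.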

Proof II, no or little computation. Completely monotone sequences with $m_0=1$ form a closed, convex, hence weakly* compact and metrizable subset $M$ of $l^\infty$. A one-line, smart computation shows that the extremal points of $M$ are exactly the exponential sequences $m^{(t)}:=(1,t,t^2,\dots)$ for $0\leq t \leq 1$ (these turn out to be the moments of the Dirac measures at points of $I$, of course). By Choquet's theorem, for any given $m\in M$ there exists a probability measure on $\mathrm{ex}(M)$, which we identify with $I$, such that $m=\int_I m^{(t)} d\mu(t)$. But this exactly means $m_k=\int_I t^k \, d\mu(t)$ for all $k\in\mathbb{N}$.

A wonderful example is the proof of the Poincaré lemma I sketch here, as compared to the proof in e.g. Spivak's Calculus on Manifolds. The latter is extremely computational and, IIRC, not illuminating; it proves that the de Rham cohomology of a star-shaped domain vanishes. The former proof shows (the stronger result) that the de Rham complex $\Lambda_{DR}(M)$ is null-homotopic for $M$ contractible; while this does involve some computation, it is very simple and conceptual. The proof is about half a page long in total, and could probably be shortened. It was shown to me by Professor Dennis Gaitsgory; I haven't seen it elsewhere, though I'm sure it is in the literature.

You can skip to the end of the paper (page 26) to see the proof; much of it is aimed at an undergraduate audience that has not yet seen any homological algebra, or even more conceptual linear algebra.

Essentially, the proof works by 1) noting that the de Rham complex construction is functorial, via pullback of differential forms; 2) noting that a homotopy of maps $M\to N$ induces a chain homotopy between the induced maps of complexes; and 3) noting that for $M$ contractible, $\operatorname{id}_M$ is homotopic to a constant map, so on positive-degree cohomology the pullback along $\operatorname{id}_M$ is simultaneously the identity and zero, forcing the cohomology to vanish.
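Once (1)-(3) are in place, the "computation" collapses to
$$H^k_{\mathrm{dR}}(M)\cong H^k_{\mathrm{dR}}(\mathrm{pt}) = \begin{cases}\mathbb{R}, & k=0,\\ 0, & k>0,\end{cases}$$
for any contractible $M$.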

@Daniel: I looked briefly at your article and it seems very nice. I did notice that there are some missing left parentheses, especially on page 3. One of the merits of online journals is that it is very easy to correct typos like this -- you might want to contact the editors in this regard.
–
Pete L. Clark, Jul 2 '10 at 7:35

@Pete: Thanks! There are actually one or two substantive errors as well, as I wrote this in my first year doing "real" mathematics -- unfortunately, the journal seems to be defunct. I do wish I could rewrite one or two portions to reflect my current understanding of the subject.
–
Daniel Litt, Jul 2 '10 at 12:15

@Daniel: someone in or around Harvard must be maintaining the site. In my experience (I was a grad student there, a little before your time) the faculty and staff there are quite helpful, especially if you show a little proactivity. It might be worth sending an email to, e.g. Dennis Gaitsgory.
–
Pete L. Clark, Jul 2 '10 at 16:10


@Pete: That's a good call; I should rewrite the paper and post it somewhere else even if I can't get it replaced there. The argument for step (2) is quite beautiful and deserves better motivation than I give it. And Professor Gaitsgory is great -- I took all but one course he offered during my four years there. It's also worth reading this brief and hilarious note (thehcmr.org/issue1_1/gaitsgory.pdf) that he wrote for the same journal.
–
Daniel Litt, Jul 2 '10 at 17:42

@Daniel: the link to your note appears to be defunct now. Perhaps you could repost this article elsewhere?
–
Charles Rezk, Dec 10 '10 at 19:39

In computability theory, it is often necessary to prove some particular function is a "computable function". Until the 1960s, this was most commonly done by actually demonstrating a formal algorithm for the function in a kind of pseudocode, or giving a set of recursion equations. Needless to say this style of presentation was heavily symbolic and conveyed little intuition about why the function was defined the way it was.

The more modern style of presentation relies instead on having a good sense of the closure properties of computable functions, and identifying a large class of basic computable functions (the "primitive recursive functions"). So one can simply explain how to obtain the function at hand from primitive recursive functions using operations that preserve computability. This style of proof allows for much more detailed exposition of the intuition behind the definition of a computable function. Everyone in the field understands how, in principle, to take this kind of proof and obtain a formal algorithm, if it is ever necessary.
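As a toy illustration of this style in code (my own sketch, not anything from the literature): instead of exhibiting raw pseudocode for, say, exponentiation, one names a few basic computable functions and two closure operations, composition and primitive recursion, and then simply exhibits exponentiation as built from them; its computability follows from the closure properties alone.

```python
# A minimal sketch: "computable because it is built from basic functions
# by composition and primitive recursion" -- no explicit pseudocode needed.

def zero(*args):
    return 0

def succ(n):
    return n + 1

def proj(i):
    """The i-th projection function (0-indexed)."""
    return lambda *args: args[i]

def compose(f, *gs):
    """h(x...) = f(g1(x...), ..., gk(x...))."""
    return lambda *args: f(*(g(*args) for g in gs))

def prim_rec(base, step):
    """h(0, x...) = base(x...);  h(n+1, x...) = step(n, h(n, x...), x...)."""
    def h(n, *args):
        acc = base(*args)
        for k in range(n):
            acc = step(k, acc, *args)
        return acc
    return h

# Each line below is a definition by closure properties, not an algorithm:
add   = prim_rec(proj(0), compose(succ, proj(1)))                      # add(n, m) = m + n
mul   = prim_rec(zero, compose(add, proj(1), proj(2)))                 # mul(n, m) = m * n
power = prim_rec(compose(succ, zero), compose(mul, proj(1), proj(2)))  # power(n, m) = m ** n

assert add(3, 4) == 7 and mul(3, 4) == 12 and power(3, 2) == 8
```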

An even more succinct way to prove that a function is computable is to give an informal description of its computation and then appeal to Church's thesis. Maybe this is frowned upon today, but it was the style of some classic papers of the 1940s and 1950s (Post and Friedberg).
–
John Stillwell, Jul 2 '10 at 0:33


@John Stillwell: Yes, that is what I am referring to. But "informal" though it may be, it's just as rigorous as any other mathematical proof. The only reason it was ever considered "informal" is by comparison with previous work in which authors actually wrote down formal definitions of every function they defined.
–
Carl Mummert, Jul 2 '10 at 0:56


People have differing standards here. I once wrote a paper in which I needed to prove that a certain language was regular (en route to a result about group theory). I gave an informal description of the program accepting it which made it clear that it only needed a universally bounded amount of memory to run no matter what the input size. However, the referee insisted that I write out a detailed description of the state graph, etc. Actually writing out the details was a nightmare...
–
Andy Putman, Jul 2 '10 at 2:16


John, showing that a function is computable via appeal to the Church-Turing thesis is absolutely essential to current research in computability theory. No-one would dream of giving fully formalised constructions in their papers; it would take thousands of hours, run to ridiculous numbers of pages and be utterly unreadable. It's definitely not "frowned upon today" by people working in the area.
–
Phil Ellison, Jul 3 '10 at 21:49

An example of a slightly different kind -- not eliminating all calculation, but
showing that "all calculations are easy" -- is Dehn's algorithm in
combinatorial group theory. Dehn showed, using the combinatorics of
hyperbolic tessellations, that the word problem for surface groups is solvable
using only obvious word reductions.

In this case, calculation is avoided not so much by abstraction, but by
thinking geometrically rather than algebraically.
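For a sense of how simple the resulting procedure is, here is a rough sketch in code (entirely my own illustration, with the genus-2 surface relator hard-coded; none of this is taken from Dehn): after free reduction, whenever more than half of some cyclic conjugate of the relator or its inverse appears as a subword, replace it by the inverse of the complementary shorter piece; the word represents the identity element iff this process ends at the empty word.

```python
# Dehn's algorithm sketch for the genus-2 surface group
# <a,b,c,d | [a,b][c,d]>.  Uppercase letters denote inverse generators.

RELATOR = "abABcdCD"  # [a,b][c,d] = a b a^-1 b^-1 c d c^-1 d^-1

def inv(w):
    """Inverse of a word: reverse it and swap the case of each letter."""
    return w[::-1].swapcase()

def free_reduce(w):
    """Cancel adjacent inverse pairs like aA or Bb until none remain."""
    out = []
    for ch in w:
        if out and out[-1] == ch.swapcase():
            out.pop()
        else:
            out.append(ch)
    return "".join(out)

def dehn_moves():
    """All replacements p -> inv(q) where p q is a cyclic conjugate
    of the relator or its inverse and |p| > |q|."""
    moves = {}
    for r in (RELATOR, inv(RELATOR)):
        n = len(r)
        for i in range(n):                            # cyclic rotations
            rot = r[i:] + r[:i]
            for cut in range(n // 2 + 1, n + 1):      # keep only |p| > n/2
                p, q = rot[:cut], rot[cut:]
                moves[p] = inv(q)
    return moves

MOVES = dehn_moves()

def is_identity(word):
    """Dehn's algorithm: the word is trivial iff it reduces to ''."""
    w = free_reduce(word)
    changed = True
    while changed:
        changed = False
        for p, rep in MOVES.items():
            if p in w:
                w = free_reduce(w.replace(p, rep, 1))
                changed = True
                break
    return w == ""

# The relator itself is trivial; a single generator is not.
assert is_identity(RELATOR)
assert not is_identity("a")
```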

What I find amazing about this is that the "first calculation, then conception" thing happened in reverse. Namely, Dehn gave a beautiful geometric argument, and then combinatorial group theorists spent the next 50 years replacing the beautiful geometry with messy algebra. It took Gromov to fix things...
–
Andy Putman, Jul 2 '10 at 2:11


Andy: Although there is some truth to that, and some people may even believe it, surely it is just another legend made up in order to simplify the messy history: for example, what do you make of the work of Coxeter and Tits?
–
Victor Protsak, Jul 2 '10 at 2:33


Observe the modifier "combinatorial" before group theory. Certainly I am not claiming that there was no interaction between group theory and geometry in this period! I am referring more modestly to the school of group theory exemplified by the classic books of Magnus-Karrass-Solitar and Lyndon-Schupp. Much of this was directly inspired by Dehn -- for instance, small cancellation theory was in part an attempt to find algebraic analogues for Dehn's geometric arguments. Indeed, Magnus himself was a student of Dehn. Coxeter and Tits belonged to a rather different tradition.
–
Andy Putman, Jul 2 '10 at 3:00

Well, I meant it rather literally, that they used the same method (going back to Poincare) to solve an analogous problem in Coxeter groups. If you want to say that specifically MKS (1966) and LS (1977) were computational and influential, it's hard to disagree on both counts, but then your comment loses much of its force, especially since Gromov also wasn't part of the same tradition...
–
Victor Protsak, Jul 2 '10 at 19:57


It's true that some of the small cancellation conditions allow flats. However, Dehn's solution doesn't! In fact, it is a theorem that a group is Gromov hyperbolic if and only if it has a presentation such that the "Dehn algorithm" (namely, if you see more than half a relation, then replace it with the other half) solves the word problem.
–
Andy Putman, Jul 7 '10 at 19:38

When I was a student, I once watched a professor (a famous and brilliant mathematician) spend a whole class period proving that the functor $M\otimes-$ is right exact. (This was in the context of modules over a commutative ring.) He was working from the generators-and-relations definition of the tensor product. With what I'd consider the "right" definition of $M\otimes-$, as the left adjoint of a Hom functor, the proof becomes trivial: Left adjoints preserve colimits, in particular coequalizers. Since the functors in question are additive, $M\otimes-$ also preserves 0 and therefore preserves cokernels. And that's what right-exactness means.
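For the record, the adjunction being invoked is the tensor-hom adjunction: for modules over a commutative ring $R$ there is a natural isomorphism
$$\operatorname{Hom}_R(M\otimes_R N, P) \cong \operatorname{Hom}_R(N, \operatorname{Hom}_R(M,P)),$$
so $M\otimes_R -$ is a left adjoint and hence preserves all colimits, in particular cokernels.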

This isn't quite fair: to give that proof, you have to define a fair amount of categorical terminology and prove several categorical lemmas. I don't think it really is any easier...
–
Andy Putman, Jul 16 '10 at 3:40


I don't see anyone objecting to the Yoneda lemma on the same grounds.
–
Victor Protsak, Jul 17 '10 at 1:28


Doesn't this also depend on how exactness was defined? In the sense that in class it might have been defined in terms of certain things being surjections (rather than epimorphisms or cokernels)? I bring this up because people in Banach spaces/algebras tend to talk about exact sequences when they/we mean "exact after applying the forgetful functor to vector spaces", and this is not the same as the categorical ker=coker formulation...
–
Yemon Choi, Jul 18 '10 at 7:03


I'm pretty sure that, by this point in the course, we knew that surjections of modules are the same thing, up to isomorphism, as quotients (i.e., cokernels). But, considering that this happened more than 40 years ago, I can't absolutely guarantee that we were taught basic things like that before we ever saw a tensor product.
–
Andreas Blass, Jul 18 '10 at 21:45

Many general statements in algebraic geometry can be proved either by direct tedious verification or by abstract thought. In fact, the notions of abstract algebraic variety and scheme were created precisely for this purpose. I will illustrate this with the example of showing that an elliptic curve is a group.

Method 1: Define an elliptic curve over a field as a curve in Weierstrass form with nonzero discriminant. On it, define the addition and inversion laws using the chord-and-tangent process, obtaining explicit algebraic expressions. To show that the elliptic curve is a group, you have to show that the addition is associative, which requires a very tedious verification of identities.
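To get a feel for what Method 1 involves, here is a small throwaway script of my own (not anything standard) that implements the chord-and-tangent formulas over a small prime field and brute-force checks associativity and inverses on every point; the genuinely painful part, a symbolic verification over an arbitrary field, is exactly what it avoids.

```python
# Chord-and-tangent addition on y^2 = x^3 + a*x + b over F_p,
# with a brute-force check of the group laws on all points.
p, a, b = 13, 2, 3                         # small toy example
assert (4 * a**3 + 27 * b**2) % p != 0     # the curve is nonsingular

O = None                                   # point at infinity (identity)

def neg(P):
    """Inverse law: reflect across the x-axis."""
    if P is O:
        return O
    x, y = P
    return (x, (-y) % p)

def add(P, Q):
    """Chord-and-tangent addition law."""
    if P is O:
        return Q
    if Q is O:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                          # vertical line
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

points = [O] + [(x, y) for x in range(p) for y in range(p)
                if (y * y - x**3 - a * x - b) % p == 0]

for P in points:
    assert add(P, neg(P)) == O
    for Q in points:
        for R in points:
            assert add(add(P, Q), R) == add(P, add(Q, R))
print("group laws verified on", len(points), "points")
```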

Method 2: Another way is to use elliptic functions to prove the identity in the complex case. Since the algebraic group law holds over the complex numbers, it is satisfied by an infinite number of algebraically independent solutions, and therefore the group law must hold universally, over any field whatsoever. Of course, this needs to be made precise with the Lefschetz principle.

Method 3 (my favorite): Later algebraic geometry developed, and it became possible to prove such statements without relying on the Lefschetz principle. For instance, the group law on an elliptic curve is a consequence of the Riemann-Roch theorem, which was proved in ever greater generality by Weil, Hirzebruch and Grothendieck. This might be seen as a sledgehammer by some; in any case it is a remarkable sledgehammer.

More generally, the algebraic theory of abelian varieties (using line bundles and Riemann-Roch, as laid down in Mumford's book) is a conceptual reworking of the theory of theta series over the complex numbers, which had a more computational taste.
–
Simon Pepin Lehalleur, Aug 6 '10 at 6:19

I think that Gauss's theorem on constructible polygons fits this category.

For more than 2000 years, explicit constructions had led to only four classes: $2^n$, $2^n\cdot 3$, $2^n \cdot 5$, $2^n \cdot 15$.

Gauss's abstract approach solved the problem. The interesting case $n=17$ becomes easy to understand, and easy to construct, once one understands the abstract approach, but is hard to attack otherwise. $n=257$ and especially $n=65537$, and the cases derived from these, are perfect examples of an easy abstract proof versus extremely complicated calculations.
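For reference, the criterion in its modern form (the sufficiency is Gauss's; the necessity was completed by Wantzel): the regular $n$-gon is constructible with straightedge and compass iff
$$n = 2^k p_1 p_2\cdots p_r$$
for distinct Fermat primes $p_i$, i.e. primes of the form $2^{2^m}+1$. The only Fermat primes known are $3, 5, 17, 257, 65537$, which is why the classical list above stops where it does.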

All I see here are calculations. It only changes the nature of the objects you calculate with and their relation to the final goal. For this reason I still cannot make clear sense of the question.