For example, I find the first group isomorphism theorem to be vastly more opaque when presented in terms of commutative diagrams and I've had similar experiences with other elementary results being expressed in terms of exact sequences. What are the benefits that I am not seeing?

I downvoted this question because it answers itself as soon as you work with diagrams for a while... the answers below illustrate this.
– Martin Brandenburg, Apr 18 '10 at 22:06


I think that downvoting this is harsh - this is like someone answering a question with "just go and work some more at it". It seems a perfectly valid question for someone who has not had much experience with such techniques and is interested to hear from people who have both learned this stuff and taught it.
– Yemon Choi, Apr 19 '10 at 3:40


I should add that while I am now fairly accustomed to messing around with exact sequences and using them to chase things round diagrams, this was not the case 10 years ago when I was an undergraduate. So I have some sympathy for both sides that are being represented here. Several of my colleagues, who as functional analysts aren't opposed to algebraic techniques, still feel happier with a non-diagrammatic perspective: while I have different tastes from them, I would not dismiss their view by saying this "answers itself as soon as you work some time with diagrams."
– Yemon Choi, Apr 19 '10 at 3:45

7 Answers

Holy cow, go beyond the first homomorphism theorem! For example, if you have a long exact sequence of vector spaces and linear maps
$$
0 \rightarrow V_1 \rightarrow V_2 \rightarrow \cdots \rightarrow V_n \rightarrow 0
$$
then exactness implies that the alternating sum of the dimensions is 0.
This generalizes the "rank-nullity theorem" that $\dim(V/W) = \dim V - \dim W$, which is the special case of $0 \rightarrow W \rightarrow V \rightarrow V/W \rightarrow 0$.
Replace vector spaces and linear maps by finite abelian groups and group homomorphisms and instead you find the alternating product of the sizes of the groups has to be 1.
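The alternating-sum fact is easy to see in action numerically. Below is a minimal sketch (not from the answer itself) using the standard exact sequence attached to an arbitrary linear map $A : \mathbf{R}^n \to \mathbf{R}^m$, namely $0 \to \ker A \to \mathbf{R}^n \to \mathbf{R}^m \to \operatorname{coker} A \to 0$; the alternating sum of the four dimensions vanishing is exactly rank-nullity:

```python
import numpy as np

# For a linear map A: R^n -> R^m there is an exact sequence
#   0 -> ker A -> R^n -> R^m -> coker A -> 0,
# so exactness forces  dim ker A - n + m - dim coker A = 0,
# which is rank-nullity in disguise (dim coker A = m - rank A).
rng = np.random.default_rng(0)
n, m = 7, 5
A = rng.integers(-3, 4, size=(m, n)).astype(float)  # a random integer matrix

r = np.linalg.matrix_rank(A)
dim_ker = n - r        # nullity of A
dim_coker = m - r      # codimension of the image of A

alternating_sum = dim_ker - n + m - dim_coker
print(alternating_sum)  # 0
```

The same bookkeeping with orders instead of dimensions gives the "alternating product is 1" statement for finite abelian groups.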

The purpose of this general machinery is not the small cases like the first homomorphism theorem. Exact sequences and commutative diagrams are the only way to think about or formulate large chunks of modern mathematics. For instance, you need commutative diagrams to make sense of universal mapping properties (which is the way many concepts are defined or at least most clearly understood) and to understand the opening scene in the movie "It's My Turn".

Here is a nice exercise. When $a$ and $b$ are relatively prime,
$\varphi(ab) = \varphi(a)\varphi(b)$, where $\varphi(n)$ is Euler's $\varphi$-function from number theory. Question: Is there a formula for $\varphi(ab)$ in terms of $\varphi(a)$ and $\varphi(b)$ when $(a,b) > 1$? Yes:
$$
\varphi(ab) = \varphi(a)\varphi(b)\frac{(a,b)}{\varphi((a,b))}.
$$
You could prove that by the formula for $\varphi(n)$ in terms of prime factorizations, but it wouldn't really explain what is going on because it doesn't provide any meaning to the formula. That's kind of like the proofs by induction which don't really give any insight into what is going on. But it turns out there is a nice four-term exact sequence of abelian groups (involving unit groups mod $a$, mod $b$, and mod $ab$) such that, when you apply the above "alternating product is 1" result, the general $\varphi$-formula above falls right out. Searching for an explanation of that formula in terms of exact sequences forces you to try to really figure out conceptually what is going on in the formula.
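The general $\varphi$-formula can be checked by brute force. Here is a short sketch (the helper `phi` is a name introduced here; the formula is rearranged as $\varphi(ab)\,\varphi((a,b)) = \varphi(a)\,\varphi(b)\,(a,b)$ to stay in integers):

```python
from math import gcd

def phi(n):
    """Euler's phi-function by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Check phi(ab) * phi((a,b)) == phi(a) * phi(b) * (a,b) for small a, b.
for a in range(1, 30):
    for b in range(1, 30):
        d = gcd(a, b)
        assert phi(a * b) * phi(d) == phi(a) * phi(b) * d
print("formula checked")
```

When $(a,b) = 1$ this reduces to the usual multiplicativity $\varphi(ab) = \varphi(a)\varphi(b)$.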

"You could prove that by the formula for phi(n) in terms of prime factorizations, but it wouldn't really explain what is going on because it doesn't provide any meaning to the formula." I disagree with this. I think the combinatorial proof of this formula makes it very clear, whereas more abstract formulas of this type should be proven with diagrams.
– Martin Brandenburg, Apr 18 '10 at 22:10


Martin, what argument do you have in mind? A combinatorial argument sounds like something other than a proof by algebra using the formula with prime factorizations.
– KConrad, Apr 18 '10 at 22:40

@KConrad: as for the "nice 4-term" sequence which you mention, is it $$0\rightarrow \ker\rightarrow U(\mathbf{Z}/ab\mathbf{Z})\rightarrow U(\mathbf{Z}/a\mathbf{Z})\times U(\mathbf{Z}/b\mathbf{Z})\rightarrow \operatorname{Coker}\rightarrow 0$$? +1 for mentioning this very nice proof
– Karl, Jun 8 '14 at 22:19

Yes, but you have to make explicit what the kernel and cokernel really are (the middle map is indeed the standard one). Also, since these are multiplicative groups throughout, it looks better to use 1 instead of 0 on both ends of the exact sequence.
– KConrad, Jun 9 '14 at 0:40

@KConrad I guess psychologically the kernel has to be $\mathbf{Z}/(a,b)\mathbf{Z}$ and the cokernel $U(\mathbf{Z}/(a,b)\mathbf{Z})$? I agree that I should have written $1$, somehow I am used to modules, hence additive notation.
– Karl, Jun 9 '14 at 11:10
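The guessed sizes of the kernel and cokernel can be confirmed by brute force. A sketch (the helpers `phi` and `kernel_size` are names introduced here for illustration; the kernel of $U(\mathbf{Z}/ab) \to U(\mathbf{Z}/a) \times U(\mathbf{Z}/b)$ is computed directly, and the cokernel size is deduced by counting):

```python
from math import gcd

def phi(n):
    """Euler's phi-function by direct count."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def kernel_size(a, b):
    """Count units u mod ab that reduce to 1 both mod a and mod b."""
    ab = a * b
    return sum(1 for u in range(1, ab + 1)
               if gcd(u, ab) == 1 and u % a == 1 % a and u % b == 1 % b)

for a in range(1, 15):
    for b in range(1, 15):
        d = gcd(a, b)
        # kernel has size (a,b), consistent with ker = Z/(a,b)
        assert kernel_size(a, b) == d
        # |coker| = |target| / |image| should equal phi((a,b))
        image = phi(a * b) // kernel_size(a, b)
        assert phi(a) * phi(b) // image == phi(d)
print("kernel and cokernel sizes verified")
```

With these sizes in hand, the "alternating product is 1" result applied to the four-term exact sequence gives $(a,b)^{-1}\varphi(ab)\,\varphi(a)^{-1}\varphi(b)^{-1}\,\varphi((a,b)) = 1$, i.e. the general $\varphi$-formula.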

If you are asking why very elementary results like the first isomorphism theorem are phrased
in the language of exact sequences/commutative diagrams (rather than why this language is used at all), then there are (at least) two answers: (1) for those who are used to using this language, they frequently think about even those elementary results in terms of it, and so it is natural to write them in that language; (2) we want to train students to learn this language, and have to start somewhere, so we begin by taking elementary results that can be understood in another way, such as the first isomorphism theorem, and then rewrite them in this language for pedagogical purposes.

If you are asking why people use this language at all (which is to say, why are there many people to whom (1) above applies, and why do we want to engage in the educational practice labelled (2) above), then Keith Conrad gives a pretty good answer.

At a slightly broader level of generality, one might cite the old saying "a picture is worth a thousand words", and note that a well-chosen diagram or exact sequence can convey a lot of mathematical information in a succinct and intuitive way (the intuition coming once you have some familiarity with this way of thinking). We have a lot of mathematics to remember, and are always looking for ways to compress our descriptions of things without losing information or becoming unclear. Well-chosen definitions and terminology are one way this is achieved; well-drawn diagrams and exact sequences are another.

Finally, one could note that contemplating diagrams appeals (however slightly) to geometric modes of reasoning. Typically, any method which allows one to import some kind of geometric reasoning into algebra is welcome, since it brings less typically algebraic ways of thinking to bear on algebraic problems.

Diagrams also introduce a whole host of other issues with the proof (like assuming that the diagram is commutative). Serre discussed this in his hilarious lecture on how to write mathematics poorly.
– Harry Gindi, Apr 18 '10 at 4:53


+1. I think this is a very nice answer which goes well beyond a knee-jerk "how can you do without these things?" response.
– Yemon Choi, Apr 19 '10 at 3:47


This answer is for me the most compelling. I think the bigger question is "what accounts for the power of 'arrow-theoretic' notation?", and it is reasonable here to suppose that the brain is more efficiently engaged when the eyes have room to roam over graphs in two dimensions. Expressing the content of a non-trivial commutative diagram in traditional one-dimensional equations can be done, but it usually makes matters much more opaque (and just try to do it for 2-categories -- can be done, but...).
– Todd Trimble♦, Jun 12 '11 at 12:54

Putting a proof in terms of commutative diagrams allows you to repeat the same proof in different categories. Maybe you proved something about covering spaces; reverse the arrows and you can get something about field extensions.

Commutative diagrams allow you to separate the part of the proof that is purely category-theoretic from the part that concerns the particular objects and morphisms in the category in question.

The main advantage that I see comes from generality: the results you're referring to hold for far more types of objects than just abelian groups or modules (for example, chain complexes or sheaves), and the definitions that refer to elements don't make any sense in those contexts. For example, there's a "first isomorphism theorem" for sheaves, but it's more difficult to express it in terms of elements because the definitions of "surjective" and "image" in that category are a little funny.

That said, I had a very similar experience when I first started interacting with this sort of language, and a lot of the time whenever I see a statement that involves a diagram, I pretend that everything in the diagram is a module and think about what it means for elements. Eventually you pick up heuristics for all the concepts involved, and you can learn to switch back and forth between the two descriptions. For example, a short exact sequence $0\to A\to B\to C\to 0$ of abelian groups expresses the fact that $A$ is a subgroup of $B$ and $B/A\cong C$.

My answer is not so different from Emerton's, but it's mainly an answer to the poster rather than just the question.

I think it's often because of taste and experience that people use commutative diagrams. In my own research I often check that diagrams commute, or that sequences are exact, by computing with the actual underlying formulas; however, I find it easier to state (and remember) things with the diagrams. In terms of intuition, diagrams give a different point of view and so may lead to different ideas and understandings of things (which is a good thing). If you have a bigger commutative diagram (say, a cube), saying that the diagram commutes is easier to understand and phrase than rewording it in terms of equations would be. A very simple diagram is $A\rightarrow B$: do you feel this is a useful diagram, or would you prefer to have maps described only by equations? That might be hard if you were dealing with abelian categories.

In terms of the example of tensor products, I too can understand the actual object better as a more concrete thing, but when you want to construct maps out of it, it is sometimes easier to use its universal property (i.e., that bilinear maps out of the product factor uniquely through it). Once I was sitting in on a course on representation theory by a colleague. He was checking that some map existed (and was unique) by checking element by element, but actually all he really had to do was check that some other map was bilinear. In this case it would have been much easier to use the universal property of the tensor product. On the other hand, I do agree that as an object itself the tensor product is easier to understand as linear combinations modulo some equivalence relation.

Any time you try to give a map out of a quotient-type object by defining the map on equivalence classes, you also have to check that your map is well defined. Checking that maps out of tensor products are well defined without using the universal mapping property is TERRIBLE for all but the simplest cases.
– Steven Gubkin, Apr 19 '10 at 13:31

There are at least two advantages of the approach via universal morphisms. One is that it relates to the idea that one studies a particular object by its relation to all other objects.

The other is that it allows for generalisations: an example is that from tensor products of abelian groups to the nonabelian tensor product of groups which act on each other.

In fact this suggests another advantage: replace a complicated map by a simpler one, in this case a morphism. In particular, a morphism of abelian groups has a kernel, whereas a biadditive map has no such notion.

However the isomorphism theorems in group theory arose from the need to systematise facts on particular examples, and these are the, or a, main delight of basic group theory. I confess I did not get round to doing calculations in group theory till I was 50 - I had previously missed out on this.

I was asking both why elementary theorems are phrased in this language and why this language is used at all. The question was prompted by my nervousness whenever this language appears, and by the following excerpt from a review of J. S. Milne's Etale Cohomology by Spencer Bloch:

... in leafing through the collected works of Weil (who in some sense started it all) I am unable to find a single exact sequence or commutative diagram. The reader is invited to compare this with the work under review (or indeed with any of the published work of either author or reviewer). It would be interesting to see more clearly the shift in mathematical philosophy which must underlie this shift in notation.

We can consider the shift a sign of positive progress. Other signs of positive progress include the absence of creepy anatomical remarks in recent issues of the Bulletin of the AMS (cf. that review you quoted).
– S. Carnahan♦, Apr 18 '10 at 16:33

Can you point to some examples where you are seeing this point of view introduced without an apparent benefit? I would say that seeing homomorphisms explained from the viewpoint of commutative diagrams is nice, even though it's very elementary. It provides a nice visual image of the purely algebraic equation $f(xy) = f(x)f(y)$, which doesn't really seem so vivid in the equation itself.
– KConrad, Apr 18 '10 at 18:58


The absolute value sign was not invented until the 1840s by Weierstrass (cf. Wikipedia page on absolute value). I find that late date rather incredible, although certainly you can get by without it for discussing analysis on the real line. Is the fact that earlier mathematicians didn't have the absolute value notation a sign that we shouldn't use it even in simple situations?
– KConrad, Apr 18 '10 at 19:01


@KConrad: I think in terms of symbols rather than visually so that may be another reason for my difficulties. Another example for me is the tensor product: taking linear combinations and then defining an equivalence relation I can understand; but every bilinear map can be factored uniquely is much less clear to me. I by no means meant to criticize the language, I was trying to gain some understanding of what it was doing. From the answers, I gather that I haven't seen sufficiently advanced mathematics to appreciate this view.
– teil, Apr 18 '10 at 23:41


It takes effort, but you need to aspire to understand the tensor product in terms of its universal mapping property. You may not need that to do certain calculations inside a particular tensor product space, but as soon as you try to relate one tensor product space to something else (specifically, writing down a linear map out of a tensor product space) things are much simpler if you are comfortable using the universal mapping property. See math.uconn.edu/~kconrad/blurbs/linmultialg/tensorprod.pdf
– KConrad, Apr 19 '10 at 14:02