One way to define the algebra of differential forms $\Omega(M)$ on a smooth manifold $M$ (as explained by John Baez's week287) is as the exterior algebra of the dual of the module of derivations on the algebra $C^{\infty}(M)$ of smooth functions $M \to \mathbb{R}$. Given that derivations are vector fields, 1-forms send vector fields to smooth functions, and some handwaving about area elements suggests that k-forms should be built from 1-forms in an anticommutative fashion, I am almost willing to accept this definition as properly motivated.

One can now define the exterior derivative $d : \Omega(M) \to \Omega(M)$ by defining $d(f dg_1\ \dots\ dg_k) = df\ dg_1\ \dots\ dg_k$ and extending by linearity. I am almost willing to accept this definition as properly motivated as well.
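This definition can be sanity-checked symbolically. A minimal sketch using SymPy (the function $f$ is an arbitrary illustrative choice): applying $d$ twice to a 0-form produces a 2-form whose coefficients are differences of mixed partials, which vanish.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.exp(x) * sp.sin(y) * z**2  # an arbitrary smooth function

# df = f_x dx + f_y dy + f_z dz; applying d again gives a 2-form whose
# dx^dy, dy^dz, dz^dx coefficients are differences of mixed partials.
coeffs = [
    sp.diff(f, x, y) - sp.diff(f, y, x),  # dx ^ dy coefficient of d(df)
    sp.diff(f, y, z) - sp.diff(f, z, y),  # dy ^ dz coefficient
    sp.diff(f, z, x) - sp.diff(f, x, z),  # dz ^ dx coefficient
]
print([sp.simplify(c) for c in coeffs])  # all zero: d(df) = 0
```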

Now, the exterior derivative (together with the Hodge star and some fiddling) generalizes the three main operators of multivariable calculus: the divergence, the gradient, and the curl. My intuition about the definitions and properties of these operators comes mostly from basic E&M, and when I think about the special cases of Stokes' theorem for div, grad, and curl, I think about the "physicist's proofs." What I'm not sure how to do, though, is to relate this down-to-earth context with the high-concept algebraic context described above.

Question: How do I see conceptually that differential forms and the exterior derivative, as defined above, naturally have physical interpretations generalizing the "naive" physical interpretations of the divergence, the gradient, and the curl? (By "conceptually" I mean that it is very unsatisfying just to write down the definitions and compute.) And how do I gain physical intuition for the generalized Stokes' theorem?

(An answer in the form of a textbook that pays special attention to the relationship between the abstract stuff and the physical intuition would be fantastic.)

Really? I would like to award reputation for good answers and I am not necessarily just looking for a list of recommendations; perhaps someone has a clear enough intuition that it can be described in a paragraph or two.
– Qiaochu Yuan, Jan 3 '10 at 11:30


Have you seen From Calculus to Cohomology by Madsen and Tornehave? It's not really about physical intuition (which is why I'm making this a comment), but it might be helpful.
– Akhil Mathew, Jan 3 '10 at 15:28


I still think this should be community wiki because it's a sorted list. I didn't like an answer, and I'd like to vote it down, but not the user.
– Harry Gindi, Jan 3 '10 at 15:46

A 1-form is a function which grows proportionally to how fast you are moving. Thus it doesn't matter how you parametrize the curve you are moving on - you either end up integrating a smaller function for a longer period of time, or a bigger function for a shorter period of time. This is why you can't integrate functions on manifolds - they have no intrinsic "unit speeds", because there are many choices of local coordinates - but you can still integrate differential forms. k-forms just generalize this to higher dimensions.
– Steven Gubkin, Jan 27 '11 at 20:25

16 Answers

Here's a sketch of the relation between div-grad-curl and the de Rham complex, in case you might find it useful.

The first thing to realise is that the div-grad-curl story is inextricably linked to calculus in three-dimensional euclidean space. This is not surprising if you consider that this stuff used to go by the name of "vector calculus" at a time when a physicist's definition of a vector was "a quantity with both magnitude and direction". Hence the inner product is an essential part of the baggage, as is the three-dimensionality (in the guise of the cross product of vectors).

The beauty of this is that, first of all, the two vector calculus identities $\mathrm{div} \circ \mathrm{curl} = 0$ and $\mathrm{curl} \circ \mathrm{grad} = 0$ are now subsumed simply in $d^2 = 0$, and that whereas div, grad, curl are trapped in three-dimensional euclidean space, the de Rham complex exists in any differentiable manifold without any extra structure. We teach the language of differential forms to our undergraduates in Edinburgh in their third year and this is one way to motivate it.
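That subsumption can be checked directly. A sketch in SymPy (the scalar and vector fields are arbitrary illustrative choices), implementing grad, curl, and div by hand:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return [sp.diff(f, v) for v in (x, y, z)]

def curl(F):
    P, Q, R = F
    return [sp.diff(R, y) - sp.diff(Q, z),
            sp.diff(P, z) - sp.diff(R, x),
            sp.diff(Q, x) - sp.diff(P, y)]

def div(F):
    return sum(sp.diff(c, v) for c, v in zip(F, (x, y, z)))

f = x**2 * sp.sin(y) + z            # arbitrary scalar field (0-form)
F = [x*y, y*z*sp.cos(x), x + z**3]  # arbitrary vector field (1-form)

print(sp.simplify(div(curl(F))))                # 0: d^2 = 0 on 1-forms
print([sp.simplify(c) for c in curl(grad(f))])  # [0, 0, 0]: d^2 = 0 on 0-forms
```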

As for the integral theorems, I always found Spivak's Calculus on manifolds to be a pretty good book.

Another answer mentioned Gravitation by Misner, Thorne and Wheeler. Personally I found their treatment of differential forms very confusing when I was a student. I'm happier with the idea of a dual vector space than I am with the "milk crates" they draw to illustrate differential forms. Wald's book on General Relativity had, to my mind, a much nicer treatment of this subject.

I have struggled with this question myself, and I couldn't find a perfectly satisfactory answer. In the end, I decided that the definition of a differential form is a rather strange compromise between geometric intuition and algebraic simplicity, and that it cannot be motivated by either of these by itself. Here, by geometric intuition I mean the idea that "differential forms are things that can be integrated" (as in Bachmann's notes), and by algebraic simplicity I mean the idea that they are linear.

The two parts of the definition that make perfect geometric sense are the d operator and the wedge product. The operator d is simply the operator for which Stokes' theorem holds: if you integrate d of an $n$-form over an $(n+1)$-dimensional manifold, you get the same thing as if you integrated the form itself over the $n$-dimensional boundary.

The wedge product is a bit harder to see geometrically, but it is in fact the proper analogy to the product measure. Here's how it works for one-forms. Suppose you have two one-forms a and b (on a vector space, for simplicity). Think of them as a way of measuring lengths, and suppose you want to measure area. Here's how you do it: pick a vector $\vec v$ such that $a(\vec v) \neq 0$ but $b(\vec v) = 0$ and a vector $\vec w$ s.t. $a(\vec w) = 0$ but $b(\vec w) \neq 0$. Declare the area of the parallelogram determined by $\vec v$ and $\vec w$ to be $a(\vec v) \cdot b(\vec w)$. By linearity, this will determine area of any parallelogram. So, we get a two-form, which is in fact precisely $a \wedge b$.
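For two 1-forms this recipe collapses to a determinant: $(a \wedge b)(\vec v, \vec w) = a(\vec v)\,b(\vec w) - a(\vec w)\,b(\vec v)$. A tiny numeric illustration (the forms and vectors are arbitrary choices):

```python
import numpy as np

# Two 1-forms on R^3, represented by their coefficient vectors.
a = np.array([1.0, 2.0, 0.0])   # a = dx + 2 dy  (illustrative choice)
b = np.array([0.0, 1.0, 3.0])   # b = dy + 3 dz

def wedge(a, b, v, w):
    """(a ^ b)(v, w) = a(v) b(w) - a(w) b(v): the signed area assigned
    to the parallelogram spanned by v and w, as measured by a and b."""
    return a @ v * (b @ w) - a @ w * (b @ v)

v = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 0.0, 1.0])
print(wedge(a, b, v, w))   # 1*3 - 0*2 = 3
print(wedge(a, b, w, v))   # antisymmetry: -3
print(wedge(a, b, v, v))   # degenerate parallelogram: 0
```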

Now, the part that makes no sense to me geometrically is why the hell differential forms have to be linear. This implies all kinds of things that seem counter-intuitive to me; for example there is always a direction in which a one-form is zero, and so for any one-form you can draw a curve whose "length" with respect to the form is zero. More generally, when I was learning about forms, I was used to measures as those things which we integrate, and I still see no geometric reason as to why measures (and, in particular, areas) are not forms.

However, this does make perfect sense algebraically: we like linear forms, they are simple. For example (according to Bachman), their linearity is what allows the differential operator d to be defined in such a way that Stokes' theorem holds. Ultimately, however, I think the justification for this is all the short and sweet formulas (e.g. Cartan's formula) that make all kinds of calculations easier, all of which depend on this linearity. Also, the crucial magical fact that d-s, wedges, and inner products of differential forms all remain differential forms needs this linearity.

Of course, if we want them to be linear, they will also be signed, and so measures will not be differential forms. To me, this seems like a small sacrifice of geometry for the sake of algebra. Still, I don't believe it's possible to motivate differential forms by algebra alone. In particular, the only way I could explain to myself why we take the "Alt" of a product of forms in the definition of the wedge product is the geometric explanation above.

So, I think the motivation and power behind differential forms is that, without wholly belonging to either the algebraic or geometric worlds, they serve as a nice bridge in between. One thing that made me happier about all this is that, once you accept their definition as a given and get used to it, most of the proofs (again, I'm thinking of Cartan's formula) can be understood with the geometric intuition.

Needless to say, if anybody can improve on any of the above, I'll be very grateful to them.

P.S. For the sake of completeness: I think that "inner products" make perfect algebraic sense, but are easy to see geometrically as well.

Of course forms have to be signed (indeed, alternating) objects. Think of the classical Stokes' theorem, the direction of the normal vector and the associated direction of the boundary. If you reverse one you have to reverse the other, and the integrals change sign. As for linearity, think of integration over a very small surface – say, a parallelogram. If you double one side, the integral should double too (asymptotically). Similarly, if the sides are parallel, the integral vanishes. From ω(X,X)=0 and linearity you get the alternating property. Oh, and remember the Jacobian determinant?
– Harald Hanche-Olsen, Jan 3 '10 at 22:17
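The step in the comment above from $\omega(X,X)=0$ and bilinearity to the alternating property is a one-line polarization argument:

```latex
0 = \omega(X+Y,\, X+Y)
  = \omega(X,X) + \omega(X,Y) + \omega(Y,X) + \omega(Y,Y)
  = \omega(X,Y) + \omega(Y,X),
\qquad \text{hence} \qquad \omega(X,Y) = -\omega(Y,X).
```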


IMO the answer to your question as to why we choose forms to be linear goes back to a far simpler observation. That the determinant is the unique alternating multi-linear function on square matrices that takes value $1$ on the identity. This says alternating multi-linear objects measure (signed) volume. A form is just a linear combination of projections followed by determinants, so forms are precisely the objects you need to measure signed volume when you have positive co-dimension.
– Ryan Budney, Jan 4 '10 at 0:18

There are "things you can integrate" more general than differential forms, such as arc length or surface area. A fairly general notion of something you can integrate is a density in the sense of Gelfand. To amplify Harald's point on the importance of Stokes' Theorem, according to this MO question, imposing the linearity condition on a density is equivalent to asking that Stokes' Theorem holds.
– Tim Campion, Nov 5 '13 at 14:46

I'm not convinced that this is what you are looking for, Qiaochu, but I think it's worth mentioning anyway.

As someone who has no real sense for "physical intuition," and who -- probably not coincidentally -- hated his multivariable calculus class, I've found (what I've read of) David Bachman's A geometric approach to differential forms to be wonderfully intuitive. Best of all, it's available for free online.

As of 2012, there's a second edition of Bachman's book, available from Springer. It's aimed at a slightly more advanced audience than the original edition and includes an introductory chapter on differential geometry.
– J W, Apr 12 '14 at 11:17

To get an intuitive understanding of Stokes' theorem, I recommend the book by Arnol'd on mechanics. It gives a very intuitive definition of the exterior derivative, in such a way that Stokes' theorem becomes, heuristically, very easy to grasp.

The notes by Bachman recommended by Harrison Brown look pretty nice to me, but it seems to me that it is possible to clarify what he says even further by focusing on the simplest cases, namely the integral of a "constant function" over the simplest possible domain.

For the integral over an interval, the simplest case consists of a constant function. You can extend this case to the general case by the additive property of an integral and taking limits. But if you want an integral that is independent of the parameterization of the interval, this leads naturally to the idea that you don't want to integrate just a function $f(x)$ but a "1-form" $f(x) dx$.
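A numerical sketch of this parameterization independence (the integrand and substitution are arbitrary illustrative choices): pulling back $f(x)\,dx$ along $x = t^2$ multiplies the integrand by $dx/dt = 2t$, and the integral is unchanged.

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule for samples y at nodes x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

f = lambda x: np.cos(3.0 * x)
t = np.linspace(0.0, 1.0, 100001)

I1 = trapezoid(f(t), t)                # x(t) = t,   dx = dt
I2 = trapezoid(f(t**2) * 2.0 * t, t)   # x(t) = t^2, dx = 2t dt
print(I1, I2)                          # both approximate sin(3)/3
```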

This generalizes naturally to an integral of a constant function over a line segment sitting in $R^n$. If you want the concept of an integral that is independent of the choice of a linear parameterization of the segment, as well as of the linear co-ordinates on $R^n$, then this leads naturally to the fact that what should be integrated is a "dual vector", i.e. a constant $1$-form. In fact, when developing these ideas, I suggest using an abstract real vector space $V$ as the ambient space instead of $R^n$.

When considering higher dimensions, I suggest focusing on linear embeddings of a $k$-dimensional cube and asking what gives a linear co-ordinate independent additive function of flat $k$-cubes embedded in $R^n$. I have not worked out the details myself, but I suspect that this leads naturally to the concept of constant $k$-forms.

My recollection is that there is a book "Advanced Calculus" by Harold Edwards that presents all of this, but I haven't looked at the book in a very long time.

In particular, it is worth noting that the question asked is really about algebra and not analysis. The analysis arises only when you want to extend the definition of an integral to a more general class of functions beyond constant ones.

ADDED LATER:

My answer above does not address the exterior derivative. I will just add a brief comment about this and leave the details to the reader. My view of the exterior derivative is that, once you decide that exterior forms are indeed the natural objects of integration over a domain (but start with cubes!) in Euclidean space, it is the natural co-ordinate-free algebraic consequence of the fundamental theorem of calculus (or, if you insist, Stokes' theorem). That $d^2 = 0$ is the appropriate co-ordinate-free expression of the basic fact that "partials commute".
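In coordinates, "partials commute" is exactly the computation (written for a 0-form $f$, though the general case is the same):

```latex
d(df) = d\Big(\sum_i \frac{\partial f}{\partial x_i}\, dx_i\Big)
      = \sum_{i,j} \frac{\partial^2 f}{\partial x_j\,\partial x_i}\, dx_j \wedge dx_i
      = \sum_{i<j} \Big(\frac{\partial^2 f}{\partial x_j\,\partial x_i}
        - \frac{\partial^2 f}{\partial x_i\,\partial x_j}\Big)\, dx_j \wedge dx_i
      = 0,
```

where the last equality uses $dx_i \wedge dx_j = -\,dx_j \wedge dx_i$ together with the equality of mixed partials.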

The main thing I've never understood about differential forms is their "coordinate independence." I know what the general definition of a differentiable manifold is, so I see why it might be interesting there (topology might obstruct the existence of a global coordinate patch), but I don't understand why differential forms are taught for R^n. Since in R^n there is a natural inner product (by which you mean on the tangent space?) and coordinate system, why do we care? In what way are they "coordinate independent"? Is the coordinate change just if you want to change to polar or something?
– David Corwin, Jan 5 '10 at 3:25


It's true that there is a natural coordinate system on $R^n$, but you might well want to use a different one depending on the situation, e.g., polar coordinates for a spherically symmetric problem. When you change coordinates there is a formula telling you how integrals behave. One way to think of "coordinate independence" of differential forms is just that the way differential forms change under change of coordinates neatly encodes the behaviour of integrals.
– Joel Fine, Jan 5 '10 at 8:03


Heck, a lot of my research was about or on manifolds, and I rarely used differential forms. My thesis was actually about exterior differential systems (systems of equations defined by exterior differential forms), and even there I used very little of the formalism of differential forms! Use differential forms only if the formalism makes your life easier and not harder. Also, for me a lot of things on $R^n$ make a lot more sense and are much easier to work with, when I see that they do not require the use of a global co-ordinate system or inner product.
– Deane Yang, Jan 5 '10 at 15:07

There is a book that not many physicists I know of seem to like (except mathematical physicists, of course), but that is a true gem in the eyes of mathematicians: I am referring to V. Arnold's Mathematical Methods of Classical Mechanics.

In this book, which is in the short list (number 12, to be precise) of my fundamental math books across all fields, Chapter VIII is entirely devoted to differential forms.

If you read it, you have, I believe, an excellent answer.

One small suggestion to build understanding: DISCRETIZE. Do not think of fancy integrals; simply think that 0-forms are scalars, 1-forms oriented segments, 2-forms oriented areas, and that integration over them is simply a sum. Now "prove" Stokes' theorem for simple tiny cubes, and notice that the definition of the derivative of forms is exactly done to keep track of faces. At the infinitesimal level, it is just bookkeeping.

If I ever had to teach a basic class on forms, I would do precisely that: discretize first.
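The discretized exercise above can be carried out in a few lines. A sketch in Python (the grid size and edge values are made-up illustrative data): a discrete 1-form assigns a number to each oriented edge of a grid, its "d" sums the edges around each cell, and summing d over all cells telescopes to the circulation around the outer boundary.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
# A discrete 1-form: one value per oriented edge of an N x N grid.
H = rng.standard_normal((N + 1, N))  # horizontal edges, rightward orientation
V = rng.standard_normal((N, N + 1))  # vertical edges, upward orientation

# d(omega) on cell (i, j): bottom + right - top - left (counterclockwise),
# i.e. the discrete curl of the 1-form over that face.
d_omega = H[:-1, :] + V[:, 1:] - H[1:, :] - V[:, :-1]

# Circulation around the outer boundary, traversed counterclockwise.
boundary = (H[0, :].sum() + V[:, -1].sum()
            - H[-1, :].sum() - V[:, 0].sum())

# Every interior edge is shared by two cells with opposite signs, so the
# cell sums telescope: discrete Stokes' theorem.
print(np.isclose(d_omega.sum(), boundary))
```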

+1. The late William Burke's book has sadly been forgotten over the last decade or so, as Frankel's and Nakahara's more comprehensive texts have supplanted it. But I believe it should be required reading by both physicists and mathematicians in training.
– The Mathemagician, Mar 25 '12 at 8:07

I'm not sure if this point of view is taken up in the many references which are named here, but I'll say something about an "elementary" way to discover the exterior derivative which sounds like ordinary calculus. Let's take on the point of view that a $k$-form is something you integrate over a $k$-dimensional submanifold. If you imagine $k$-dimensional submanifolds as being composed of a $k$-dimensional blanket of little $k$-parallelograms, then this is a geometrically natural point of view since the $k$-form will assign a (small) number to each of these parallelograms. To actually realize a submanifold as such a "blanket" is to give a parameterization. (These parallelograms are oriented; this picture is different from surface integration of scalar functions in Riemannian geometry, where one simply imagines some distribution of mass on the manifold and the integral is completely measure-theoretic. There the parallelograms have a positive mass given by the $k$-dimensional volume determined by the inner product.)

In one-variable calculus, when $f$ is a function, $df$ tells you the change in $f$ per small change in its input, and if you integrate it over a curve from $a$ to $b$, it expresses the total change in $f$ from $a$ to $b$. Now, a one form $\eta$ is integrated not over points but rather over curves. Still, you can ask, how does $\int_\gamma\eta$ change when you perturb $\gamma$? Well, if you deform a closed curve $\gamma_a$ into another curve $\gamma_b$, the difference between the integrals over $\gamma_b$ and $\gamma_a$ is some derivative we can call "$d\eta$" integrated over the surface swept between the two.

Picturing the case where $\gamma_a$ and $\gamma_b$ bound an annulus is a good thing to consider here; this interpretation tells you how to orient the boundary of the annulus if you want to think of $\int_\Sigma d\eta = \int_{\gamma_b} \eta - \int_{\gamma_a} \eta$ as being $\int_{\partial \Sigma} \eta$. On the other hand, you can take the point of view that the orientation for $\Sigma$ is determined by the requirement that we start at $\gamma_a$ and go to $\gamma_b$ (much like the case for $df$ of a function). You can then contract the inner circle to a point to recover Stokes' theorem for a disk -- the integral over the inner circle will vanish in the limit by the linearity and continuity of the form (a similar thing will happen in higher dimensions but the linearity is needed for the cancellation over the inner, closed surface).

It's not completely necessary that the curve (or $k$-dimensional submanifold) you deform is closed, but as a rule the boundary should remain fixed during the deformation, or you will miss out on part of the boundary.

Using a specific example like a square/cube, we can get a coordinate representation for $d\eta$ through the fundamental theorem of calculus. (For $0$ forms, every point is closed, so we did not need to worry about the word "closed" before.)

It is easy to see many properties. For example, let's take $\eta$ to be a $1$-form in $3$-space; then $d^2 \eta$ is clearly $0$. Let $\gamma$ be a circle, and let $\Sigma_a$ and $\Sigma_b$ be the upper and lower hemispheres of a ball $B$ whose equator is $\gamma$. Then $\int_{\Sigma_a} d \eta = \int_\gamma \eta = \int_{\Sigma_b} d \eta$ by Stokes' theorem for a disk. On the other hand, the integral of $d^2\eta$ over the ball $B$ is just $\int_{\Sigma_b} d \eta - \int_{\Sigma_a} d\eta = 0$ because you can sweep out $B$ by deforming $\Sigma_a$ to $\Sigma_b$ with the boundary fixed. Since $\int_B d^2 \eta = 0$ for every ball, $d^2 \eta$ is identically $0$. When you execute this proof for a square, you see that mixed partials commute.

I would like to know if the product rule can easily be seen through this interpretation, but I have not thought enough about it to see it clearly yet.

Baez's book "Gauge Fields, Knots, and Gravity" does a good job of geometrically motivating differential forms in the first section on electromagnetism. Unfortunately, I already was happy with forms when I read it, so it may or may not be what you are looking for. It certainly does have all the right infinitesimal drawings to motivate the definitions, though. It might make a good intuitive complement to whatever abstract resource you choose.

You might also want to skim through parts of Hubbard's "Vector Calculus, Linear Algebra, and Differential Forms: a unified approach". This is used at Cornell as a textbook for a 2 semester calc/linear algebra/analysis sequence. About half of the second semester is spent developing and applying differential forms. There is somewhat less intuitive explanation, but some very good motivation for why we should define things the way we do.

I'd second Hubbard's book. He gives a very natural definition of the exterior derivative of a differential form, which is what it sounds like you're looking for, Qiaochu. IMO it's a significant step up compared to texts like Spivak or Bachman.
– Ryan Budney, Jan 3 '10 at 18:26

In the Faraday-Schouten pictograms of the electromagnetic field in 3-dimensional space, 1-forms are represented by two neighboring planes: the nearer the planes, the stronger the 1-form. The 2-forms are pictured as flux tubes: the thinner the tubes, the stronger the flow. The difference between a twisted and an untwisted form accounts for the two different types of 1- and 2-forms, respectively.

With this picture it is quite easy to imagine what he means by twisted forms!

A problem with this is that not all forms are representable by such a picture -- take $dx \wedge dy + dz \wedge dw$ for example. Dimension 3 has some very nice things going for it but it also restricts your intuition as to what a form is.
– Ryan Budney, Jan 4 '10 at 0:13

Misner, Thorne, and Wheeler's Gravitation is very good at providing a treatment of differential forms that appeals to physicists. But it is no longer the preeminent GR reference (though it's perfectly fine, its size is also an issue), so be warned. Dubrovin, Fomenko, and Novikov's Modern Geometry is also very good, but less structured.

As far as I can tell (I'm only on chapter 4), geometric algebra excludes visualizing general multivectors like $a \wedge b + c \wedge d$, but as Dan Piponi pointed out here:
http://homepage.mac.com/sigfpe/Mathematics/forms.pdf
you're probably okay thinking of that construction as two parallelograms.

I second the recommendation to at least flip through Gravitation. It has an intimidating size, but easygoing manner. I had a lot of difficulty with Spivak's Calculus on Manifolds (which has essentially no physical intuition outside the Archimedes exercise at the end), but I think I was uncomfortable with the abstract notions of tensor product and dual vector space at the time I was learning from it. At some point I caught on that df was supposed to eat vector fields and produce functions, and things got a little better.

Edit: I like to think of abstract forms as "things to integrate" and Stokes's theorem as some kind of adjunction between boundaries and the derivative. This becomes a bit more meaningful when homology and cohomology are introduced. I don't have much advice for connecting with physical intuition, but I have found it useful to:

Decompose div, grad and curl in terms of d and the metric.

Work through some E&M starting from a 1-form (strictly speaking a U(1)-connection) A on $\mathbb{R}^{1,3}$ (see Wikipedia).

Harold Edwards' book Advanced Calculus: A Differential Forms Approach starts with forms as the basic objects, and gives really nice intuitive explanations. It was written in 1969, as an undergraduate text from an unconventional point of view, and is still available from Birkhauser. But Edwards told me a few years ago that it was probably too hard for today's students (everything is done quite rigorously).