I am in the process of redesigning the calculus course that I have taught five or six times. What I would like to know is if anyone has some really good examples or exercises that I could either do in class or give as a project. In particular, I've found that I don't have many good examples/exercises that illustrate the awesomeness of the main theorems (Intermediate Value Theorem, Mean Value Theorem, etc.). All levels of difficulty are certainly appreciated. The intent is to have material that I can present or assign here and there throughout the course that goes beyond basic calculus and will challenge even those to whom math comes naturally.

An example of what I'm looking for is something like showing a continuous function on $S^1$ has to map two antipodal points to the same value.

EDIT: In response to Qiaochu Yuan, Calc I and II together form all of single variable calculus. For Calc I: limits, differentiation, Riemann integration (improper as well). For Calc II: sequences, series, polar coordinates, parametric curves. The old book for this course was Stewart's "Calculus: Early Transcendentals", but I don't follow any book when I teach.

Calculus I, II, and III don't have uniform meanings in the United States, let alone the rest of the world. So you should be more precise about what this means.
–
Qiaochu Yuan, Jan 20 '11 at 22:10


Harry: it is possible to learn analysis first using the Riemann integral, and then the Lebesgue integral. I have had the privilege of knowing several people for whom this order of doing things seems to have had no discernible detrimental effect.
–
Yemon Choi, Jan 20 '11 at 23:49


@Harry: your comments about Dieudonne and the regulated integral are squarely off-topic for this question. You seem to answer all pedagogical questions from the viewpoint "I wish I was taught the material in this way..." This is not especially mature or helpful. Also, when you venture to give advice to those who have taught calculus five or six times, if you have never taught it yourself perhaps you should mention that as a disclaimer.
–
Pete L. Clark, Jan 21 '11 at 5:36


I agree wholeheartedly with Harry that Calc 3 should wait until students have seen linear transformations. How can you make sense out of local linearity before you have seen linearity?
–
Steven Gubkin, Jan 21 '11 at 14:57

There are some evaluations of zeta(2) which (modulo perhaps a tiny amount of handwaving) only require (possibly multivariable) calculus: see Robin Chapman's collection of 14 proofs at scipp.ucsc.edu/~haber/ph116A/zeta2.pdf .
–
Qiaochu Yuan, Jan 21 '11 at 21:39

Thank you, I will profit from this collection too. As to proof 7 (Euler's original), the proof of Euler's product for sin(x) is also quite doable in calculus courses, via passage to the limit in the complete factorization of the polynomials $(1+ix/n)^n - (1-ix/n)^n$ (which is itself a nice computation).
–
Pietro Majer, Sep 15 '13 at 13:28

The theorems of calculus that you mention are all obviously true, and so a proper bed must be made for their discussion to not seem pedantic. Counterexamples are crucial, of course. Three of my favorites are:

$f(x)=\exp(-1/x^2)$ if $x\not=0$, and $f(0)=0$. This function has (all) derivatives at 0, but you need the limit definition to prove this. Moreover, all of those derivatives are 0. This makes it a good example (later) of a function whose Taylor series converges for all $x$ but agrees with $f$ only at $x=0$. That is, just because the Taylor series of $f$ converges doesn't mean that it converges to $f$. Remember to make the point that $f(x)+\sin(x)$, for example, has the same problem even though its Taylor series (centered at 0) doesn't have any obvious problems.
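A short numerical sketch of both phenomena (this only suggests what the limit definition proves; the difference quotients and sample points are just for illustration):

```python
import math

def f(x):
    """The flat function: exp(-1/x^2) for x != 0, and f(0) = 0."""
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# First-derivative difference quotients at 0: (f(h) - f(0)) / h -> 0.
for h in [0.5, 0.2, 0.1]:
    print(h, f(h) / h)

# Every Taylor coefficient at 0 vanishes, so the Taylor series at 0 is
# identically zero -- yet f(1) = exp(-1) is far from 0.
print(f(1.0))   # ~0.3679, while the Taylor series predicts 0
```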

A polynomial can be nonnegative with infimum 0, and yet never attain that infimum. This is obviously impossible in the eyes of all students, I've found. They will love to see an example, like $p(x,y)=x^2+(1-xy)^2$, and it will help them view Rolle's Theorem with the proper skepticism and awe.
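A quick numerical check of this example (the path $x=1/y$ is one convenient choice, not the only one):

```python
def p(x, y):
    return x**2 + (1 - x*y)**2

# Along the path x = 1/y the value is 1/y^2, which shrinks to 0 ...
for y in [10.0, 1e3, 1e6]:
    print(y, p(1.0 / y, y))

# ... but 0 itself is never attained: p = 0 would force x = 0 and
# xy = 1 simultaneously, which is impossible.
print(p(0.0, 5.0))   # = 1
```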

Another nice visual example is $g(x)=\exp(-x^2) \sin^2(1/x)$ (with $g(0):=0$), a fairly ordinary-looking function from the undergrad's viewpoint, but it is visually striking and makes it clear why some hypotheses are needed for the theorems of calculus. Specifically, it is another example of a supremum that is never attained.
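One way to see the unattained supremum numerically (a sketch; the sample points $x_k = 2/((4k+1)\pi)$, where $\sin(1/x_k)=1$, are chosen for illustration):

```python
import math

def g(x):
    return math.exp(-x**2) * math.sin(1.0 / x)**2

# At x_k = 2 / ((4k+1)*pi) we have sin(1/x_k) = 1, so g(x_k) = exp(-x_k^2):
# these values climb toward 1 as k grows, but each one is strictly below 1.
for k in [1, 10, 1000]:
    xk = 2.0 / ((4 * k + 1) * math.pi)
    print(k, g(xk))
```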

I have little personal experience with it, but some colleagues and friends hold the following text in high regard:

Robert M. Young, Excursions in calculus.
An interplay of the continuous and the discrete. The Dolciani Mathematical Expositions, 13. Mathematical Association of America, Washington, DC, 1992.

And here is a very favorable MathSciNet review by F.J. Papp.

This book does not belong on the office shelf of every mathematics instructor, nor does it belong on the book shelves of our students, neither does it belong on our library shelves. Rather, this book belongs on the desk right next to one's current course texts (and not just the calculus texts); as far as the library is concerned, ideally, this title should be nearly always checked out and in constant use. The book is one of those rare works that one can read from cover to cover with great pleasure and profit, or one can simply open to a randomly selected page and begin reading. Anyone with the least interest in mathematics will find something interesting and intriguing on just about every page and will in all likelihood not be able to stop with just one page. The "underlying theme is the elegant interplay that exists between the two main currents of mathematics, the continuous and the discrete''. In the preface, the author rather modestly speaks of his book as one possible supplement to a more traditional calculus course. It is that, of course, but it is also much more than that. It will also serve the very crucial purpose of educating our students and will help them understand that mathematics is not merely a collection of nonoverlapping, unrelated subdisciplines. Rather, they will experience their coursework in a new and much healthier way as, with the aid of this book, they begin to comprehend the essential unity of mathematics and the remarkably synergistic ways in which seemingly unrelated areas of mathematics can interact. Helping one's students to decompartmentalize their understanding of mathematics is in itself worth the price of the book.
Each of the six chapters is divided into several subsections, all but one of which concludes with a number of interesting problems. The subsection titles, included below, will give a tantalizing hint of the fascinating variety of topics to be found in this book. Contents: 1. Infinite ascent, infinite descent: the principle of mathematical induction (Patterns/Proof by induction/Applications/Infinite descent); 2. Patterns, polynomials, and primes: three applications of the binomial theorem (Disorder among the primes/Summing powers of the integers/Two theorems of Fermat, the "little'' and the "great''); 3. Fibonacci numbers: function and form (Elementary properties/The golden ratio/Generating functions/Iterated functions: From order to chaos); 4. On the average (The theorem of the means/The law of errors/Variations on a theme); 5. Approximation: from pi to the prime number theorem ("Luck runs in circles''/On the probability integral/Polynomial approximation and the Dirac delta function/Euler's proof of the infinitude of the primes/The prime number theorem); 6. Infinite sums: a potpourri (Geometry and the geometric series/ Summing the reciprocals of the squares/The pentagonal number theorem); Appendix: The congruence notation. Also included is a very extensive set of 463 references to books and articles. The final two parts of the book are a "sources for solutions'' section and the index. The sources section gives a problem-by-problem cross reference to one or more of the items in the list of references or else indicates that the problem is (at present) unsolved. The book also contains eight color plates (mostly dealing with fractals), numerous additional figures, and a variety of tables. The one section not having a set of problems is the final section of Chapter 6. This section is essentially Pólya's translation from the French of Euler's memoir on his "pentagonal number theorem''—thus giving readers the opportunity, as Abel put it, to "study [one of] the masters''.
Any book published in the Dolciani Mathematical Expositions series will naturally be measured against the remarkably high standards already well established by the previously published titles. In the present case, however, it is safe to say that this latest addition (the thirteenth in the series) not only easily meets the previously set standards but sets an entirely new standard for future volumes.

I'll heartily second this suggestion, even if it's over a year since it was made. The book is a wonderful resource for teaching calculus and grabbing the attention of good students.
–
Todd Eisworth, Nov 10 '11 at 13:52


+1 for that review. If the book is half as nicely written, it could still be a masterpiece.
–
Kevin O'Bryant, Nov 10 '11 at 15:54

I gave my Calculus II class a problem which introduced the Koch Snowflake and asked them to compute the area. They enjoyed it, and I think that had something to do with seeing an actual, geometric application of the infinite geometric series formula. Also: I loved the expression on their faces when I told them it had infinite perimeter.
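The computation behind the problem can be sketched in a few lines (areas are normalized so the starting triangle has area 1; the limit $8/5$ and the perimeter $3(4/3)^n$ are the standard answers):

```python
def koch(n, A0=1.0):
    """Area and perimeter of the Koch snowflake after n refinement steps,
    starting from an equilateral triangle of area A0 and side length 1."""
    area = A0
    sides, side_len = 3, 1.0
    for step in range(1, n + 1):
        # every existing side sprouts one new triangle whose area is
        # (1/9)^step of the original triangle's area
        area += sides * A0 * (1.0 / 9.0) ** step
        sides *= 4          # each side is replaced by 4 shorter ones
        side_len /= 3.0
    return area, sides * side_len

area, perim = koch(40)
print(area)    # the geometric series sums to 8/5 of the original area
print(perim)   # 3*(4/3)^n, which grows without bound
```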

I also gave them Gabriel's Horn as an in-class group exercise on surfaces of revolution and limits in general. This was earlier in the semester than the Koch problem, but it didn't have the same pizazz, even when I phrased it as: ``you can fill it with paint, but you can't paint it.'' I think they secretly believed limits were simply evil and that they should not trust things proven with limits.
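The paradox can also be made quantitative with truncated Riemann sums (a sketch; the cutoffs and step count are arbitrary): the volume integral $\pi\int_1^X x^{-2}\,dx$ settles down to $\pi$, while the surface integral $2\pi\int_1^X x^{-1}\sqrt{1+x^{-4}}\,dx \ge 2\pi\ln X$ keeps growing.

```python
import math

def horn(X, n=100000):
    """Midpoint Riemann sums for Gabriel's Horn (y = 1/x, x >= 1,
    rotated about the x-axis), truncated at x = X."""
    dx = (X - 1.0) / n
    vol = surf = 0.0
    for i in range(n):
        x = 1.0 + (i + 0.5) * dx
        vol += math.pi / x**2 * dx                               # pi * int 1/x^2
        surf += 2 * math.pi / x * math.sqrt(1 + 1 / x**4) * dx   # 2*pi * int (1/x)*sqrt(1 + 1/x^4)
    return vol, surf

v100, s100 = horn(100.0)
v1000, s1000 = horn(1000.0)
print(v100, v1000)   # volumes approach pi
print(s100, s1000)   # surface area keeps climbing, like 2*pi*ln(X)
```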

That "painting" metaphor used to seem cool but now bugs me. It breaks down when you realize that "paint" is actually used in two different ways: once as a three-dimensional object and once as a two-dimensional object. The interface between the paint filling and the horn is 2-dimensional, but the inside is not "painted" in the same sense as the outside. If you tried this with a "real" Gabriel's Horn, the nonzero extent of the molecules would make both filling and covering the horn with paint impossible.
–
Ryan Reich, Nov 10 '11 at 17:17

Actually, I was just thinking of commenting more about this painting metaphor, but not in the direction of your comment. When I gave this exercise, the teacher of the other section stole it and tweaked it in the following way. After proving the infinite surface area/finite volume thing, he said students could imagine taking 2 copies of the Horn, filling Horn1 with paint and dipping Horn2 into it. It will fit perfectly inside Horn1 and thus be coated with paint. He claimed you could thereby paint an infinite surface area with a finite volume of paint. I don't know how his students responded.
–
David White, Nov 10 '11 at 22:35


In order for this to be paradoxical, you need to assume the existence of an idealized form of paint that can stick to arbitrarily small bits of surface, yet cannot be spread arbitrarily thinly.
–
S. Carnahan♦, Nov 10 '11 at 23:50

When you fill the horn, it is painting the inside of the horn. It is just that the coat of paint is getting thinner and thinner as you go down the horn.
–
Steve, Feb 28 '13 at 20:32

A simple illustration of the Mean Value Theorem (particularly good for a less theoretical course, but I like it in any setting): A man is photographed at a tollbooth at 12:00, and then arrives at another tollbooth, 250 miles down the road, at 2:00. A cop pulls him over and gives him a traffic ticket for driving 125 mph.

His defense lawyer claims in court, "You can't prove that there was ever any particular moment when my client was actually driving 125 mph..."

I like to wade slowly into infinite series with the following two examples.

(1) Consider the following "proof" that 0=1.
$$\begin{eqnarray*}
0 &=& (1-1) \\
&=& (1-1) + (1-1) \\
&=& (1-1) + (1-1) + \dots \\
&=& 1 + (-1+1) + (-1+1) + \dots \\
&=& 1
\end{eqnarray*}$$
Students like this one because it feels like a party trick. But it's a useful illustration of the danger of handling infinite sums as if they were really long finite sums (assuming that every infinite series converges, casually rearranging the order of summation), and it will help you emphasize that infinite sums really can't work the same way that finite sums do.
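The point of the trick can be shown in two lines of computation: the partial sums of the ungrouped series $1 - 1 + 1 - 1 + \dots$ never settle down, so the series has no sum, and every regrouping step in the "proof" manipulates a divergent series.

```python
# Partial sums of 1 - 1 + 1 - 1 + ... : they oscillate between 1 and 0
# forever, so the series diverges and regrouping its terms is illegal.
partial, sums = 0, []
for n in range(10):
    partial += (-1) ** n     # terms are +1, -1, +1, -1, ...
    sums.append(partial)
print(sums)   # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```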

(2) You can prove that $0.999\dots = 1$ with series as follows.
$$ 0.999\dots = \sum_{i=1}^{\infty} \frac{9}{10^i} = \sum_{i=1}^{\infty} \frac{10-1}{10^i} = \sum_{i=1}^{\infty} \left(\frac{1}{10^{i-1}} - \frac{1}{10^i}\right) = \frac{1}{10^0} = 1
$$
(You'll have to convince them that the last equality comes from infinitely many cancellations, but after example (1) they might think this is more of your numerical prestidigitation.) This example has a nice moral to it: that real numbers don't necessarily have unique decimal representations. It also gives students a taste of the kind of arithmetic they'll be doing later on.
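Unlike example (1), the telescoping here is legitimate, and the partial sums show it: through $N$ terms everything cancels except the first term, leaving $1 - 10^{-N}$.

```python
# Partial sums of the telescoping series sum_{i=1}^{N} (10^-(i-1) - 10^-i):
# all the middle terms cancel, leaving exactly 1 - 10^-N.
def partial_sum(N):
    return sum(10.0 ** -(i - 1) - 10.0 ** -i for i in range(1, N + 1))

for N in [1, 5, 20]:
    print(N, partial_sum(N))   # 0.9, then 0.99999, then 1 to float precision
```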

A neat little exercise which uses the mean value theorem is to prove that if $f:\mathbb{R}\rightarrow\mathbb{R}$ is a function continuous on a non-empty open interval $I$ containing $a\in\mathbb{R}$ and differentiable on $I$ except perhaps at $a$, then $\lim_{x\rightarrow a}f'(x)=L\in\mathbb{R}$ implies that $f$ is differentiable at $a$ with derivative $L$.

It seems an odd kind of thing to prove, but I taught it so that students would have an easier method for deciding whether a function defined piecewise, by differentiable functions $u:\mathbb{R}\rightarrow \mathbb{R}$ for $x$ less than $a$ and $v:\mathbb{R}\rightarrow\mathbb{R}$ for $x$ greater than or equal to $a$, is differentiable (or not) at $a$.
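A numerical illustration of the exercise, with one hypothetical piecewise example chosen for convenience: glue $u(x)=\sin(x)$ ($x<0$) to $v(x)=x$ ($x\ge 0$) at $a=0$. The function is continuous, and $u'(x)=\cos(x)\to 1$ and $v'(x)=1\to 1$, so the exercise predicts $f'(0)=1$; the difference quotients agree from both sides.

```python
import math

# f: sin(x) for x < 0, x for x >= 0, glued at a = 0.
# Both one-sided derivative limits equal 1, so f'(0) should be 1.
def f(x):
    return math.sin(x) if x < 0 else x

for h in [0.1, 0.01, -0.01, -0.001]:
    print(h, (f(h) - f(0)) / h)   # approaches 1 from both sides
```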

Since nobody mentioned it yet, how about proving convergence and finding limits of some recursively defined sequences of real numbers? A few not too hard examples are:

1) $x_1=a, \ x_2=b, \ x_n=\frac{x_{n-1}+x_{n-2}}{2}$;

2) $x_0 >0, \ x_{n+1}=\frac{1}{2}(x_n+\frac{1}{x_n})$;

3) $x_0 = \sqrt{2}, \ x_{n+1}=\sqrt{2+x_n}$.

This gives you something instructive to do even before you discuss differentiation and integration. In my experience, most students figured out how to compute these limits; proving convergence was a harder sell, but in cases 2) and 3) it is accessible even to an average student.
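For reference, the three limits are $(a+2b)/3$, $1$, and $2$ respectively; a minimal numerical sketch (iteration counts are arbitrary, and of course the computation proves nothing about convergence):

```python
import math

def avg_seq(a, b, n=100):
    """x1 = a, x2 = b, x_k = (x_{k-1} + x_{k-2}) / 2; limit is (a + 2b) / 3."""
    for _ in range(n):
        a, b = b, (a + b) / 2.0
    return b

def newton_seq(x0, n=100):
    """x_{k+1} = (x_k + 1/x_k) / 2; limit is 1 for any x0 > 0."""
    x = x0
    for _ in range(n):
        x = (x + 1.0 / x) / 2.0
    return x

def nested_sqrt(n=100):
    """x0 = sqrt(2), x_{k+1} = sqrt(2 + x_k); limit is 2."""
    x = math.sqrt(2.0)
    for _ in range(n):
        x = math.sqrt(2.0 + x)
    return x

print(avg_seq(0.0, 1.0))   # -> 2/3
print(newton_seq(5.0))     # -> 1.0
print(nested_sqrt())       # -> 2.0
```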

Here is a problem that requires nontrivial integration techniques, because the closed-form answer is the sum of an algebraic and a transcendental function. If you have ever pulled down blinds by the edge, the parallel slats slant down and form the envelope of a certain curve. In the limit of infinitely dense slats, what is this curve? In case you can't picture this, the curve is defined by the condition that the length of the tangent segment from the curve to the $y$-axis is constant. This is one of the few cases where a reasonably general integral comes up naturally.

As for the IVT/MVT, they are not particularly profound. It is in my opinion better to prove things by bisection, which easily proves both, gives intuitive proofs for all their consequences, and essentially is sequential compactness. Bisection was used in 19th-century texts, but fell out of favor when the completeness of the reals became standardly axiomatized as the least upper bound principle.
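Bisection also doubles as a constructive proof of the IVT that students can run; a minimal sketch (the test function and interval are just for illustration):

```python
# Bisection as a constructive IVT: if f is continuous and changes sign
# on [lo, hi], repeatedly halving the interval traps a root.
def bisect(f, lo, hi, tol=1e-10):
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:   # sign change in the left half
            hi = mid
        else:                     # otherwise it is in the right half
            lo = mid
    return (lo + hi) / 2.0

root = bisect(lambda x: x**3 - 2.0, 1.0, 2.0)
print(root)   # -> cube root of 2, about 1.259921
```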

These sorts of envelope problems can be fun, maybe a bit on the challenging side depending on the class. What is the envelope of a bifold closet door (the shape of curve it leaves on the nap of the carpet as you open and close it)? Or, for fixed $k$, describe the curve whose envelope is the set of lines connecting $(x, 0)$ to $(0, k-x)$ (maybe rotate this by 45 degrees counterclockwise, and try to describe this curve as the graph of a function $y = f(x)$).
–
Todd Trimble♦, Aug 9 '11 at 0:14

The following are quite standard topics, but not usually done elementarily.

1. Construct the real exponential function, starting from the definition $\exp(x):=\lim_{n\to\infty}(1+x/n)^n$, and prove all of its properties from there: this approach provides a sequence of nice, instructive and well-motivated exercises. In particular, the monotonicity $(1+x/n)^n\le (1+x/(n+1))^{n+1}$ (for $n>-x$) can be proved by the arithmetic-geometric mean inequality. Also, a key lemma is: if $x_n\to x$ then $(1+x_n/n)^n\to \exp(x)$, which easily implies $\exp(x+y)=\exp(x)\exp(y)$. A simple "dominated convergence theorem" for series yields the exponential series. A construction of the logarithm is $\log(u):=\lim_{n\to\infty} n(u^{1/n}-1)$, where the convergence follows from the above-mentioned lemma.

2. Construct the complex exponential function, defined again as $\exp(z):=\lim_{n\to\infty}(1+z/n)^n=\sum_{k=0}^{\infty}z^k/k!$. The only novelty: now there is no useful monotonicity, and the convergence is best proven from the series.
For the rest, almost all the computations of part 1 carry over to the complex case with no change. From the complex exponential, a treatment of the trigonometric functions, the polar representation of complex numbers, and the roots of unity can easily be deduced.

3. The Euler product for $\sin(x)$. Define $\sin(x):=\frac{e^{ix}-e^{-ix}}{2i}$; then approximate it by the polynomials $p_n(x):=\frac{(1+ix/n)^n-(1-ix/n)^n}{2i}$. These polynomials have simple roots that are easily computed; a multiplicative version of the "dominated convergence theorem" of part 1 allows one to pass to the limit and yields the Euler infinite product for $\sin(x)$.

4. The construction of the real Gamma function, starting from the Artin-Bohr-Mollerup characterization. By taking logarithms and derivatives one reduces to studying the functional equation $f(x+1)=f(x)+1/x$. From this characterization it is easy to derive all the standard representations of the Gamma function, together with all the main formulas (usually these can be put in the form $\Gamma(x)=\mathrm{something}$, and follow from the ABM characterization by proving that the right-hand side solves the functional equation and is logarithmically convex).
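The limit definitions in parts 1 and 3 above are also easy to check numerically; a sketch with an arbitrary large $n$ (nothing here replaces the convergence proofs, of course):

```python
import math

# Part 1: exp(x) as the limit of (1 + x/n)^n, and log(u) as n*(u^(1/n) - 1).
n = 10**6
print((1 + 2.0 / n) ** n)           # -> exp(2), about 7.389
print(n * (5.0 ** (1.0 / n) - 1))   # -> log(5), about 1.609

# Part 3: p_n(x) = ((1 + ix/n)^n - (1 - ix/n)^n) / (2i) approximates sin(x).
def p(x, n):
    z = complex(0.0, x / n)
    return (((1 + z) ** n - (1 - z) ** n) / 2j).real

print(p(1.0, n))                    # -> sin(1), about 0.8415
```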

For a project in calculus, I like the brachistochrone problem: the students investigate the curve that minimizes the time it takes a bead to roll down from one point to another that is not directly below the first. They are also asked to build a prototype of the curve.
A calculus book that I like, and use in my courses, is the one by George F. Simmons (Calculus with Analytic Geometry).

This is not really much of a problem, but it is still a nice stumbling block for the inexperienced, and an example of the evilness of formal symbol manipulations, even innocent-looking ones.

Consider a function $f:\mathbb{R}^2\to \mathbb{R}$ in coordinates $(x,y)$, and a change of coordinates $(x,y)\mapsto (x,xy)$. Find the partial derivatives in this new chart. If one is acting formally, one can assume $$\left(\frac{\partial f}{\partial x}\right)_{new} = \left(\frac{\partial f}{\partial x}\right)_{old}$$ but then the change-of-chart formula gives $$\frac{\partial f}{\partial x} = \frac{\partial f}{\partial x} + y \frac{\partial f}{\partial (xy)}$$ so $\frac{\partial f}{\partial (xy)} = 0$, an obvious contradiction. This is a good example of abuse of notation leading to fallacy, and a reminder that partial derivatives are taken not with respect to a lone coordinate, but with respect to an entire chart (a vector field, in the general case). Also an example of the relative nature of coordinates.
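A finite-difference sketch of the fallacy, using one hypothetical concrete function chosen for illustration: the function whose value at $(x,y)$ is $F(x,y)=x+xy$ reads $G(u,w)=u+w$ in the new chart $(u,w)=(x,xy)$, and "$\partial/\partial x$" gives different answers depending on which other coordinate is held fixed.

```python
# F(x, y) = x + x*y in the old chart; the same function is
# G(u, w) = u + w in the new chart (u, w) = (x, x*y).
h = 1e-6
x0, y0 = 2.0, 3.0
w0 = x0 * y0

F = lambda x, y: x + x * y
G = lambda u, w: u + w

old = (F(x0 + h, y0) - F(x0, y0)) / h   # holds y fixed:      1 + y = 4
new = (G(x0 + h, w0) - G(x0, w0)) / h   # holds w = xy fixed: 1
print(old, new)   # the two "d/dx" disagree, as the fallacy ignores
```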

I think for a very elementary calculus course, one would do better teaching only physics. In particular, I would remove the complication of integration methods, except for the more elementary ones. I would give more emphasis to the meaning of differential equations. I would focus on constructing models and fiddling around with them, giving the appropriate definitions along the way.

For the next level of calculus, I would use Spivak's text, which is definitely more advanced but gives a flavor of real math, together with a huge set of wonderful exercises.