What interesting applications are there for theorems or other results studied in first-year calculus courses?

A good example of such an application would be using a calculus theorem to prove a result in group theory. On the other hand, the importance of calculus in applied mathematics or in physics is well known, and therefore is not a good example.

Right, surely she means "trivial to demonstrate", which seems to be true.
– Pete L. Clark, Aug 10 '10 at 21:55

I think there's a fundamental question that needs to be addressed before this question can be answered: What do we mean by "first year calculus"? This varies a lot, from highly theoretical honors courses like Spivak's to plug-and-chug courses like Stewart's. What's meant by first year calculus in general?
– Andrew L, Aug 12 '10 at 2:56

@Irene Ok, but I think this has to come with a disclaimer in that case. Spivak is NOT your typical first year calculus course for typical undergraduates in the U.S.
– Andrew L, Aug 12 '10 at 22:54

Following on from the Galois theory example of Johannes, one straightforward way to produce an explicit polynomial with non-soluble Galois group over ${\mathbb Q}$ is to use an irreducible quintic with exactly three real roots, which necessarily has Galois group $S_5$. To check that an explicit polynomial (such as $x^5-4x+2$ if I am not mistaken, I am typing from memory) has this latter property reduces to standard calculus arguments such as "differentiate, find turning points, estimate values, use intermediate value theorem". I always find this calculus interlude at the end of half a semester of algebra quite amusing.
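Indeed $x^5-4x+2$ works: it is irreducible over $\mathbb{Q}$ by Eisenstein at $p=2$, and a quick numerical check (my addition, not part of the answer) confirms exactly three real roots. All real roots lie in $[-2,2]$, since $|x|\geq 2$ forces $|x^5| > |4x|+2$, so scanning a fine rational grid there suffices; the three roots are well separated and none is rational, so sign changes count them exactly.

```python
from fractions import Fraction

def f(x):
    return x**5 - 4*x + 2

def count_sign_changes(lo, hi, steps):
    """Count sign changes of f on a uniform rational grid: a lower bound
    on the number of real roots, and exact here since the roots are well
    separated and none lands on a grid point."""
    step = Fraction(hi - lo, steps)
    prev = f(Fraction(lo))
    changes = 0
    for i in range(1, steps + 1):
        cur = f(Fraction(lo) + i * step)
        if prev * cur < 0:
            changes += 1
        prev = cur
    return changes

print(count_sign_changes(-2, 2, 400))  # 3
```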

The mean-value theorem (of differential calculus) can be used to prove that Liouville numbers are transcendental. The proof is quite simple, taking only a couple of lines. See Theorem 191 of Hardy and Wright's "An Introduction to the Theory of Numbers" on Google books.

I believe, historically, that these were the first known examples of transcendental numbers.
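The approximation property that drives the proof can be checked exactly with rational arithmetic (my sketch, not from the answer): the partial sums $p_n/q_n$ of Liouville's constant, with $q_n = 10^{n!}$, approximate it to within $q_n^{-n}$, far better than the mean-value-theorem bound allows for any algebraic irrational.

```python
from fractions import Fraction
from math import factorial

# Liouville's constant x = sum_{k>=1} 10^{-k!}.  We truncate at k = 8;
# the omitted tail is below 2*10**(-362880) and cannot affect the
# comparisons below.
x = sum(Fraction(1, 10**factorial(k)) for k in range(1, 9))

for n in range(1, 6):
    q = 10**factorial(n)
    p_over_q = sum(Fraction(1, 10**factorial(k)) for k in range(1, n + 1))
    assert abs(x - p_over_q) < Fraction(1, q**n)
print("|x - p_n/q_n| < q_n^(-n) verified for n = 1..5")
```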

An interesting application of calculus is the elementary polynomial case of Mason's ABC theorem. This yields, for instance, a completely trivial proof of the polynomial case of FLT (Fermat's Last Theorem). That this works so effectively for polynomials (functions) vs. numbers is due to the fact that for functions we have available the derivative, which implies that we can exploit Wronskians as a measure of algebraic independence. Such Wronskian estimates serve as fundamental tools in diophantine approximation. See my post [1] for further details and references.
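To make the statement concrete (my illustration, not from the post): for coprime polynomials $a+b=c$, not all constant, Mason's theorem gives $\max(\deg a, \deg b, \deg c) \leq \deg \operatorname{rad}(abc) - 1$, where $\operatorname{rad}$ is the squarefree part, computed below via $\deg\operatorname{rad}(P) = \deg P - \deg\gcd(P, P')$. A self-contained check on the identity $(x^2+2x) + 1 = (x+1)^2$:

```python
from fractions import Fraction as Fr

# Polynomials over Q as coefficient lists, constant term first.

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

def mul(f, g):
    h = [Fr(0)] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += Fr(fi) * gj
    return trim(h)

def deriv(f):
    return trim([Fr(k) * f[k] for k in range(1, len(f))] or [Fr(0)])

def pmod(f, g):
    # polynomial remainder with exact rational arithmetic
    f = trim([Fr(v) for v in f])
    g = trim([Fr(v) for v in g])
    while len(f) >= len(g) and f != [Fr(0)]:
        shift = len(f) - len(g)
        coef = f[-1] / g[-1]
        for i in range(len(g)):
            f[shift + i] -= coef * g[i]
        f = trim(f)
    return f

def pgcd(f, g):
    f, g = trim(list(f)), trim(list(g))
    while g != [Fr(0)]:
        f, g = g, pmod(f, g)
    return f

a, b, c = [0, 2, 1], [1], [1, 2, 1]                       # a + b = c
P = mul(mul(a, b), c)                                     # abc
rad_deg = (len(P) - 1) - (len(pgcd(P, deriv(P))) - 1)     # deg rad(abc)
assert max(len(a), len(b), len(c)) - 1 <= rad_deg - 1
print("Mason's bound holds:", max(len(a), len(b), len(c)) - 1, "<=", rad_deg - 1)
```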

An example that I like is the proof that $e^{A+B}=e^A e^B$ for commuting
matrices $A,B$. Since the matrix exponential is defined by the usual
exponential series, we have to prove that

$\sum \frac{(A+B)^n}{n!}=\sum\frac{A^n}{n!}\sum\frac{B^n}{n!}$

This follows, without actually computing the two sides, by observing
that it is the same computation as for real numbers $A,B$ (because $A$
and $B$ commute). And for real numbers we know the result is correct by
first-year calculus.
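The identity is easy to test numerically. A small sketch (my addition, with hand-rolled $2\times 2$ matrices so nothing beyond the exponential series is used); $B = A^2$ commutes with $A$:

```python
# Truncated exponential series for small 2x2 float matrices.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def expm(M, terms=30):
    """sum_{n < terms} M^n / n!; 30 terms is ample for these entries."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity: the n = 0 term
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = matmul(power, M)
        fact *= n
        result = matadd(result, [[power[i][j] / fact for j in range(2)]
                                 for i in range(2)])
    return result

A = [[0.1, 0.2], [0.3, 0.4]]
B = matmul(A, A)                 # A^2 commutes with A
lhs = expm(matadd(A, B))
rhs = matmul(expm(A), expm(B))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("exp(A+B) = exp(A)exp(B) holds numerically for commuting A, B")
```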

This proof is slick, but I am not convinced of its merits: are you proposing to prove an $\textit{algebraic}$ identity using a functional interpretation? In principle (although not in this particular instance), it could happen that different algebraic expressions evaluate to the same function (if this had happened for numbers, but not for matrices, it wouldn't have worked). Perhaps, I am misunderstanding what you mean when you say "This follows, without actually computing the two sides, by the same computation as for real numbers".
– Victor Protsak, Aug 11 '10 at 0:34

John, by "same computation" do you mean the following: we would like to interpret $e^A e^B = e^{A + B}$ as an identity in formal power series with commuting variables $A, B$. To do so, we observe that the image of the map, given by Taylor series, from two-variable analytic real functions to $R[[A,B]]$ contains both sides of the equation, and that the equation holds for the actual functions by means of some calculus argument.
– Ryan Reich, Aug 11 '10 at 1:45

The combinatorial proof is more conceptual and almost as short. The left-hand side is the data type for $(A \cup B)$-multisets. The right-hand side is the data type for pairs of $A$-multisets and $B$-multisets. Obviously they are isomorphic. (You can now take images to analytic power series.) This is essentially the same as the combinatorial proof of the binomial formula but without the messy detour through binomial coefficients.
– Per Vognsen, Aug 11 '10 at 16:22

Nice proof, Per. I also like to see applications of combinatorics to first-year calculus :)
– John Stillwell, Aug 11 '10 at 23:03

Going in a completely different direction: a surprising application of calculus is the use of the Leibniz and chain rules to differentiate data types to create new types that represent structures with 'holes' in them. See here for an elementary exposition.

(This is closely related to the differentiation of generating functions and combinatorial species.)

I think you could probably show a smart group of first year calculus students how to get an exact formula for the Fibonacci numbers using generating functions, which basically just boils down to knowing partial fraction decomposition and a few standard power series. You could then point them to Wilf's book if this makes them curious about generating functions in combinatorics.
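A sketch of where the calculation lands (my addition): partial fractions applied to the generating function $x/(1-x-x^2)$ give Binet's formula $F_n = (\varphi^n - \psi^n)/\sqrt 5$ with $\varphi, \psi = (1 \pm \sqrt 5)/2$, which can be checked against the recurrence:

```python
from math import sqrt

# Closed form from partial fractions of x/(1 - x - x^2).
phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2

def fib_closed(n):
    # rounding absorbs float error; exact in double precision for n < ~70
    return round((phi**n - psi**n) / sqrt(5))

a, b = 0, 1
for n in range(60):
    assert fib_closed(n) == a
    a, b = b, a + b
print("Binet's formula matches the recurrence for n < 60")
```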

The notion of a formal derivative of a polynomial over some ring comes from the ordinary derivative of a polynomial over the real and complex numbers. Furthermore, results true over the real numbers, such as that $(fg)'=f'g+g'f$ and $(f \circ g)' = (f' \circ g) g'$, continue to hold over arbitrary (commutative) rings. However, these results are much easier to prove over the real numbers using analytic techniques, and one might legitimately argue that mathematicians were only led to the corresponding formal results by the inspiration of the results in calculus.

Furthermore, using something along the lines of the Lefschetz principle, one can probably derive the identities for formal derivatives from the corresponding facts for derivatives of polynomials over the complex numbers.
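These identities can be checked exhaustively in small cases. A sketch (my illustration) verifying the product rule for polynomials over $\mathbb{Z}/6\mathbb{Z}$, a ring with zero divisors where no analytic argument is available; polynomials are coefficient lists, constant term first:

```python
from itertools import product

M = 6  # work mod 6: a ring with zero divisors

def polymul(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % M
    return h

def polyadd(f, g):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [(a + b) % M for a, b in zip(f, g)]

def polyderiv(f):
    # formal derivative: k * f[k] becomes the coefficient of x^(k-1)
    return [(k * f[k]) % M for k in range(1, len(f))] or [0]

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

# Check (fg)' = f'g + fg' for many pairs of quadratics over Z/6Z.
quadratics = list(product(range(M), repeat=3))
for fc in quadratics[:50]:
    for gc in quadratics[:50]:
        f, g = list(fc), list(gc)
        lhs = trim(polyderiv(polymul(f, g)))
        rhs = trim(polyadd(polymul(polyderiv(f), g), polymul(f, polyderiv(g))))
        assert lhs == rhs
print("product rule verified for 2500 pairs over Z/6Z")
```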

Two comments. First, I don't agree that it is "much easier" to use the limit defn. of the real derivative to prove differentiation rules than it is to use induction for proving the same rules on polynomials over any (comm.) ring. Second, it is not legitimate to argue that the formal derivative was introduced only after calculus was developed. The formal derivative on real polynomials came first! See J. Grabiner, "The Changing Concept of Change: the Derivative from Fermat to Weierstrass," Math. Magazine 56 (1983), 195-206. On JSTOR this is at jstor.org/stable/2689807
– KConrad, Aug 10 '10 at 21:09

To avoid any misunderstanding, I am not suggesting the formal defn. of derivative should come first in teaching, since of course the real derivative in calculus provides extra geometric intuition.
– KConrad, Aug 10 '10 at 21:11

Can you give some references for derivatives of polynomials over general rings? (The stranger, the better). Thanks!
– Jose Brox, Aug 11 '10 at 11:50

A few years ago I gave a departmental colloquium talk, aimed at beginning M.A. students, on "An application of calculus to ring theory." A slightly facetious little abstract can be found here. The example establishing the main result, very well known to workers in commutative ring theory, was the ring of germs at $0$ of class $C^{\infty}$ functions on $\mathbb{R}$. A bit of calculus is needed in verifying the requisite properties.

A nice application of calculus that leads to a surprising and far-reaching result, first obtained by the great Gauss himself, is the computation of the Arithmetic-Geometric Mean of two numbers $a > b > 0$. A comparatively short way to this end is presented on the first pages of J. and P. Borwein's "Pi and the AGM".
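For concreteness (my addition): the iteration itself is two lines, and its quadratic convergence means only a handful of steps are needed; the value $\operatorname{agm}(1,\sqrt 2) \approx 1.19814$ is the one Gauss famously related to $\pi$ and the lemniscate constant.

```python
from math import sqrt

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean: iterate a <- (a+b)/2, b <- sqrt(ab).
    Convergence is quadratic: the number of correct digits doubles
    with each step."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, sqrt(a * b)
    return (a + b) / 2

print(agm(1.0, sqrt(2.0)))  # 1.1981402347... = pi / (lemniscate constant)
```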

In number theory, here are four applications of techniques or results in first-year calculus.

(1) Finding equations of tangent lines by first-semester calculus methods lets us add points on elliptic curves using the Weierstrass equation for the curve. This is more algebraic geometry than number theory, so I'll add that the methods show if the Weierstrass equation has rational coefficients then the sum of two rational points is again a rational point.
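For illustration (my example, not the answer's), here is chord-and-tangent addition on the classical curve $y^2 = x^3 + 17$, which has the rational points $(-2,3)$ and $(2,5)$; the sketch only handles affine points in general position and ignores the point at infinity:

```python
from fractions import Fraction as Fr

a, b = Fr(0), Fr(17)          # the curve y^2 = x^3 + 17

def add(P, Q):
    (x1, y1), (x2, y2) = P, Q
    if P == Q:
        lam = (3 * x1 * x1 + a) / (2 * y1)   # slope of the tangent line
    else:
        lam = (y2 - y1) / (x2 - x1)          # slope of the chord
    x3 = lam * lam - x1 - x2
    y3 = lam * (x1 - x3) - y1
    return (x3, y3)

def on_curve(P):
    x, y = P
    return y * y == x * x * x + a * x + b

P, Q = (Fr(-2), Fr(3)), (Fr(2), Fr(5))
assert on_curve(P) and on_curve(Q)
R = add(P, Q)
assert on_curve(R)            # the sum of rational points is rational
print(R)                      # the point (1/4, -33/8)
```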

(2) The recursion in Newton's method from differential calculus is the basic idea behind Hensel's lemma in $p$-adic analysis (or, more simply, lifting solutions of congruences from modulus $p$ to modulus $p^k$ for all $k \geq 1$).
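To make the analogy concrete, a minimal sketch (my own illustration) lifting the root $3$ of $x^2 - 2 \bmod 7$ by a Newton step, which doubles the $p$-adic precision each iteration just as Newton's method doubles correct digits:

```python
p = 7

def f(x):
    return x * x - 2

def fprime(x):
    return 2 * x

x, mod = 3, p            # 3^2 = 9 = 2 mod 7, and f'(3) is a unit mod 7
for _ in range(5):
    mod = mod * mod      # double the p-adic precision
    # Newton step reduced mod the new modulus (requires Python 3.8+)
    x = (x - f(x) * pow(fprime(x), -1, mod)) % mod
    assert f(x) % mod == 0
print(x, "squares to 2 mod 7^32")
```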

(3) The infinitude of the primes can be derived from the divergence of the harmonic series (the zeta-function at 1), which is based on a bound involving the definition of the natural logarithm as an integral.
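The integral comparison behind that bound, $H_N > \ln(N+1)$ since $1/n \geq \int_n^{n+1} dx/x$, is easy to check numerically (my sketch):

```python
from math import log

def H(N):
    """Harmonic sum 1 + 1/2 + ... + 1/N."""
    return sum(1.0 / n for n in range(1, N + 1))

# H_N exceeds log(N+1), so the harmonic series diverges.
for N in (10, 100, 1000, 10000):
    assert H(N) > log(N + 1)
print("H_N > log(N+1) verified up to N = 10000")
```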

(4) Unique factorization in the Gaussian integers can be derived from the Leibniz formula
$$
\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots = \sum_{n \geq 0} \frac{(-1)^n}{2n+1}
$$
by interpreting it as a case of Dirichlet's class number formula $2\pi h/(w\sqrt{|D|}) = L(1,\chi_D)$ for $\chi_D$ the primitive quadratic character associated to ${\mathbf Q}(\sqrt{D})$ where $D$ is a negative fundamental discriminant, $h$ is the class number of ${\mathbf Q}(\sqrt{D})$ and $w$ is the number of roots of unity in ${\mathbf Q}(\sqrt{D})$. Taking $D = -4$ turns the left side into $2\pi h/(4\sqrt{4}) = (\pi/4)h$, so the Leibniz formula is equivalent to $h = 1$, which is another way of saying $\mathbf Z[i]$ is a PID or equivalently (for Dedekind domains) a UFD.
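The series itself can be tested directly (my addition); the alternating-series remainder bound makes the check honest at any truncation:

```python
from math import pi

# Partial sums of the Leibniz series converge slowly to pi/4; the error
# is below the first omitted term 1/(2N+1).
N = 10**5
s = sum((-1) ** n / (2 * n + 1) for n in range(N))
assert abs(s - pi / 4) < 1 / (2 * N + 1)
print(s)
```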

Here are two more applications, not in number theory directly.

(5) Gerry Edgar mentions in his answer Niven's proof of the irrationality of $\pi$, which is available in Spivak's calculus book. The same ideas imply irrationality of $e^a$ for every positive integer $a$, which in turn easily implies irrationality of $e^r$ for nonzero rational $r$ and thus also irrationality of $\log r$ for positive rational $r \not= 1$. The calculus fact in the proof of irrationality of the numbers $e^a$ is that for all positive integers $n$ the polynomial
$$
\frac{x^n(1-x)^n}{n!}
$$
and all of its higher derivatives take integer values at $0$ and $1$. That implies a certain expression involving a definite integral is a positive integer, and then with the fundamental theorem of calculus that same expression turns out to be less than 1 for large $n$ (where "large" depends on the hypothetical denominator of a rational formula for $e^a$), and that is a contradiction.
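The integrality claim can be verified exactly (my sketch): writing $x^n(1-x)^n = \sum_k c_k x^k$ with integer $c_k$, the $k$-th derivative of $x^n(1-x)^n/n!$ at $0$ is $k!\,c_k/n!$, an integer because $c_k = 0$ for $k < n$ and $n! \mid k!$ for $k \geq n$; the values at $1$ then follow from the symmetry $f(1-x) = f(x)$.

```python
from math import comb, factorial

def derivative_at_zero(n, k):
    """k-th derivative of x^n (1-x)^n / n! at x = 0, proved integral."""
    if not n <= k <= 2 * n:
        return 0                               # coefficient c_k vanishes
    c = (-1) ** (k - n) * comb(n, k - n)       # c_k from the binomial theorem
    num = factorial(k) * c
    assert num % factorial(n) == 0             # the integrality claim
    return num // factorial(n)

for n in range(1, 8):
    for k in range(0, 2 * n + 1):
        derivative_at_zero(n, k)
print("all derivatives integral at 0 for n = 1..7")
```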

(6) Prove that if $f$ is a smooth function (= infinitely differentiable) on the real line and $f(0) = 0$ then $f(x) = xg(x)$ where $g$ is a smooth function on the real line. There is no difficulty in defining what $g(x)$ has to be if it exists at all, namely
$$
g(x) = \begin{cases}
f(x)/x, & \text{ if } x \not= 0, \\
f'(0), & \text{ if } x = 0.
\end{cases}
$$
And easily the function defined this way is continuous on the real line and satisfies $f(x) = xg(x)$. But why is this function smooth at $x = 0$ (smoothness away from $x = 0$ is easy)? You can try to do it using progressively messier formulas for higher derivatives of $g$ at 0 by taking limits, but a much slicker technique is to use the fundamental theorem of calculus to write
$$
f(x) = f(x) - f(0) = \int_0^x f'(t)\,dt = x\int_0^1 f'(xu)\,du,
$$
which leads to a different formula for $g(x)$ that doesn't involve cases:
$$
g(x) = \int_0^1 f'(xu)\,du.
$$
If you're willing to accept differentiation under the integral sign (maybe that's not in the first-year calculus curriculum, but we used first-year calculus to get the slick formula for $g(x)$) then the right side is easily checked to be a smooth function of $x$ from $f$ being smooth.
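A numerical illustration (mine, with $f = \sin$, so $f(0)=0$): the formula $g(x) = \int_0^1 f'(xu)\,du$ reproduces $\sin(x)/x$ and passes smoothly through $x=0$ with value $f'(0)=1$.

```python
from math import sin, cos

def g(x, steps=10000):
    """Midpoint-rule approximation of int_0^1 cos(x*u) du."""
    h = 1.0 / steps
    return sum(cos(x * (i + 0.5) * h) for i in range(steps)) * h

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    expected = sin(x) / x if x != 0 else 1.0   # f'(0) = cos(0) = 1
    assert abs(g(x) - expected) < 1e-6
print("g(x) = int_0^1 f'(xu) du matches sin(x)/x, including at x = 0")
```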

Shanks' simplest cubic $x^3-ax^2-(a+3)x-1$ has
discriminant $D=(a^2+3a+9)^2$ and hence 3 real roots. A calculus way
to see this is to rewrite the cubic equation as $f(x)=a$, where
$$f(x)=\frac{x^3-3x-1}{x^2+x}=x-1
-\frac{1}{x}-\frac{1}{x+1}$$ whose graph has three monotone branches,
which clearly intersect $y=a$ at three real points for any real
$a$. In particular, we can pick $a$ to be any of the (real) roots
which means we can iterate the construction and get a cubic tower
of totally real fields. Also (though this is not relevant to the
question), it's nice to see $f$ is a trace
$$f(x)=x+\rho(x)+\rho^2(x),$$ where
$\rho(x)=-1/(x+1)$ has order 3 in $PSL_2(\mathbb{Z})$, which shows that
if $\alpha$ is a root of the cubic, then so is $\rho(\alpha)$.
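A numerical check for the sample value $a = 1$ (my addition): the cubic has three real roots, and $\rho(x) = -1/(x+1)$ permutes them.

```python
a = 1.0

def cubic(x):
    return x**3 - a * x**2 - (a + 3) * x - 1

def bisect(lo, hi, iters=100):
    """Bisection on a bracket where the cubic changes sign."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if cubic(lo) * cubic(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# brackets found by inspecting signs: f(-2)<0<f(-1), f(0)<0, f(2)<0<f(3)
roots = [bisect(-2.0, -1.0), bisect(-1.0, 0.0), bisect(2.0, 3.0)]
for r in roots:
    assert abs(cubic(-1 / (r + 1))) < 1e-9   # rho(r) is again a root
print(sorted(roots))
```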

What I like(d) most is defining an analytic function that describes some number-theoretic phenomenon. One thing I remember is from Winfried Kohnen's POSTECH lecture http://www.mathi.uni-heidelberg.de/~winfried/siegel2.pdf ; see pages 1-3 for more details.
He starts with the standard inner product on $\mathbb{R}^m$ viewed as a quadratic form $$Q(x):=x^t x.$$
We are interested in the number $r_Q(t)$ of tuples of integers whose squares add up to a natural number $t$, i.e. $r_Q(t) = \#\{x \in \mathbb{Z}^m : Q(x) = t\}$, packaged into the theta series $$\theta_Q(z) := \sum_{t \geq 0} r_Q(t)\, e^{2\pi i t z}.$$

For $m = 4$ it turns out that $\theta_Q$ is in fact a modular form of weight $2$ w.r.t. $\Gamma_0(4)$. Therefore (ok, here is some kind of black box for the students), its Fourier coefficients can be given by
$$r_Q(t)= 8 \left( \sigma_1(t)-4\cdot \sigma_1\left(\frac{t}{4}\right) \right),$$
where the term $\sigma_1(t/4)$ is taken to be $0$ when $4 \nmid t$.
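The displayed formula (Jacobi's four-square count) is easy to test by brute force (my addition); here $Q$ is the sum of four squares:

```python
from itertools import product

def r4(t):
    """Number of (x1,x2,x3,x4) in Z^4 with x1^2+x2^2+x3^2+x4^2 = t."""
    bound = int(t**0.5) + 1
    return sum(1 for x in product(range(-bound, bound + 1), repeat=4)
               if x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2 == t)

def sigma1(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

for t in range(1, 13):
    predicted = 8 * (sigma1(t) - (4 * sigma1(t // 4) if t % 4 == 0 else 0))
    assert r4(t) == predicted
print("r_Q(t) = 8(sigma_1(t) - 4 sigma_1(t/4)) verified for t = 1..12")
```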