Chain Rule for Derivative — Venturing Into The Dark Side Beneath Applied Calculus…

To the surprise of many math enthusiasts and the like, it seems that we have been putting out an incredible number of calculus modules these days. Of course, if you have had any terrible experience learning just the mechanics of calculus, this seemingly-relentless outpouring of materials could make you want to puke. However, if that header image appeals to you at first sight, it could very well be that your brain was, subconsciously or otherwise, operating along the lines of:

Ah! That lovely Chain Rule from the calculus textbooks/classes back in the good old days!

Except that this time, we have decided to venture into an aspect of it that most calculus users have probably never seen before (or never bothered to see?). That is, the somewhat-obscure side of how it is derived in theory, and the intuition behind it. Granted, if you're coming from a background of applied mathematics, all this might sound a bit like gibberish, let alone be very useful to you. However, if you're on your way to joining the ranks of mathematicians, but the proofs of the Chain Rule never seem to click, then here is your chance!

To set the stage: given a function $g$ defined on an interval $I$ and a function $f$ defined on $g(I)$, we can form the composite function $f \circ g$. In which case, we can refer to $f$ as the outer function, and $g$ as the inner function. Under this setup, the function $f \circ g$ maps $I$ first to $g(I)$, and then to $f[g(I)]$.

In addition, if $c$ is a point on $I$ such that:

The inner function $g$ is differentiable at $c$ (with the derivative denoted by $g'(c)$).

The outer function $f$ is differentiable at $g(c)$ (with the derivative denoted by $f'[g(c)]$).

then it would transpire that the function $f \circ g$ is also differentiable at $c$, where:

\begin{align*} (f \circ g)'(c) & = f'[g(c)] \, g'(c) \end{align*}

giving rise to the famous derivative formula commonly known as the Chain Rule.
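Before diving into the theory, the formula itself can be sanity-checked numerically with finite differences. Below is a minimal sketch; the particular choices of $f$, $g$, and the point $c$ are illustrative assumptions, not anything mandated by the discussion above.

```python
import math

# A numerical sanity check of the Chain Rule; the choices of f, g and
# the point c are illustrative, not anything from the discussion above.
f, fp = math.sin, math.cos          # outer function and its derivative
g = lambda x: x ** 2 + 1            # inner function
gp = lambda x: 2 * x                # its derivative

c, h = 0.7, 1e-6

# Central-difference approximation of (f o g)'(c)
numeric = (f(g(c + h)) - f(g(c - h))) / (2 * h)

# Chain Rule: f'[g(c)] * g'(c)
chain = fp(g(c)) * gp(c)

print(abs(numeric - chain) < 1e-6)  # True: agrees to discretization error
```

Swapping in other differentiable $f$ and $g$ should leave the check passing, up to the error of the finite-difference step.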

Theorem 1 — The Chain Rule for Derivative

Given an inner function $g$ defined on $I$ and an outer function $f$ defined on $g(I)$, if $c$ is a point on $I$ such that $g$ is differentiable at $c$ and $f$ differentiable at $g(c)$ (i.e., the image of $c$), then we have that:

The derivative of a composite function at a point is equal to the derivative of the inner function at that point, times the derivative of the outer function at its image.

As simple as it might be, the fact that the derivative of a composite function can be evaluated in terms of that of its constituent functions was hailed as a tremendous breakthrough back in the old days, since it allows for the differentiation of a wide variety of elementary functions — ranging from $\displaystyle (x^2+2x+3)^4$ and $\displaystyle e^{\cos x + \sin x}$ to $\ln \left(\frac{3+x}{2^x} \right)$ and $\displaystyle \text{arcsec} (2^x)$.

Moreover, by applying the rule repeatedly, any composite function involving any number of functions, if differentiable, can have its derivative evaluated in terms of the derivatives of its constituent functions in a chain-like manner. Hence the Chain Rule.
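The chain-like behavior for deeper compositions can be sketched numerically as well. Here the rule is applied twice to a three-deep composition $f(g(h(x)))$; the functions and the point $c$ are again illustrative choices.

```python
import math

# A sketch of the Chain Rule applied twice to a three-deep composition
# f(g(h(x))); the functions and the point c are illustrative choices.
f, fp = math.exp, math.exp          # outer function and its derivative
g, gp = math.sin, math.cos          # middle function and its derivative
h = lambda x: 3 * x                 # innermost function
hp = lambda x: 3.0                  # its derivative

c, eps = 0.4, 1e-6

# Central-difference approximation of (f o g o h)'(c)
numeric = (f(g(h(c + eps))) - f(g(h(c - eps)))) / (2 * eps)

# Peeling one layer at a time with the Chain Rule
chained = fp(g(h(c))) * gp(h(c)) * hp(c)

print(abs(numeric - chained) < 1e-5)
```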

All right. Let's see if we can derive the Chain Rule from first principles then: given an inner function $g$ defined on $I$ and an outer function $f$ defined on $g(I)$, we are told that $g$ is differentiable at a point $c \in I$ and that $f$ is differentiable at $g(c)$. That is:

\begin{align*} g'(c) = \lim_{x \to c} \frac{g(x)-g(c)}{x-c} \qquad \text{and} \qquad f'[g(c)] = \lim_{y \to g(c)} \frac{f(y)-f[g(c)]}{y-g(c)} \end{align*}

A naive attempt would then be to multiply and divide the difference quotient of $f \circ g$ at $c$ by $g(x)-g(c)$:

\begin{align*} \frac{f[g(x)]-f[g(c)]}{x-c} = \frac{f[g(x)]-f[g(c)]}{g(x)-g(c)} \cdot \frac{g(x)-g(c)}{x-c} \end{align*}

and take the limit of each factor as $x \to c$.

Great! Seems like a home run, right? Well, not so fast, for there exist two fatal flaws with this line of reasoning…

First, we can only divide by $g(x)-g(c)$ if $g(x) \ne g(c)$. In fact, forcing this division means that the quotient $\dfrac{f[g(x)]-f[g(c)]}{g(x) - g(c)}$ is no longer necessarily well-defined in a punctured neighborhood of $c$ (i.e., the set $(c-\epsilon, c+\epsilon) \setminus \{c\}$, where $\epsilon>0$). As a result, it no longer makes sense to talk about its limit as $x$ tends to $c$.
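To make this flaw concrete, here is a minimal sketch with a constant inner function (an illustrative choice): the denominator $g(x)-g(c)$ vanishes for every $x$, even though the Chain Rule derivative exists and equals zero.

```python
# A sketch of the first flaw: with a constant inner function (an
# illustrative choice), g(x) - g(c) vanishes for every x, so the
# quotient [f(g(x)) - f(g(c))] / [g(x) - g(c)] is 0/0 and undefined,
# even though (f o g)'(c) = f'[g(c)] * g'(c) = 0 exists perfectly well.
f = lambda y: y ** 2
g = lambda x: 5.0              # constant inner function: g'(c) = 0
c, x = 1.0, 1.3

denominator = g(x) - g(c)
print(denominator)             # 0.0: the naive quotient cannot be formed
```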

And then there's the second flaw, which is embedded in the reasoning that as $x \to c$, $Q[g(x)] \to f'[g(c)]$, where $Q(y) = \dfrac{f(y)-f[g(c)]}{y-g(c)}$ denotes the difference quotient of $f$ at $g(c)$. To be sure, while it is true that:

Given an inner function $g$ defined on $I$ (with $c \in I$) and an outer function $f$ defined on $g(I)$, if the following two conditions are both met:

As $x \to c$, $g(x) \to G$.

$f$ is continuous at $G$.

then as $x \to c$, $(f \circ g)(x) \to f(G)$. The catch here is that $Q$ is undefined at $g(c)$, so it cannot possibly be continuous there, and the above composition law simply does not apply.
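To see why the continuity condition in this composition law cannot be dropped, here is a small sketch (with made-up functions) where the outer function has a jump at $G$ and the naive substitution fails:

```python
# A sketch (with made-up functions) of why continuity of the outer
# function matters: f has a jump at G = 0, and the limit of f(g(x))
# as x -> c disagrees with f(G).
def f(y):
    return 1.0 if y == 0 else 0.0   # jump discontinuity at y = 0

g = lambda x: x                     # g(x) -> 0 as x -> 0, g(x) != 0 off c
c, G = 0.0, 0.0

near = [f(g(c + t)) for t in (0.1, 0.01, 0.001)]
print(near)    # [0.0, 0.0, 0.0]: the limit along x != c is 0
print(f(G))    # 1.0: not the same, so the naive substitution fails
```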

In any case, the point is that we have identified the two serious flaws that prevent our sketchy proof from working. Incidentally, this also happens to be the pseudo-mathematical approach employed by Lord Salman Khan to derive the Chain Rule. Sad face.

In which case, patching up the quotient seems like an appropriate course of action…

More specifically, if we define $\mathbf{Q}$ on $g(I)$ by:

\begin{align*} \mathbf{Q}(y) = \begin{cases} \dfrac{f(y)-f[g(c)]}{y-g(c)} & \text{if } y \ne g(c) \\[1.5ex] f'[g(c)] & \text{if } y = g(c) \end{cases} \end{align*}

then $\mathbf{Q}$ would be the patched version of $Q$ which is actually continuous at $g(c)$. One puzzle solved!
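As a sketch of how the patch works, take the illustrative choice $f = \sin$ with $g(c) = 0$, so that $f'[g(c)] = \cos 0 = 1$; the patched quotient then approaches its defined value at $g(c)$:

```python
import math

# A sketch of the patch for the concrete (illustrative) choice f = sin
# with g(c) = 0, so that f'[g(c)] = cos(0) = 1.
f = math.sin
gc = 0.0                       # the point g(c)
fp_at_gc = math.cos(gc)        # f'[g(c)] = 1.0

def Q_bold(y):
    # The patched difference quotient: defined even at y = g(c).
    if y == gc:
        return fp_at_gc
    return (f(y) - f(gc)) / (y - gc)

# Continuity check: Q_bold(y) -> Q_bold(g(c)) as y -> g(c)
print([Q_bold(10.0 ** -k) for k in (1, 3, 5)])   # values approach 1.0
print(Q_bold(gc))                                # 1.0
```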

All right. Moving on, let’s turn our attention now to another problem, which is the fact that the function $Q[g(x)]$, that is:

\begin{align*} \frac{f[g(x)] - f[g(c)]}{g(x) - g(c)} \end{align*}

is not necessarily well-defined on a punctured neighborhood of $c$. But then you see, this problem has already been dealt with when we defined $\mathbf{Q}$! In particular, it can be verified that the definition of $\mathbf{Q}$ entails that:

\begin{align*} \mathbf{Q}[g(x)] = \begin{cases} Q[g(x)] & \text{if } g(x) \ne g(c) \\ f'[g(c)] & \text{if } g(x) = g(c) \end{cases} \end{align*}

Translation? The upgraded $\mathbf{Q}(x)$ ensures that $\mathbf{Q}[g(x)]$ has the enviable property of being pretty much identical to the plain old $Q[g(x)]$ — with the added bonus that it is actually defined on a neighborhood of $c$!

And with the two issues settled, we can now go back to square one, the difference quotient of $f \circ g$ at $c$, and verify that the equality:

\begin{align*} \frac{f[g(x)] - f[g(c)]}{x-c} = \mathbf{Q}[g(x)] \, \frac{g(x)-g(c)}{x-c} \end{align*}

holds for all $x$ in a punctured neighborhood of $c$, after which the proof of the Chain Rule can be finalized in a few steps through the use of limit laws.
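This identity can be checked numerically too; the functions and points below are arbitrary illustrative choices, and notably the identity survives even at a point where $g(x) = g(c)$:

```python
import math

# A numerical check of the key identity, with illustrative choices of
# f, g and c (not anything mandated by the argument itself).
f, fp = math.sin, math.cos
g = lambda x: x ** 2
c = 0.5

def Q_bold(y):
    gc = g(c)
    return fp(gc) if y == gc else (f(y) - f(gc)) / (y - gc)

x = 0.8                        # a point != c with g(x) != g(c)
lhs = (f(g(x)) - f(g(c))) / (x - c)
rhs = Q_bold(g(x)) * (g(x) - g(c)) / (x - c)
print(math.isclose(lhs, rhs))  # True

x2 = -0.5                      # here g(x2) == g(c), yet both sides are 0
lhs2 = (f(g(x2)) - f(g(c))) / (x2 - c)
rhs2 = Q_bold(g(x2)) * (g(x2) - g(c)) / (x2 - c)
print(lhs2 == rhs2 == 0.0)     # True
```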

Wow! That was a bit of a detour, wasn't it? You see, while the Chain Rule might seem intuitive to understand and apply, it is actually one of the first theorems in differential calculus that requires a bit of ingenuity and knowledge beyond calculus to derive.

And if the derivation seems to mess with the head a bit, then it's certainly not hard to appreciate the creative and deductive greatness of the forefathers of modern calculus, those who worked hard to establish a solid, rigorous foundation for calculus, thereby paving the way for its proliferation into various branches of applied sciences all around the world.

And as for you, kudos for having made it this far! To recap what we have discovered up to now: the naive proof breaks down because the quotient $Q[g(x)]$ need not be well-defined (or continuous) near $c$; patching $Q$ into $\mathbf{Q}$ fixes both issues at once, and the limit laws then finish the job.


And with that, we'll close our little discussion on the theory of the Chain Rule for now. By the way, are you aware of an alternate proof that works equally well? Either way, you have good reason to be grateful to the Chain Rule the next time you invoke it to advance your work!


About the Author

Math Vault and its Redditbots enjoy advocating for mathematical experience through digital publishing and the uncanny use of technologies. Check out their 10-principle learning manifesto so that you can be transformed into a fuller mathematical being too.

Comments

Anitej Banerjee June 26, 2016

Wow, that really was mind blowing! I did come across a few hitches in the logic — perhaps due to my own misunderstandings of the topic.

Firstly, why define $g'(c)$ to be $\lim_{x \to c} \dfrac{g(x) - g(c)}{x-c}$? If you were to follow the definition from most textbooks:

$f'(x) = \lim_{h \to 0} \dfrac{f(x+h) - f(x)}{h}$

then, for $g'(c)$, you would come up with:

$g'(c) = \lim_{h \to 0} \dfrac{g(c+h) - g(c)}{h}$

Perhaps the two are the same, and maybe it's just my loosey-goosey way of thinking about the limits that is causing this confusion…

Secondly, I don't understand how bold $Q(x)$ works. I understand the law of composite functions limits part, but it just seems too easy — just defining $Q(x)$ to be $f'(x)$ when $g(x) = g(c)$… I can't pin-point why, but it feels a little bit like cheating :P.

Lastly, I just came up with a geometric interpretation of the chain rule — maybe not so fancy :P. f(g(x)) is simply f(x) with a shifted x-axis [Seems like a big assumption right now, but the derivative of g takes care of instantaneous non-linearity]. g'(x) is simply the transformation scalar — which takes in an x value on the g(x) axis and returns the transformation scalar which, when multiplied with f'(x) gives you the actual value of the derivative of f(g(x)). I like to think of g(x) as an elongated x axis/input domain to visualize it, but since the derivative of g'(x) is instantaneous, it takes care of the fact that g(x) may not be as linear as that — so g(x) could also be an odd-powered polynomial (covering every real value — loved that article, by the way!) but the analogy would still hold (I think).


Hi Anitej. For the first question, the derivative of a function at a point can be defined using both the $x \to c$ notation and the $h \to 0$ notation. In fact, using a stronger form of the limit comparison law, it can be shown that if the derivative exists, then both definitions yield the same value.
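(Editor's sketch of the equivalence, with an arbitrary choice of function: substituting $x = c + h$ turns one difference quotient into the other, so for any fixed step the two quotients agree exactly.)

```python
import math

# The x -> c and h -> 0 formulations of the derivative are related by
# the substitution x = c + h.  We pick h as a power of two so that the
# floating-point arithmetic below is exact (g and c are arbitrary).
g = math.exp
c = 1.0
h = 2.0 ** -20
x = c + h                                   # so x - c == h exactly here

quotient_xc = (g(x) - g(c)) / (x - c)       # the x -> c form
quotient_h = (g(c + h) - g(c)) / h          # the h -> 0 form
print(quotient_xc == quotient_h)            # identical by substitution
```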

For the second question, the bold Q(x) basically attempts to patch up Q(x) so that it is actually continuous at g(c). Now, if we define the bold Q(x) to be f'(x) when g(x)=g(c), then not only will it not take care of the case where the input x is actually equal to g(c), but the desired continuity won’t be achieved either.

And as for the geometric interpretation of the Chain Rule, that’s definitely a neat way to think of it!


Anitej Banerjee June 27, 2016

Well that sorts it out then… err, mostly. But why resort to f'(c) instead of f'(g(c)), wouldn’t that lead to a very different value of f'(x) at x=c, compared to the rest of the values [That does sort of make sense as the limit as x->c of that derivative doesn’t exist]? Either way, thank you very much — I certainly didn’t expect such a quick reply! 🙂


Oh. It is f'[g(c)]. Remember, g being the inner function is evaluated at c, whereas f being the outer function is evaluated at g(c). In particular, the focus is not on the derivative of f at c. You might want to go through the Second Attempt section again and see if it helps.

Originally a Montreal-based math tutoring agency, Math Vault has since then expanded into an alternative digital publisher of higher mathematics — serving tens of thousands of higher math learners and enthusiasts all over the world.