Applications of Partial Differentiation

Abstract

In one-variable calculus, it is customary to apply the notion of differentiation to study local and global extrema of real-valued functions of one variable. (See, for example, Chapter 5 of ACICARA.) Here, we shall consider similar applications of the notion of differentiation to functions of two (or more) variables. As noted in Chapter 3, in multivariable calculus, the notion of differentiation manifests itself in several forms. The simplest among these are the partial derivatives, which together constitute the gradient. When the gradient exists, its vanishing turns out to be a necessary condition for a function to have a local extremum. We shall use this in Section 4.1 below to arrive at a useful recipe for determining the absolute (or global) extremum of a continuous real-valued function defined on a closed and bounded subset of R^2. A variant of the optimization problem discussed in the first section will be considered in Section 4.2, where we try to determine the maximum or the minimum of a function subject to one or more constraints. Such problems are handled effectively by a technique known as the method of Lagrange multipliers. Both the theoretical and the practical aspects of this method will be discussed here. In Section 4.3, we shall make a finer analysis involving the second-order partial derivatives to arrive at a sufficient condition for a function to have a local maximum, a local minimum, or a saddle point. Finally, in Section 4.4, we revisit the Bivariate Taylor Theorem with a view toward approximating functions of two variables by linear or quadratic functions.
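The sufficient condition mentioned for Section 4.3 can be sketched numerically. The following Python snippet (an illustration, not part of the text) approximates the second-order partial derivatives f_xx, f_yy, f_xy at a critical point by central finite differences and classifies the point via the sign of the discriminant D = f_xx f_yy - (f_xy)^2; the function `second_derivative_test` and the step size `h` are hypothetical names chosen for this sketch.

```python
def second_derivative_test(f, x0, y0, h=1e-5):
    """Classify a critical point (x0, y0) of f(x, y) using the
    discriminant D = f_xx * f_yy - f_xy**2, where the second-order
    partials are approximated by central finite differences."""
    fxx = (f(x0 + h, y0) - 2 * f(x0, y0) + f(x0 - h, y0)) / h**2
    fyy = (f(x0, y0 + h) - 2 * f(x0, y0) + f(x0, y0 - h)) / h**2
    fxy = (f(x0 + h, y0 + h) - f(x0 + h, y0 - h)
           - f(x0 - h, y0 + h) + f(x0 - h, y0 - h)) / (4 * h**2)
    D = fxx * fyy - fxy**2
    if D > 0:
        # Definite Hessian: the sign of f_xx decides min vs. max.
        return "local minimum" if fxx > 0 else "local maximum"
    if D < 0:
        return "saddle point"
    return "inconclusive"  # D = 0: the test gives no information

# At the origin, x^2 + y^2 has a local minimum and x^2 - y^2 a saddle point.
print(second_derivative_test(lambda x, y: x**2 + y**2, 0.0, 0.0))
print(second_derivative_test(lambda x, y: x**2 - y**2, 0.0, 0.0))
```

For these two quadratic examples the finite differences are exact up to rounding, so the classification matches the analytical second-derivative test.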

Keywords

Saddle Point · Interior Point · Local Extremum · Absolute Minimum · Absolute Maximum
