The minimize function provides a common interface to unconstrained
and constrained minimization algorithms for multivariate scalar functions
in scipy.optimize. To demonstrate the minimization function, consider the
problem of minimizing the Rosenbrock function of N variables:

    f(\mathbf{x}) = \sum_{i=1}^{N-1} 100 (x_{i+1} - x_i^2)^2 + (1 - x_i)^2.

The minimum value of this function is 0, which is achieved when x_i = 1.

Note that the Rosenbrock function and its derivatives are included in
scipy.optimize. The implementations shown in the following sections
provide examples of how to define an objective function as well as its
Jacobian and Hessian functions.
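
As a quick, hedged illustration (a minimal sketch assuming only the rosen helper that scipy.optimize exports), the built-in implementation can be evaluated at the known minimizer:

    import numpy as np
    from scipy.optimize import rosen

    # The Rosenbrock function evaluates to 0 at the minimizer x_i = 1.
    x = np.ones(5)
    print(rosen(x))    # -> 0.0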

The simplex algorithm (selected with method='nelder-mead') is probably the simplest way to minimize a fairly
well-behaved function. It requires only function evaluations and is a good
choice for simple minimization problems. However, because it does not use
any gradient evaluations, it may take longer to find the minimum.
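
A minimal sketch of such a call, using the built-in rosen function and the nelder-mead method string of minimize (the starting point x0 is illustrative):

    import numpy as np
    from scipy.optimize import minimize, rosen

    x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])   # starting guess
    res = minimize(rosen, x0, method='nelder-mead', options={'disp': True})
    print(res.x)   # should be close to [1, 1, 1, 1, 1]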

Another optimization algorithm that needs only function calls to find
the minimum is Powell's method, available by setting method='powell' in
minimize.
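
The switch is a one-line change; a sketch reusing rosen and x0 from the previous example:

    res = minimize(rosen, x0, method='powell', options={'disp': True})
    print(res.x)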

In order to converge more quickly to the solution, this routine uses
the gradient of the objective function. If the gradient is not given
by the user, then it is estimated using first-differences. The
Broyden-Fletcher-Goldfarb-Shanno (BFGS) method typically requires
fewer function calls than the simplex algorithm even when the gradient
must be estimated.

To demonstrate this algorithm, the Rosenbrock function is again used.
The gradient of the Rosenbrock function is the vector:

    \frac{\partial f}{\partial x_j} = 200 (x_j - x_{j-1}^2) - 400 x_j (x_{j+1} - x_j^2) - 2 (1 - x_j).

This expression is valid for the interior derivatives. Special cases
are

    \frac{\partial f}{\partial x_1} = -400 x_1 (x_2 - x_1^2) - 2 (1 - x_1),

    \frac{\partial f}{\partial x_N} = 200 (x_N - x_{N-1}^2).

A Python function which computes this gradient is constructed by the
code segment below.
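
One possible implementation, together with a BFGS call that uses it, is sketched here (the hand-written rosen_der mirrors the helper of the same name that ships with scipy.optimize):

    import numpy as np
    from scipy.optimize import minimize, rosen

    def rosen_der(x):
        """Gradient of the Rosenbrock function."""
        xm = x[1:-1]
        xm_m1 = x[:-2]
        xm_p1 = x[2:]
        der = np.zeros_like(x)
        der[1:-1] = 200*(xm - xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1 - xm)
        der[0] = -400*x[0]*(x[1] - x[0]**2) - 2*(1 - x[0])
        der[-1] = 200*(x[-1] - x[-2]**2)
        return der

    x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
    res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
                   options={'disp': True})
    print(res.x)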

The method which requires the fewest function calls and is therefore often
the fastest method to minimize functions of many variables uses the
Newton-Conjugate Gradient algorithm. This method is a modified Newton’s
method and uses a conjugate gradient algorithm to (approximately) invert
the local Hessian. Newton's method is based on fitting the function
locally to a quadratic form:

    f(\mathbf{x}) \approx f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0) + \tfrac{1}{2} (\mathbf{x} - \mathbf{x}_0)^T H(\mathbf{x}_0) (\mathbf{x} - \mathbf{x}_0),

where H(\mathbf{x}_0) is a matrix of second derivatives (the Hessian). If the Hessian is
positive definite, then the local minimum of this function can be found
by setting the gradient of the quadratic form to zero, resulting in

    \mathbf{x}_{\text{opt}} = \mathbf{x}_0 - H^{-1} \nabla f(\mathbf{x}_0).

The inverse of the Hessian is evaluated using the conjugate-gradient
method. An example of employing this method to minimize the
Rosenbrock function is given below. To take full advantage of the
Newton-CG method, a function which computes the Hessian must be
provided. The Hessian matrix itself does not need to be constructed;
only a vector which is the product of the Hessian with an arbitrary
vector needs to be available to the minimization routine. As a result,
the user can provide either a function to compute the Hessian matrix,
or a function to compute the product of the Hessian with an arbitrary
vector.
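
The Hessian of the Rosenbrock function is tridiagonal. A sketch of a function that builds it, and a Newton-CG call using the full Hessian (rosen and rosen_der here are the built-in scipy.optimize helpers), might look as follows:

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    def rosen_hess(x):
        """Hessian of the Rosenbrock function (a tridiagonal matrix)."""
        x = np.asarray(x)
        H = np.diag(-400*x[:-1], 1) - np.diag(400*x[:-1], -1)
        diagonal = np.zeros_like(x)
        diagonal[0] = 1200*x[0]**2 - 400*x[1] + 2
        diagonal[-1] = 200
        diagonal[1:-1] = 202 + 1200*x[1:-1]**2 - 400*x[2:]
        return H + np.diag(diagonal)

    x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
    res = minimize(rosen, x0, method='Newton-CG',
                   jac=rosen_der, hess=rosen_hess,
                   options={'disp': True})
    print(res.x)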

For larger minimization problems, storing the entire Hessian matrix can
consume considerable time and memory. The Newton-CG algorithm only needs
the product of the Hessian times an arbitrary vector. As a result, the user
can supply code to compute this product rather than the full Hessian by
giving a hessp function which takes the minimization vector as the first
argument and the arbitrary vector as the second argument (along with extra
arguments passed to the function to be minimized). If possible, using
Newton-CG with the Hessian product option is probably the fastest way to
minimize the function.

In this case, the product of the Rosenbrock Hessian with an arbitrary
vector is not difficult to compute. If \mathbf{p} is the arbitrary
vector, then H(\mathbf{x}) \mathbf{p} has
elements:

    H(\mathbf{x}) \mathbf{p} = \begin{pmatrix} (1200 x_1^2 - 400 x_2 + 2) p_1 - 400 x_1 p_2 \\ \vdots \\ -400 x_{i-1} p_{i-1} + (202 + 1200 x_i^2 - 400 x_{i+1}) p_i - 400 x_i p_{i+1} \\ \vdots \\ -400 x_{N-1} p_{N-1} + 200 p_N \end{pmatrix}.

Code which makes use of this Hessian product to minimize the
Rosenbrock function using minimize follows.
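
A sketch of such code (rosen_hess_p is hand-written; rosen and rosen_der are the built-in helpers, and hessp is the minimize keyword for Hessian-vector products):

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    def rosen_hess_p(x, p):
        """Product of the Rosenbrock Hessian with an arbitrary vector p."""
        x = np.asarray(x)
        Hp = np.zeros_like(x)
        Hp[0] = (1200*x[0]**2 - 400*x[1] + 2)*p[0] - 400*x[0]*p[1]
        Hp[1:-1] = (-400*x[:-2]*p[:-2]
                    + (202 + 1200*x[1:-1]**2 - 400*x[2:])*p[1:-1]
                    - 400*x[1:-1]*p[2:])
        Hp[-1] = -400*x[-2]*p[-2] + 200*p[-1]
        return Hp

    x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
    res = minimize(rosen, x0, method='Newton-CG',
                   jac=rosen_der, hessp=rosen_hess_p,
                   options={'disp': True})
    print(res.x)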

The minimize function also provides an interface to several
constrained minimization algorithms. As an example, the Sequential Least
SQuares Programming optimization algorithm (SLSQP) will be considered here.
This algorithm allows one to deal with constrained minimization problems of the
form:

    \min F(\mathbf{x})
    \text{subject to } C_j(\mathbf{x}) = 0, \quad j = 1, \ldots, \text{MEQ},
    \qquad\qquad\;\; C_j(\mathbf{x}) \ge 0, \quad j = \text{MEQ}+1, \ldots, M,
    \qquad\qquad\;\; XL \le \mathbf{x} \le XU.

As an example, let us consider the problem of maximizing the function:

    f(x, y) = 2 x y + 2 x - x^2 - 2 y^2

subject to the equality constraint x^3 - y = 0 and the inequality
constraint y - 1 \ge 0.
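
A hedged sketch of solving this problem with SLSQP follows; since minimize always minimizes, the objective is negated through a sign argument, and the constraints are passed as dictionaries with 'type', 'fun' and 'jac' entries:

    import numpy as np
    from scipy.optimize import minimize

    def func(x, sign=1.0):
        """Objective: sign*(2*x0*x1 + 2*x0 - x0**2 - 2*x1**2)."""
        return sign*(2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2)

    def func_deriv(x, sign=1.0):
        """Derivative of the objective function."""
        dfdx0 = sign*(-2*x[0] + 2*x[1] + 2)
        dfdx1 = sign*(2*x[0] - 4*x[1])
        return np.array([dfdx0, dfdx1])

    cons = ({'type': 'eq',
             'fun': lambda x: np.array([x[0]**3 - x[1]]),
             'jac': lambda x: np.array([3.0*x[0]**2, -1.0])},
            {'type': 'ineq',
             'fun': lambda x: np.array([x[1] - 1]),
             'jac': lambda x: np.array([0.0, 1.0])})

    # Maximize by minimizing the negated objective (sign=-1.0).
    res = minimize(func, [-1.0, 1.0], args=(-1.0,), jac=func_deriv,
                   constraints=cons, method='SLSQP', options={'disp': True})
    print(res.x)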

All of the previously-explained minimization procedures can be used to
solve a least-squares problem provided the appropriate objective
function is constructed. For example, suppose it is desired to fit a
set of data \{x_i, y_i\} to a known model,
y = f(x, \mathbf{p}), where \mathbf{p} is a vector of parameters for the model that
need to be found. A common method for determining which parameter
vector gives the best fit to the data is to minimize the sum of squares
of the residuals. The residual is usually defined for each observed
data-point as

    e_i(\mathbf{p}, y_i, x_i) = \left| y_i - f(x_i, \mathbf{p}) \right|.

An objective function to pass to any of the previous minimization
algorithms to obtain a least-squares fit is:

    J(\mathbf{p}) = \sum_{i=0}^{N-1} e_i^2(\mathbf{p}).

The leastsq algorithm performs this squaring and summing of the
residuals automatically. It takes as an input argument the vector
function \mathbf{e}(\mathbf{p}) and returns the
value of \mathbf{p} which minimizes J(\mathbf{p}) = \sum e_i^2(\mathbf{p})
directly. The user is also encouraged to provide the Jacobian matrix
of the function (with derivatives down the columns or across the
rows). If the Jacobian is not provided, it is estimated.

An example should clarify the usage. Suppose it is believed some
measured data follow a sinusoidal pattern

    y_i = A \sin(2 \pi k x_i + \theta)

where the parameters A, k, and \theta are unknown. The residual vector is

    e_i = \left| y_i - A \sin(2 \pi k x_i + \theta) \right|.

By defining a function to compute the residuals and selecting an
appropriate starting position, the least-squares fit routine can be
used to find the best-fit parameters \hat{A}, \hat{k}, \hat{\theta}.
This is shown in the example below.
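
A sketch along these lines follows; the synthetic data, the true parameter values and the starting guess are illustrative:

    import numpy as np
    from scipy.optimize import leastsq

    # synthetic noisy measurements of A*sin(2*pi*k*x + theta)
    x = np.arange(0, 6e-2, 6e-2 / 30)
    A, k, theta = 10, 1.0 / 3e-2, np.pi / 6
    y_true = A * np.sin(2 * np.pi * k * x + theta)
    y_meas = y_true + 2 * np.random.randn(len(x))

    def residuals(p, y, x):
        """Residual vector e_i = y_i - A*sin(2*pi*k*x_i + theta)."""
        A, k, theta = p
        return y - A * np.sin(2 * np.pi * k * x + theta)

    p0 = [8, 1 / 2.3e-2, np.pi / 3]          # starting guess
    plsq = leastsq(residuals, p0, args=(y_meas, x))
    print(plsq[0])                           # estimated [A, k, theta]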

Often only the minimum of a univariate function (i.e., a function that
takes a scalar as input) is needed. In these circumstances, other
optimization techniques have been developed that can work faster. These are
accessible from the minimize_scalar function, which offers several
algorithms.

There are actually two methods that can be used to minimize a univariate
function: brent and golden, but golden is included only for academic
purposes and should rarely be used. These can be respectively selected
through the method parameter in minimize_scalar. The brent
method uses Brent's algorithm for locating a minimum. Ideally, a bracket
(the bracket parameter) containing the desired minimum should be given. A
bracket is a triple (a, b, c) such that f(a) > f(b) < f(c) and a < b < c.
If this is not given, then alternatively two starting points can
be chosen and a bracket will be found from these points using a simple
marching algorithm. If these two starting points are not provided, 0 and
1 will be used (this may not be the right choice for your function and
may result in an unexpected minimum being returned).
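
For instance (the polynomial below is illustrative, not prescribed by the text), a local minimum near x = 1.28 is found from the default starting points 0 and 1:

    from scipy.optimize import minimize_scalar

    f = lambda x: (x - 2) * x * (x + 2)**2

    res = minimize_scalar(f, method='brent')
    print(res.x)   # local minimum located by Brent's method (about 1.28)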

Very often, there are constraints that can be placed on the solution space
before minimization occurs. The bounded method in minimize_scalar
is an example of a constrained minimization procedure that provides a
rudimentary interval constraint for scalar functions. The interval
constraint allows the minimization to occur only between two fixed
endpoints, specified using the mandatory bounds parameter.

For example, to find the minimum of the Bessel function J_1(x)
(available as scipy.special.j1) near x = 5, minimize_scalar can be
called using the interval [4, 7] as a constraint. The result is
x_min = 5.3314.
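
A minimal sketch of that call:

    from scipy.special import j1
    from scipy.optimize import minimize_scalar

    res = minimize_scalar(j1, bounds=(4, 7), method='bounded')
    print(res.x)   # about 5.33, the minimum of J1 nearest x = 5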

Sometimes, it may be useful to use a custom method as a (multivariate
or univariate) minimizer, for example when using some library wrappers
of minimize (e.g. basinhopping).

We can achieve that by passing, instead of a method name, a callable
(either a function or an object implementing a __call__
method) as the method parameter.

Let us consider an (admittedly rather artificial) need to use a trivial
custom multivariate minimization method that will just search the
neighborhood in each dimension independently with a fixed step size.
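
A sketch of such a minimizer is shown below; the custmin name and the x0/stepsize values are illustrative, while OptimizeResult is the result container that scipy.optimize provides for this purpose:

    import numpy as np
    from scipy.optimize import OptimizeResult, minimize, rosen

    def custmin(fun, x0, args=(), maxfev=None, stepsize=0.1,
                maxiter=100, callback=None, **options):
        """Toy minimizer: coordinate-wise search with a fixed step size."""
        bestx = np.asarray(x0, dtype=float).copy()
        besty = fun(bestx, *args)
        funcalls = 1
        niter = 0
        improved = True
        stop = False

        while improved and not stop and niter < maxiter:
            improved = False
            niter += 1
            # try a step of +/- stepsize along every coordinate
            for dim in range(np.size(x0)):
                for s in [bestx[dim] - stepsize, bestx[dim] + stepsize]:
                    testx = np.copy(bestx)
                    testx[dim] = s
                    testy = fun(testx, *args)
                    funcalls += 1
                    if testy < besty:
                        besty = testy
                        bestx = testx
                        improved = True
                if callback is not None:
                    callback(bestx)
                if maxfev is not None and funcalls >= maxfev:
                    stop = True
                    break

        return OptimizeResult(fun=besty, x=bestx, nit=niter,
                              nfev=funcalls, success=(niter > 1))

    x0 = np.array([1.35, 0.9, 0.95, 1.1, 0.8])
    res = minimize(rosen, x0, method=custmin, options={'stepsize': 0.05})
    print(res.x)   # should land near [1, 1, 1, 1, 1]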

If one has a single-variable equation, there are four different root
finding algorithms that can be tried. Each of these algorithms requires the
endpoints of an interval in which a root is expected (because the function
changes signs). In general brentq is the best choice, but the other
methods may be useful in certain circumstances or for academic purposes.
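
For instance (the function is illustrative; brentq only needs a sign change on the supplied interval):

    from scipy.optimize import brentq

    # f(x) = x**2 - 1 changes sign on [0, 2], so a root lies inside.
    root_x = brentq(lambda x: x**2 - 1, 0, 2)
    print(root_x)   # 1.0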

A problem closely related to finding the zeros of a function is the
problem of finding a fixed point of a function. A fixed point of a
function g is the point at which evaluation of the function returns the
point: g(x) = x. Clearly, the fixed point of g
is the root of f(x) = g(x) - x.
Equivalently, the root of f is the fixed point of g(x) = f(x) + x.
The routine
fixed_point provides a simple iterative method using Aitken's
sequence acceleration to estimate the fixed point of g, given a
starting point.
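
An illustrative sketch (the function and starting point are not prescribed by the text):

    import numpy as np
    from scipy.optimize import fixed_point

    # Fixed point of cos(x), i.e. the solution of cos(x) = x.
    x_star = fixed_point(np.cos, 0.5)
    print(x_star)   # about 0.739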

Finding a root of a set of non-linear equations can be achieved using the
root function. Several methods are available, among them hybr
(the default) and lm, which respectively use the hybrid method of Powell
and the Levenberg-Marquardt method from MINPACK.

The following example considers the single-variable transcendental
equation

    x + 2 \cos(x) = 0.
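
A minimal sketch using root with the default hybr method (the starting guess of 0.3 is illustrative):

    import numpy as np
    from scipy.optimize import root

    def func(x):
        return x + 2 * np.cos(x)

    sol = root(func, 0.3)
    print(sol.x, sol.fun)   # root near x = -1.03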

Methods hybr and lm in root cannot deal with a very large
number of variables (N), as they need to calculate and invert a dense N
x N Jacobian matrix on every Newton step. This becomes rather inefficient
when N grows.

Consider for instance the following problem: we need to solve the
following integrodifferential equation on the square
[0, 1] \times [0, 1]:

    \nabla^2 P + 5 \left( \int_0^1 \int_0^1 \cosh(P) \, dx \, dy \right)^2 = 0

with the boundary condition P(x, 1) = 1 on the upper edge and
P = 0 elsewhere on the boundary of the square. This can be done
by approximating the continuous function P by its values on a grid,
P_{n,m} \approx P(n h, m h), with a small grid spacing
h. The derivatives and integrals can then be approximated; for
instance \partial_x^2 P(x, y) \approx (P(x+h, y) - 2 P(x, y) + P(x-h, y)) / h^2.
The problem is then equivalent to finding the root of
some function residual(P), where P is a vector of length
N_x N_y.

Now, because N_x N_y can be large, methods hybr or lm in
root will take a long time to solve this problem. The solution can
however be found using one of the large-scale solvers, for example
krylov, broyden2, or anderson. These use what is known as the
inexact Newton method, which instead of computing the Jacobian matrix
exactly, forms an approximation for it.
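
As a hedged, cut-down sketch (the full, preconditioned version appears at the end of this section), the residual of the discretized problem can be handed directly to root with method='krylov':

    import numpy as np
    from scipy.optimize import root

    nx, ny = 75, 75
    hx, hy = 1./(nx - 1), 1./(ny - 1)
    P_left, P_right, P_top, P_bottom = 0, 0, 1, 0

    def residual(P):
        d2x = np.zeros_like(P)
        d2y = np.zeros_like(P)

        d2x[1:-1] = (P[2:] - 2*P[1:-1] + P[:-2]) / hx/hx
        d2x[0]    = (P[1] - 2*P[0] + P_left) / hx/hx
        d2x[-1]   = (P_right - 2*P[-1] + P[-2]) / hx/hx

        d2y[:, 1:-1] = (P[:, 2:] - 2*P[:, 1:-1] + P[:, :-2]) / hy/hy
        d2y[:, 0]    = (P[:, 1] - 2*P[:, 0] + P_bottom) / hy/hy
        d2y[:, -1]   = (P_top - 2*P[:, -1] + P[:, -2]) / hy/hy

        return d2x + d2y + 5*np.cosh(P).mean()**2

    guess = np.zeros((nx, ny), float)
    sol = root(residual, guess, method='krylov', options={'disp': True})
    print('Residual:', abs(residual(sol.x)).max())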

When looking for the zero of the functions f_i(\mathbf{x}) = 0,
i = 1, 2, ..., N, the krylov solver spends most of its
time inverting the Jacobian matrix,

    J_{ij} = \frac{\partial f_i}{\partial x_j}.

If you have an approximation for the inverse matrix
M \approx J^{-1}, you can use it for preconditioning the
linear inversion problem. The idea is that instead of solving
J \mathbf{s} = \mathbf{y} one solves M J \mathbf{s} = M \mathbf{y}: since the
matrix M J is “closer” to the identity matrix than J
is, the equation should be easier for the Krylov method to deal with.

For the problem in the previous section, we note that the function to
solve consists of two parts: the first one is the application of the
Laplace operator, \partial_x^2 + \partial_y^2, and the second
is the integral. We can actually easily compute the Jacobian corresponding
to the Laplace operator part: we know that in one dimension

    \partial_x^2 \approx \frac{1}{h_x^2} \begin{pmatrix} -2 & 1 & & \\ 1 & -2 & 1 & \\ & 1 & -2 & 1 \\ & & \ddots & \ddots \end{pmatrix} = h_x^{-2} L,

so that the whole 2-D operator is represented by

    J_1 = \partial_x^2 + \partial_y^2 \simeq h_x^{-2} L \otimes I + h_y^{-2} I \otimes L.

The matrix J_2 of the Jacobian corresponding to the integral
is more difficult to calculate, and since all of its entries are
nonzero, it will be difficult to invert. J_1 on the other hand
is a relatively simple matrix, and can be inverted by
scipy.sparse.linalg.splu (or the inverse can be approximated by
scipy.sparse.linalg.spilu). So we are content to take
M \approx J_1^{-1} and hope for the best.

In the example below, we use the preconditioner M = J_1^{-1}.

import numpy as np
from scipy.optimize import root
from scipy.sparse import spdiags, kron
from scipy.sparse.linalg import spilu, LinearOperator
from numpy import cosh, zeros_like, mgrid, zeros, eye

# parameters
nx, ny = 75, 75
hx, hy = 1./(nx-1), 1./(ny-1)

P_left, P_right = 0, 0
P_top, P_bottom = 1, 0

def get_preconditioner():
    """Compute the preconditioner M"""
    diags_x = zeros((3, nx))
    diags_x[0,:] = 1/hx/hx
    diags_x[1,:] = -2/hx/hx
    diags_x[2,:] = 1/hx/hx
    Lx = spdiags(diags_x, [-1, 0, 1], nx, nx)

    diags_y = zeros((3, ny))
    diags_y[0,:] = 1/hy/hy
    diags_y[1,:] = -2/hy/hy
    diags_y[2,:] = 1/hy/hy
    Ly = spdiags(diags_y, [-1, 0, 1], ny, ny)

    J1 = kron(Lx, eye(ny)) + kron(eye(nx), Ly)

    # Now we have the matrix `J_1`. We need to find its inverse `M` --
    # however, since an approximate inverse is enough, we can use
    # the *incomplete LU* decomposition
    J1_ilu = spilu(J1)

    # This returns an object with a method .solve() that evaluates
    # the corresponding matrix-vector product. We need to wrap it into
    # a LinearOperator before it can be passed to the Krylov methods:
    M = LinearOperator(shape=(nx*ny, nx*ny), matvec=J1_ilu.solve)
    return M

def solve(preconditioning=True):
    """Compute the solution"""
    count = [0]

    def residual(P):
        count[0] += 1

        d2x = zeros_like(P)
        d2y = zeros_like(P)

        d2x[1:-1] = (P[2:]   - 2*P[1:-1] + P[:-2]) / hx/hx
        d2x[0]    = (P[1]    - 2*P[0]    + P_left) / hx/hx
        d2x[-1]   = (P_right - 2*P[-1]   + P[-2]) / hx/hx

        d2y[:, 1:-1] = (P[:, 2:] - 2*P[:, 1:-1] + P[:, :-2]) / hy/hy
        d2y[:, 0]    = (P[:, 1]  - 2*P[:, 0]    + P_bottom) / hy/hy
        d2y[:, -1]   = (P_top    - 2*P[:, -1]   + P[:, -2]) / hy/hy

        return d2x + d2y + 5*cosh(P).mean()**2

    # preconditioner
    if preconditioning:
        M = get_preconditioner()
    else:
        M = None

    # solve
    guess = zeros((nx, ny), float)

    sol = root(residual, guess, method='krylov',
               options={'disp': True,
                        'jac_options': {'inner_M': M}})
    print('Residual', abs(residual(sol.x)).max())
    print('Evaluations', count[0])

    return sol.x

def main():
    sol = solve(preconditioning=True)

    # visualize
    import matplotlib.pyplot as plt
    x, y = mgrid[0:1:(nx*1j), 0:1:(ny*1j)]
    plt.clf()
    plt.pcolor(x, y, sol)
    plt.clim(0, 1)
    plt.colorbar()
    plt.show()

if __name__ == "__main__":
    main()

Using a preconditioner reduced the number of evaluations of the
residual function by a factor of 4. For problems where the
residual is expensive to compute, good preconditioning can be crucial
— it can even decide whether the problem is solvable in practice or
not.

Preconditioning is an art, science, and industry. Here, we were lucky
in making a simple choice that worked reasonably well, but there is a
lot more depth to this topic than is shown here.