Tutorial for the Optimization Toolbox™

This example shows how to use two nonlinear optimization solvers, fminunc and fmincon, and how to set their options.

All the principles outlined in this example apply to the other nonlinear solvers, such as fgoalattain, fminimax, lsqnonlin, lsqcurvefit, and fsolve.

The example starts by minimizing an objective function, then minimizes the same function with additional parameters. After that, the example shows how to minimize the objective function subject to a constraint, and finally shows how to obtain a more efficient or more accurate solution by providing gradients or a Hessian, or by changing some options.

The plot shows that the lowest value of the objective function within the ellipse occurs near the lower right part of the ellipse. We are about to calculate the minimum that was just plotted. Take a guess at the solution:

x0 = [-2 1];

Set optimization options: use the interior-point algorithm, and turn on the display of results at each iteration:
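Such options are set with the optimoptions function; a typical call for this step might look like the following:

```matlab
% Use the interior-point algorithm and display results at each iteration
options = optimoptions('fmincon','Algorithm','interior-point','Display','iter');
```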

Solvers require that nonlinear constraint functions return two outputs: one for the nonlinear inequalities and one for the nonlinear equalities. So we write the constraint using the deal function to supply both outputs:

gfun = @(x) deal(g(x(1),x(2)),[]);

Call the nonlinear constrained solver. There are no linear equalities or inequalities or bounds, so pass [] for those arguments:
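As a sketch, the call could look like the following. The anonymous objective fun shown here, f(x) = x1·exp(-(x1² + x2²)) + (x1² + x2²)/20, is an assumption inferred from the unconstrained results reported below (it has its minimum near (-0.6691, 0) with value about -0.4052); gfun and options are the constraint and options defined above.

```matlab
% Objective written to accept a single vector argument
% (assumed form, consistent with the unconstrained results shown later)
fun = @(x) x(1).*exp(-x(1).^2 - x(2).^2) + (x(1).^2 + x(2).^2)/20;

% No linear constraints or bounds: pass [] in those positions
[x,fval] = fmincon(fun,x0,[],[],[],[],[],[],gfun,options)
```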

Since c(x) is close to 0, the constraint is "active," meaning the constraint affects the solution. Recall the unconstrained solution was found at

uncx

uncx =
-0.6691
0.0000

and the unconstrained objective function was found to be

uncf

uncf =
-0.4052

The constraint moved the solution, and increased the objective by

fval-uncf

ans =
0.1603

Constrained Optimization Example: User-Supplied Gradients

Optimization problems can be solved more efficiently and accurately when the user supplies gradients. This example shows how to supply them. We again solve the inequality-constrained problem

To provide the gradient of f(x) to fmincon, we write the objective function in the form of a MATLAB file:
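A sketch of such a file follows. The file name onehumpgrad.m is hypothetical, and the objective and its analytic gradient are derived from the assumed form f(x) = x1·exp(-(x1² + x2²)) + (x1² + x2²)/20 used above:

```matlab
function [f,gf] = onehumpgrad(x)
% Objective: f(x) = x1*exp(-(x1^2 + x2^2)) + (x1^2 + x2^2)/20
r2 = x(1)^2 + x(2)^2;
f = x(1)*exp(-r2) + r2/20;
if nargout > 1
    % Analytic gradient of f, returned as a column vector
    gf = [exp(-r2)*(1 - 2*x(1)^2) + x(1)/10;
          -2*x(1)*x(2)*exp(-r2) + x(2)/10];
end
```

To make fmincon use the second output, enable the gradient option, for example: options = optimoptions(options,'SpecifyObjectiveGradient',true);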

fmincon estimated gradients well in the previous example, so the iterations in the current example are similar.

The solution to this problem has been found at:

xold = x

xold =
-0.9727
0.4686

The function value at the solution is:

minfval = fval

minfval =
-0.2449

The total number of function evaluations was:

Fgradevals = output.funcCount

Fgradevals =
14

Compare this to the number of function evaluations without gradients:

Fevals

Fevals =
38

Changing the Default Termination Tolerances

This time we solve the same constrained problem

more accurately by overriding the default termination criteria (options.StepTolerance and options.OptimalityTolerance). We continue to use gradients. The default values for fmincon's interior-point algorithm are options.StepTolerance = 1e-10, options.OptimalityTolerance = 1e-6.
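For example, the tolerances can be tightened as follows (the particular values 1e-15 and 1e-8 are illustrative, not required):

```matlab
% Tighten both termination tolerances, keeping the other options
options = optimoptions(options, ...
    'StepTolerance',1e-15, ...
    'OptimalityTolerance',1e-8);
```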

We now choose to see more decimals in the solution, in order to see more accurately the difference that the new tolerances make.

format long

The optimizer found a solution at:

x

x =
-0.972742227363546
0.468569289098342

Compare this to the previous value:

xold

xold =
-0.972742694488360
0.468569966693330

The change is

x - xold

ans =
1.0e-06 *
0.467124813385844
-0.677594988729435

The function value at the solution is:

fval

fval =
-0.244893137879894

The solution improved by

fval - minfval

ans =
-3.996450220755676e-07

(This difference is negative because the new solution attains a smaller objective value.)

The total number of function evaluations was:

output.funcCount

ans =
15

Compare this to the number of function evaluations when the problem is solved with user-provided gradients but with the default tolerances:

Fgradevals

Fgradevals =
14

Constrained Optimization Example with User-Supplied Hessian

If you supply a Hessian in addition to a gradient, solvers are even more accurate and efficient.

fmincon's interior-point solver takes a Hessian matrix as a separate function (not part of the objective function). The Hessian function H(x,lambda) should evaluate the Hessian of the Lagrangian; see the User's Guide for the definition of this term.

Solvers calculate the values lambda.ineqnonlin and lambda.eqnonlin; your Hessian function tells solvers how to use these values.
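As a sketch of the general pattern (hessf and hessg are hypothetical helper functions returning the 2-by-2 Hessians of the objective and of the single nonlinear inequality constraint g, respectively):

```matlab
function h = hessfun(x,lambda)
% Hessian of the Lagrangian: the objective's Hessian plus the
% multiplier-weighted Hessian of each nonlinear constraint
h = hessf(x) + lambda.ineqnonlin(1)*hessg(x);
end
```

The function is then passed to the solver through the options, for example: options = optimoptions(options,'HessianFcn',@hessfun);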

In this problem there is only one inequality constraint, so the Hessian is: