In the following function calls,
x is a column vector of the same type and dimension as
xlow.
The function call f(x) returns the value of the objective function
at x as a scalar with the same type as x.
The function call df(x) returns the derivative of the
objective function at x as a row vector with the same type as
x and with the same number of columns as x has rows.
The function call g(x) returns the value of the constraints at the
point x as a column vector with the same type as x.
The function call dg(x) returns the
derivative of the constraints at x as a matrix with the same
type as x,
the same number of rows as g(x),
and the same number of columns as x has rows.
Element (i, j) of dg(x) is the partial derivative of the
i-th component of g with respect to the
j-th component of x.
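To make these shape conventions concrete, here is an illustrative Python sketch (the manual itself does not fix a language; the callback names and the nested-list representation of vectors are assumptions made for this example) of the four callbacks for a problem with two variables and one constraint:

```python
# Illustrative only: these callbacks follow the shape conventions described
# above for a sample problem with n = 2 variables and m = 1 constraint.
# A column vector is represented as an n x 1 nested list, a row vector as 1 x n.

def f(x):
    # objective f(x) = x1^2 + x2^2, returned as a scalar
    return x[0][0] ** 2 + x[1][0] ** 2

def df(x):
    # derivative of f: a 1 x n row vector (n = number of rows of x)
    return [[2.0 * x[0][0], 2.0 * x[1][0]]]

def g(x):
    # constraints: an m x 1 column vector (here m = 1, the constraint 1 - x1)
    return [[1.0 - x[0][0]]]

def dg(x):
    # derivative of g: an m x n matrix; element (i, j) is the partial of the
    # i-th component of g with respect to the j-th component of x
    return [[-1.0, 0.0]]

x = [[0.5], [0.25]]   # a column vector with the same shape as xlow would have
```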

The integer scalar neq specifies the number of equality
constraints and must be less than or equal to the number of rows of
g(x).
The real or double-precision column vector xlow specifies the
lower limits for the minimization problem.
The vector xup specifies the upper limits for the minimization
problem and has the same type and dimension as xlow.
Either xup(i) is equal to xlow(i), or
the difference xup(i) - xlow(i) is used to scale the problem with
respect to the i-th component of x.
Thus, if there are no upper and lower limits,
xlow and xup should be set to give a meaningful
scaling for the problem.
The vector xini specifies the point at which the
successive quadratic programming method is started.
It has the same type and dimension as xlow and must be
between xlow and xup.
The integer scalar maxitr specifies the maximum number of
iterations to try before giving up on convergence of the method.
The scalar eps has the same type as
xlow and specifies the convergence criterion.
Convergence is achieved when the solution of the approximate
quadratic programming subproblem
is within eps times the upper limit minus the lower limit; i.e.,
|xout(i) - xmin(i)| < eps * (xup(i) - xlow(i)) for all i,
where xmin is the solution of the
quadratic programming subproblem that approximates the original
problem near the point xout.
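As a sketch (the function name and argument layout here are illustrative, not part of the library), the scaled convergence test above amounts to:

```python
def converged(xout, xmin, xlow, xup, eps):
    # True when every component of the subproblem solution xmin is within
    # eps times the scaled range at the current iterate xout; i.e.
    # |xout(i) - xmin(i)| < eps * (xup(i) - xlow(i)) for all i
    return all(
        abs(xo - xm) < eps * (up - lo)
        for xo, xm, lo, up in zip(xout, xmin, xlow, xup)
    )

# example: component-wise differences of 0.001 against a range of 10
ok = converged([1.0, 0.0], [1.001, -0.001], [-5.0, -5.0], [5.0, 5.0], 1e-2)
```

Note that a wide xlow-to-xup range loosens the test, which is why the limits should give a meaningful scaling even when they are not expected to be active.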

The input values of xout, ygout, ylimout
and flagout do not matter.
The column vector xout
is set to the approximate solution and has the same type and
dimension as xlow.
The row vector ygout is set to the Lagrange multipliers
corresponding to the constraints defined using the function g
and has the same type as xlow and the same number of columns as
g(x) has rows.
The row vector ylimout
is set to the Lagrange multipliers corresponding to the limits
xlow and xup.
It has the same type as xlow and the same number of columns as
xlow has rows.
The character row vector flagout
is set to
"converged" if the eps criterion is achieved,
"line search failed" if the method fails during a line search,
and "max iterations" if the maximum number of iterations is reached
without achieving convergence.
The i-th column of the return
value from sqp contains the value of x at the
i-th iteration.
The return value has the same type and row dimension as xlow.

The integer scalar level specifies the level of tracing
inside of the function sqp.
If level > 1, the text "Beginning sqp"
and the value of
xlow,
xini,
xup,
f, and
g
are printed before attempting to solve the problem.
The following values are printed at each iteration:
the iteration number,
the current value of f(x),
the current value of the penalty function,
and the maximum scaled difference between the
approximate solution and the current point.
In addition, the value of flagout
is printed just before sqp returns.
If level > 2,
the following values are printed at each iteration:
the difference between the approximate subproblem solution
and the current point,
the value of g(x),
the Lagrange multiplier values used in the penalty function,
and the partial of the Lagrangian with respect to x.
In addition, the line search is traced with lam
being the line search step size,
dp being the change in the penalty function,
and dapx being the corresponding change in the
penalty function for the approximation.
If level > 3,
the value of the approximation to the Hessian of the
Lagrangian is printed at each iteration.
If level > 4,
the tracing level
level - 3 is used inside of Qpro
when it solves the approximate subproblem.

Example
The following program solves the problem
minimize x(1)^2 + x(2)^2 with respect to x
subject to 1 - x(1) = 0.
Note that the solution to this problem is x(1) = 1
and x(2) = 0.
The lower and upper limits for x are chosen large
enough so that they should not be active during the
optimization process.
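The program listing itself is not reproduced here. As an illustration in plain Python (this is not the library's sqp routine), the problem above is small enough to solve exactly: because the objective is quadratic and the constraint linear, a single Newton step on the KKT conditions of the Lagrangian from any start point inside the limits gives the solution x = (1, 0) with Lagrange multiplier 2:

```python
# Illustrative sketch: solve  minimize x1^2 + x2^2  subject to 1 - x1 = 0
# by one Newton step on the KKT system (exact here, since the objective is
# quadratic and the constraint linear):
#   [ H   A^T ] [dx]   [ -grad_f ]
#   [ A   0   ] [ y] = [   -g    ]
# where H is the Hessian of f and A = dg(x), both evaluated at xini.

def solve3(M, b):
    # tiny Gaussian elimination with partial pivoting for a 3x3 system
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

xini = [0.5, 0.5]                      # start point, strictly inside the limits
grad = [2.0 * xini[0], 2.0 * xini[1]]  # gradient of f at xini
gval = 1.0 - xini[0]                   # constraint value at xini
K = [[2.0, 0.0, -1.0],                 # Hessian of f bordered by dg^T
     [0.0, 2.0,  0.0],                 # (dg = [-1, 0] for g(x) = 1 - x1)
     [-1.0, 0.0, 0.0]]
dx1, dx2, y = solve3(K, [-grad[0], -grad[1], -gval])
x = [xini[0] + dx1, xini[1] + dx2]     # x == [1.0, 0.0]; multiplier y == 2.0
```

An iterative sqp solver repeats this kind of subproblem solve, with a line search and an updated Hessian approximation, until the eps criterion described above is met.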