Given the residuals f(x) (an m-dimensional real function of n real
variables) and the loss function rho(s) (a scalar function), least_squares
finds a local minimum of the cost function F(x):

    minimize F(x) = 0.5 * sum(rho(f_i(x)**2), i = 0, ..., m - 1)
    subject to lb <= x <= ub

The purpose of the loss function rho(s) is to reduce the influence of
outliers on the solution.

Parameters:

fun : callable

Function which computes the vector of residuals, with the signature
fun(x, *args, **kwargs), i.e., the minimization proceeds with
respect to its first argument. The argument x passed to this
function is an ndarray of shape (n,) (never a scalar, even for n=1).
It must return a 1-d array_like of shape (m,) or a scalar. If the
argument x is complex or the function fun returns complex
residuals, it must be wrapped in a real function of real arguments,
as shown at the end of the Examples section.
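For illustration, here is a minimal sketch of such a residual function; the exponential model, the data arrays t_data and y_data, and the use of args are hypothetical choices introduced here, not part of the API description:

    import numpy as np
    from scipy.optimize import least_squares

    # Residuals of an illustrative exponential model y = a * exp(b * t)
    # against hypothetical data arrays passed through `args`.
    def fun(x, t_data, y_data):
        return x[0] * np.exp(x[1] * t_data) - y_data  # 1-d array of shape (m,)

    t_data = np.linspace(0, 1, 10)
    y_data = 2.0 * np.exp(0.5 * t_data)
    res = least_squares(fun, np.array([1.0, 0.0]), args=(t_data, y_data))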

x0 : array_like with shape (n,) or float

Initial guess on independent variables. If float, it will be treated
as a 1-d array with one element.

jac : {‘2-point’, ‘3-point’, ‘cs’, callable}, optional

Method of computing the Jacobian matrix (an m-by-n matrix, where
element (i, j) is the partial derivative of f[i] with respect to
x[j]). The keywords select a finite difference scheme for numerical
estimation. The scheme ‘3-point’ is more accurate, but requires
twice as many operations as ‘2-point’ (default). The
scheme ‘cs’ uses complex steps, and while potentially the most
accurate, it is applicable only when fun correctly handles
complex inputs and can be analytically continued to the complex
plane. Method ‘lm’ always uses the ‘2-point’ scheme. If callable,
it is used as jac(x, *args, **kwargs) and should return a
good approximation (or the exact value) for the Jacobian as an
array_like (np.atleast_2d is applied), a sparse matrix or a
scipy.sparse.linalg.LinearOperator.
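Continuing the hypothetical exponential-model sketch above (fun, t_data and y_data are the names assumed there), an analytic Jacobian callable might look like:

    # Analytic Jacobian of the illustrative exponential residuals, returned
    # as a dense (m, n) ndarray: column 0 is d f_i / d a, column 1 is d f_i / d b.
    def jac(x, t_data, y_data):
        J = np.empty((t_data.size, 2))
        J[:, 0] = np.exp(x[1] * t_data)
        J[:, 1] = x[0] * t_data * np.exp(x[1] * t_data)
        return J

    res = least_squares(fun, np.array([1.0, 0.0]), jac=jac, args=(t_data, y_data))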

bounds : 2-tuple of array_like, optional

Lower and upper bounds on independent variables. Defaults to no bounds.
Each array must match the size of x0 or be a scalar, in the latter
case a bound will be the same for all variables. Use np.inf with
an appropriate sign to disable bounds on all or some variables.
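For example, reusing the hypothetical residual function sketched under fun above, keeping the first variable nonnegative while leaving the second unbounded could be written as:

    # Lower bounds given per variable; the scalar upper bound np.inf disables
    # the upper constraint for all variables.
    res = least_squares(fun, np.array([1.0, 0.0]),
                        bounds=([0.0, -np.inf], np.inf),
                        args=(t_data, y_data))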

method : {‘trf’, ‘dogbox’, ‘lm’}, optional

Algorithm to perform minimization.

‘trf’ : Trust Region Reflective algorithm, particularly suitable
for large sparse problems with bounds. Generally robust method.

‘dogbox’ : dogleg algorithm with rectangular trust regions,
typical use case is small problems with bounds. Not recommended
for problems with rank-deficient Jacobian.

‘lm’ : Levenberg-Marquardt algorithm as implemented in MINPACK.
Doesn’t handle bounds and sparse Jacobians. Usually the most
efficient method for small unconstrained problems.

Default is ‘trf’. See Notes for more information.

ftol : float, optional

Tolerance for termination by the change of the cost function. Default
is 1e-8. The optimization process is stopped when dF < ftol * F,
and there was an adequate agreement between a local quadratic model and
the true model in the last step.

xtol : float, optional

Tolerance for termination by the change of the independent variables.
Default is 1e-8. The exact condition depends on the method used:

For ‘trf’ and ‘dogbox’ : norm(dx) < xtol * (xtol + norm(x))

For ‘lm’ : Delta < xtol * norm(xs), where Delta is
a trust-region radius and xs is the value of x
scaled according to x_scale parameter (see below).

gtol : float, optional

Tolerance for termination by the norm of the gradient. Default is 1e-8.
The exact condition depends on the method used:

For ‘trf’ : norm(g_scaled, ord=np.inf) < gtol, where
g_scaled is the value of the gradient scaled to account for
the presence of the bounds [STIR].

For ‘dogbox’ : norm(g_free, ord=np.inf) < gtol, where
g_free is the gradient with respect to the variables which
are not in the optimal state on the boundary.

For ‘lm’ : the maximum absolute value of the cosine of angles
between columns of the Jacobian and the residual vector is less
than gtol, or the residual vector is zero.

x_scale : array_like or ‘jac’, optional

Characteristic scale of each variable. Setting x_scale is equivalent
to reformulating the problem in scaled variables xs = x / x_scale.
An alternative view is that the size of a trust region along j-th
dimension is proportional to x_scale[j]. Improved convergence may
be achieved by setting x_scale such that a step of a given size
along any of the scaled variables has a similar effect on the cost
function. If set to ‘jac’, the scale is iteratively updated using the
inverse norms of the columns of the Jacobian matrix (as described in
[JJMore]).
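A one-line sketch of the ‘jac’ mode, again reusing the hypothetical fun and data arrays from the sketch above:

    # Let the per-variable scales track the inverse norms of the Jacobian columns.
    res = least_squares(fun, np.array([1.0, 0.0]), x_scale='jac', args=(t_data, y_data))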

loss : str or callable, optional

Determines the loss function. The following keyword values are allowed:

‘linear’ (default) : rho(z) = z. Gives a standard
least-squares problem.

‘soft_l1’ : rho(z) = 2 * ((1 + z)**0.5 - 1). The smooth
approximation of the l1 (absolute value) loss. Usually a good
choice for robust least squares.

‘huber’ : rho(z) = z if z <= 1 else 2 * z**0.5 - 1. Works
similarly to ‘soft_l1’.

‘cauchy’ : rho(z) = ln(1 + z). Severely weakens the influence of
outliers, but may cause difficulties in the optimization process.

‘arctan’ : rho(z) = arctan(z). Limits the maximum loss on
a single residual, has properties similar to ‘cauchy’.

If callable, it must take a 1-d ndarray z = f**2 and return an
array_like with shape (3, m) where row 0 contains function values,
row 1 contains first derivatives and row 2 contains second
derivatives. Method ‘lm’ supports only ‘linear’ loss.
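As a sketch of the callable form, here is a hand-rolled smooth-l1 loss (the same shape as the built-in ‘soft_l1’), with the function and its first and second derivatives stacked into the required (3, m) array; fun, t_data and y_data are the hypothetical names from the earlier sketch:

    import numpy as np

    def smooth_l1(z):
        t = 1 + z
        rho = np.empty((3, z.size))
        rho[0] = 2 * (np.sqrt(t) - 1)   # rho(z)
        rho[1] = t ** -0.5              # rho'(z)
        rho[2] = -0.5 * t ** -1.5       # rho''(z)
        return rho

    res = least_squares(fun, np.array([1.0, 0.0]), loss=smooth_l1, args=(t_data, y_data))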

f_scale : float, optional

Value of soft margin between inlier and outlier residuals, default
is 1.0. The loss function is evaluated as follows
rho_(f**2) = C**2 * rho(f**2 / C**2), where C is f_scale,
and rho is determined by loss parameter. This parameter has
no effect with loss='linear', but for other loss values it is
of crucial importance.

max_nfev : None or int, optional

Maximum number of function evaluations before the termination.
If None (default), the value is chosen automatically:

For ‘trf’ and ‘dogbox’ : 100 * n.

For ‘lm’ : 100 * n if jac is callable and 100 * n * (n + 1)
otherwise (because ‘lm’ counts function calls in Jacobian
estimation).

diff_step : None or array_like, optional

Determines the relative step size for the finite difference
approximation of the Jacobian. The actual step is computed as
x * diff_step. If None (default), then diff_step is taken to be
a conventional “optimal” power of machine epsilon for the finite
difference scheme used [NR].

tr_solver : {None, ‘exact’, ‘lsmr’}, optional

Method for solving trust-region subproblems, relevant only for ‘trf’
and ‘dogbox’ methods.

‘exact’ is suitable for not very large problems with dense
Jacobian matrices. The computational complexity per iteration is
comparable to a singular value decomposition of the Jacobian
matrix.

‘lsmr’ is suitable for problems with sparse and large Jacobian
matrices. It uses the iterative procedure
scipy.sparse.linalg.lsmr for finding a solution of a linear
least-squares problem and only requires matrix-vector product
evaluations.

If None (default) the solver is chosen based on the type of Jacobian
returned on the first iteration.

tr_options : dict, optional

Keyword options passed to trust-region solver.

tr_solver='exact': tr_options are ignored.

tr_solver='lsmr': options for scipy.sparse.linalg.lsmr.
Additionally, method='trf' supports a ‘regularize’ option
(bool, default is True) which adds a regularization term to the
normal equation, which improves convergence if the Jacobian is
rank-deficient [Byrd] (eq. 3.4).

jac_sparsity : {None, array_like, sparse matrix}, optional

Defines the sparsity structure of the Jacobian matrix for finite
difference estimation; its shape must be (m, n). If the Jacobian has
only a few non-zero elements in each row, providing the sparsity
structure will greatly speed up the computations [Curtis]. A zero
entry means that a corresponding element in the Jacobian is identically
zero. If provided, forces the use of ‘lsmr’ trust-region solver.
If None (default) then dense differencing will be used. Has no effect
for ‘lm’ method.
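For instance, a banded sparsity pattern for a hypothetical problem with n variables and m = n residuals, where residual i depends only on x[i-1], x[i] and x[i+1], could be built as follows (the residual function itself is not shown):

    import numpy as np
    from scipy.sparse import lil_matrix

    n = 1000  # hypothetical problem size
    sparsity = lil_matrix((n, n), dtype=int)
    i = np.arange(n)
    sparsity[i, i] = 1                # residual i depends on x[i]
    sparsity[i[1:], i[1:] - 1] = 1    # ... and on x[i - 1]
    sparsity[i[:-1], i[:-1] + 1] = 1  # ... and on x[i + 1]
    # least_squares(fun_banded, x0, jac_sparsity=sparsity)  # fun_banded, x0 hypothetical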

Method ‘lm’ (Levenberg-Marquardt) calls a wrapper over least-squares
algorithms implemented in MINPACK (lmder, lmdif). It runs the
Levenberg-Marquardt algorithm formulated as a trust-region type algorithm.
The implementation is based on paper [JJMore], it is very robust and
efficient with a lot of smart tricks. It should be your first choice
for unconstrained problems. Note that it doesn’t support bounds. Also
it doesn’t work when m < n.

Method ‘trf’ (Trust Region Reflective) is motivated by the process of
solving a system of equations, which constitute the first-order optimality
condition for a bound-constrained minimization problem as formulated in
[STIR]. The algorithm iteratively solves trust-region subproblems
augmented by a special diagonal quadratic term and with trust-region shape
determined by the distance from the bounds and the direction of the
gradient. These enhancements help to avoid taking steps directly into the bounds
and to explore the whole space of variables efficiently. To further improve
convergence, the algorithm considers search directions reflected from the
bounds. To obey theoretical requirements, the algorithm keeps iterates
strictly feasible. With dense Jacobians trust-region subproblems are
solved by an exact method very similar to the one described in [JJMore]
(and implemented in MINPACK). The difference from the MINPACK
implementation is that a singular value decomposition of a Jacobian
matrix is done once per iteration, instead of a QR decomposition and series
of Givens rotation eliminations. For large sparse Jacobians a 2-d subspace
approach of solving trust-region subproblems is used [STIR], [Byrd].
The subspace is spanned by a scaled gradient and an approximate
Gauss-Newton solution delivered by scipy.sparse.linalg.lsmr. When no
constraints are imposed the algorithm is very similar to MINPACK and has
generally comparable performance. The algorithm works quite robustly in
unbounded and bounded problems, and is therefore chosen as the default algorithm.

Method ‘dogbox’ operates in a trust-region framework, but considers
rectangular trust regions as opposed to conventional ellipsoids [Voglis].
The intersection of a current trust region and initial bounds is again
rectangular, so on each iteration a quadratic minimization problem subject
to bound constraints is solved approximately by Powell’s dogleg method
[NumOpt]. The required Gauss-Newton step can be computed exactly for
dense Jacobians or approximately by scipy.sparse.linalg.lsmr for large
sparse Jacobians. The algorithm is likely to exhibit slow convergence when
the rank of the Jacobian is less than the number of variables. The algorithm
often outperforms ‘trf’ in bounded problems with a small number of
variables.

Robust loss functions are implemented as described in [BA]. The idea
is to modify a residual vector and a Jacobian matrix on each iteration
such that computed gradient and Gauss-Newton Hessian approximation match
the true gradient and Hessian approximation of the cost function. Then
the algorithm proceeds in a normal way, i.e. robust loss functions are
implemented as a simple wrapper over standard least-squares algorithms.

In the following example we find a minimum of the Rosenbrock function
without bounds on the independent variables. Notice that we only provide
the vector of the residuals; the algorithm constructs the cost function as
a sum of squares of the residuals, which gives the Rosenbrock function.
The exact minimum is at x = [1.0, 1.0].
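A minimal sketch of that example (the names fun_rosenbrock, x0_rosenbrock and res_1 are illustrative):

    import numpy as np
    from scipy.optimize import least_squares

    # Residuals whose sum of squares is the Rosenbrock function.
    def fun_rosenbrock(x):
        return np.array([10 * (x[1] - x[0]**2), (1 - x[0])])

    x0_rosenbrock = np.array([2.0, 2.0])
    res_1 = least_squares(fun_rosenbrock, x0_rosenbrock)
    # res_1.x is approximately [1., 1.] and res_1.cost is approximately 0.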

We now constrain the variables in such a way that the previous solution
becomes infeasible. Specifically, we require that x[1] >= 1.5 while
x[0] is left unconstrained. To this end, we specify the bounds parameter
to least_squares in the form bounds=([-np.inf, 1.5], np.inf).
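A sketch of the constrained call, continuing the Rosenbrock names from above:

    res_2 = least_squares(fun_rosenbrock, x0_rosenbrock,
                          bounds=([-np.inf, 1.5], np.inf))
    # The unconstrained minimum [1, 1] is now infeasible; res_2.x[1] ends up
    # (approximately) on the bound at 1.5.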

Let’s also solve a curve fitting problem using a robust loss function to
take care of outliers in the data. Define the model function as
y = a + b * exp(c * t), where t is a predictor variable, y is an
observation and a, b, c are parameters to estimate.

First, define the function which generates the data with noise and
outliers, define the model parameters, and generate data:
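A sketch of that setup, followed by a standard fit and two robust fits; the specific parameter values, the f_scale setting and the variable names are illustrative choices:

    import numpy as np
    from scipy.optimize import least_squares

    def gen_data(t, a, b, c, noise=0.0, n_outliers=0, random_state=0):
        # Exponential model values plus Gaussian noise, with a few points
        # turned into strong outliers.
        y = a + b * np.exp(t * c)
        rnd = np.random.RandomState(random_state)
        error = noise * rnd.randn(t.size)
        outliers = rnd.randint(0, t.size, n_outliers)
        error[outliers] *= 10
        return y + error

    a, b, c = 0.5, 2.0, -1.0           # "true" model parameters
    t_min, t_max, n_points = 0, 10, 15
    t_train = np.linspace(t_min, t_max, n_points)
    y_train = gen_data(t_train, a, b, c, noise=0.1, n_outliers=3)

    # Residuals of the model y = a + b * exp(c * t) against the noisy data.
    def fun(x, t, y):
        return x[0] + x[1] * np.exp(x[2] * t) - y

    x0 = np.array([1.0, 1.0, 0.0])
    res_lsq = least_squares(fun, x0, args=(t_train, y_train))
    res_soft_l1 = least_squares(fun, x0, loss='soft_l1', f_scale=0.1,
                                args=(t_train, y_train))
    res_log = least_squares(fun, x0, loss='cauchy', f_scale=0.1,
                            args=(t_train, y_train))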

And finally plot all the curves. We see that by selecting an appropriate
loss we can get estimates close to optimal even in the presence of
strong outliers. But keep in mind that it is generally recommended to try
the ‘soft_l1’ or ‘huber’ losses first (if robustness is needed at all), as the
other two options may cause difficulties in the optimization process.
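A plotting sketch that reuses the names from the data-generation block above (matplotlib is assumed to be available):

    import matplotlib.pyplot as plt

    t_test = np.linspace(t_min, t_max, 300)
    y_true = gen_data(t_test, a, b, c)
    y_lsq = gen_data(t_test, *res_lsq.x)
    y_soft_l1 = gen_data(t_test, *res_soft_l1.x)
    y_log = gen_data(t_test, *res_log.x)

    plt.plot(t_train, y_train, 'o', label='data')
    plt.plot(t_test, y_true, 'k', linewidth=2, label='true')
    plt.plot(t_test, y_lsq, label='linear loss')
    plt.plot(t_test, y_soft_l1, label='soft_l1 loss')
    plt.plot(t_test, y_log, label='cauchy loss')
    plt.xlabel('t')
    plt.ylabel('y')
    plt.legend()
    plt.show()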