Lasso was originally formulated for least squares models, and this simple case reveals a substantial amount about the behavior of the estimator, including its relationship to ridge regression and best subset selection and the connection between lasso coefficient estimates and so-called soft thresholding. It also reveals that (like standard linear regression) the coefficient estimates need not be unique if covariates are collinear.

Lasso was introduced in order to improve the prediction accuracy and interpretability of regression models by altering the model fitting process to select only a subset of the provided covariates for use in the final model, rather than using all of them.[2][4] It was developed independently in geophysics, based on prior work that used the $\ell^1$ penalty for both fitting and penalization of the coefficients, and by the statistician Robert Tibshirani, based on Breiman's nonnegative garrote.[4][5]

Prior to lasso, the most widely used method for choosing which covariates to include was stepwise selection, which only improves prediction accuracy in certain cases, such as when only a few covariates have a strong relationship with the outcome. However, in other cases, it can make prediction error worse. Also, at the time, ridge regression was the most popular technique for improving prediction accuracy. Ridge regression improves prediction error by shrinking large regression coefficients in order to reduce overfitting, but it does not perform covariate selection and therefore does not help to make the model more interpretable.

Lasso is able to achieve both of these goals by forcing the sum of the absolute values of the regression coefficients to be less than a fixed value, which forces certain coefficients to be set to zero, effectively choosing a simpler model that does not include those coefficients. This idea is similar to ridge regression, in which the sum of the squares of the coefficients is forced to be less than a fixed value, though in the case of ridge regression this only shrinks the size of the coefficients; it does not set any of them to zero.
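To make this concrete, the following is a minimal sketch (not part of the original presentation) contrasting the two penalties on synthetic data, using scikit-learn's Lasso and Ridge estimators; the data, penalty strengths, and coefficient values are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
true_beta = np.array([3.0, -2.0, 1.5] + [0.0] * 7)  # only 3 relevant covariates
y = X @ true_beta + rng.normal(scale=0.5, size=100)

lasso = Lasso(alpha=0.5).fit(X, y)  # ell^1 penalty: some coefficients become exactly zero
ridge = Ridge(alpha=0.5).fit(X, y)  # ell^2 penalty: coefficients shrink but stay nonzero

print("lasso nonzero coefficients:", np.count_nonzero(lasso.coef_))  # typically ~3
print("ridge nonzero coefficients:", np.count_nonzero(ridge.coef_))  # all 10
```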

Lasso was originally introduced in the context of least squares, and it can be instructive to consider this case first, since it illustrates many of lasso’s properties in a straightforward setting.

Consider a sample consisting of N cases, each of which consists of p covariates and a single outcome. Let $y_i$ be the outcome and $x_i := (x_{i1}, x_{i2}, \ldots, x_{ip})^T$ be the covariate vector for the ith case. Then the objective of lasso is to solve

$$\min_{\beta_0, \beta} \left\{ \frac{1}{N} \sum_{i=1}^N \left( y_i - \beta_0 - x_i^T \beta \right)^2 \right\} \text{ subject to } \sum_{j=1}^p |\beta_j| \leq t.$$

Here $t$ is a prespecified free parameter that determines the amount of regularization. Letting $X$ be the covariate matrix, so that $X_{ij} = (x_i)_j$ and $x_i^T$ is the ith row of $X$, the expression can be written more compactly as

$$\min_{\beta_0, \beta} \left\{ \frac{1}{N} \left\| y - \beta_0 \mathbf{1} - X\beta \right\|_2^2 \right\} \text{ subject to } \|\beta\|_1 \leq t,$$

where $\|u\|_p = \left( \sum_{i=1}^N |u_i|^p \right)^{1/p}$ is the standard $\ell^p$ norm.

Denoting the scalar mean of the data points $x_i$ by $\bar{x}$ and the mean of the response variables $y_i$ by $\bar{y}$, the resulting estimate for $\beta_0$ will end up being $\hat{\beta}_0 = \bar{y} - \bar{x}^T \beta$, so that

$$y_i - \hat{\beta}_0 - x_i^T \beta = y_i - (\bar{y} - \bar{x}^T \beta) - x_i^T \beta = (y_i - \bar{y}) - (x_i - \bar{x})^T \beta,$$

and therefore it is standard to work with variables that have been centered (made zero-mean). Additionally, the covariates are typically standardized $\left( \textstyle \sum_{i=1}^N x_{ij}^2 = 1 \right)$ so that the solution does not depend on the measurement scale.

Assuming first that the covariates are orthonormal, so that $(x_i \mid x_j) = \delta_{ij}$, where $(\cdot \mid \cdot)$ is the inner product and $\delta_{ij}$ is the Kronecker delta, or, equivalently, $X^T X = I$, then using subgradient methods it can be shown that

$$\hat{\beta}_j = S_{N\lambda}\left( \hat{\beta}^{\text{OLS}}_j \right) = \hat{\beta}^{\text{OLS}}_j \max\left( 0, 1 - \frac{N\lambda}{\left| \hat{\beta}^{\text{OLS}}_j \right|} \right), \qquad \hat{\beta}^{\text{OLS}} = (X^T X)^{-1} X^T y = X^T y.$$

$S_\alpha$ is referred to as the soft thresholding operator, since it translates values towards zero (making them exactly zero if they are small enough) instead of setting smaller values to zero and leaving larger ones untouched, as the hard thresholding operator, often denoted $H_\alpha$, would.
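As a sketch, the soft thresholding operator $S_\alpha(z) = \operatorname{sgn}(z) \max(|z| - \alpha, 0)$ can be written in a few lines of Python (the function name and test values are illustrative):

```python
import numpy as np

def soft_threshold(z, alpha):
    """Soft thresholding S_alpha: translate z toward zero by alpha,
    returning exactly zero when |z| <= alpha."""
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

print(soft_threshold(np.array([-3.0, -0.5, 0.2, 2.0]), 1.0))
# [-2. -0.  0.  1.]
```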

For comparison, ridge regression (with penalty $\lambda \|\beta\|_2^2$) yields $\hat{\beta}_j = (1 + N\lambda)^{-1} \hat{\beta}^{\text{OLS}}_j$, shrinking all coefficients by a uniform factor. Best subset selection, in contrast, corresponds in Lagrangian form to minimizing $\frac{1}{N} \|y - X\beta\|_2^2 + \lambda \|\beta\|_0$, where $\|\cdot\|_0$ is the "$\ell^0$ norm", which is defined as $\|z\|_0 = m$ if exactly m components of z are nonzero. In this case, it can be shown that

$$\hat{\beta}_j = H_{\sqrt{N\lambda}}\left( \hat{\beta}^{\text{OLS}}_j \right) = \hat{\beta}^{\text{OLS}}_j \, \mathrm{I}\left( \left| \hat{\beta}^{\text{OLS}}_j \right| \geq \sqrt{N\lambda} \right),$$

where $H_\alpha$ is the so-called hard thresholding function and $\mathrm{I}$ is an indicator function (it is 1 if its argument is true and 0 otherwise).

Therefore, the lasso estimates share features of both ridge and best subset selection regression: like ridge regression, lasso shrinks the magnitude of all the coefficients, but, like best subset selection, it also sets some of them to zero. Additionally, while ridge regression scales all of the coefficients by a constant factor, lasso instead translates the coefficients towards zero by a constant value and sets them to zero if they reach it.
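The contrast can be seen by applying the three rules from the orthonormal-design formulas above to a hypothetical vector of OLS estimates (the numbers and the choice $N\lambda = 1$ are illustrative):

```python
import numpy as np

beta_ols = np.array([-2.0, -0.3, 0.5, 3.0])  # hypothetical OLS estimates
nlam = 1.0                                   # stands in for N*lambda above

ridge_est = beta_ols / (1.0 + nlam)                                       # uniform rescaling
lasso_est = np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - nlam, 0.0)  # soft thresholding
subset_est = beta_ols * (np.abs(beta_ols) >= np.sqrt(nlam))               # hard thresholding

print(ridge_est)   # [-1.   -0.15  0.25  1.5 ]: all shrunk, none zero
print(lasso_est)   # [-1.   -0.    0.    2.  ]: translated toward zero, small ones zeroed
print(subset_est)  # [-2.    0.    0.    3.  ]: small ones zeroed, large ones untouched
```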

Returning to the general case, in which the different covariates may not be independent, a special case may be considered in which two of the covariates, say j and k, are identical for each case, so that $x_{(j)} = x_{(k)}$, where $x_{(j),i} = x_{ij}$. Then the values of $\beta_j$ and $\beta_k$ that minimize the lasso objective function are not uniquely determined. In fact, if there is some solution $\hat{\beta}$ in which $\hat{\beta}_j \hat{\beta}_k \geq 0$, then for any $s \in [0,1]$, replacing $\hat{\beta}_j$ by $s(\hat{\beta}_j + \hat{\beta}_k)$ and $\hat{\beta}_k$ by $(1-s)(\hat{\beta}_j + \hat{\beta}_k)$, while keeping all the other $\hat{\beta}_i$ fixed, gives a new solution, so the lasso objective function has a continuum of valid minimizers.[6] Several variants of the lasso, including the elastic net, have been designed to address this shortcoming; they are discussed below.
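This non-uniqueness is easy to verify numerically. The sketch below (with illustrative data) duplicates a covariate and shows that every split of the shared weight gives the same lasso objective value:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
X = np.column_stack([x, x])               # covariates j and k are identical
y = 2.0 * x + rng.normal(scale=0.1, size=50)
lam = 0.1

def lasso_objective(beta):
    return np.mean((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta))

total = 1.8                               # total weight shared by the two covariates
for s in [0.0, 0.3, 0.7, 1.0]:
    beta = np.array([s * total, (1 - s) * total])
    print(s, lasso_objective(beta))       # the same objective value for every s
```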

As discussed above, lasso can set coefficients to zero, while ridge regression, which appears superficially similar, cannot. This is due to the difference in the shape of the constraint boundaries in the two cases. Both lasso and ridge regression can be interpreted as minimizing the same objective function

$$\frac{1}{N} \left\| y - \beta_0 \mathbf{1} - X\beta \right\|_2^2$$

but with respect to different constraints: $\|\beta\|_1 \leq t$ for lasso and $\|\beta\|_2^2 \leq t$ for ridge. The constraint region defined by the $\ell^1$ norm is a square rotated so that its corners lie on the axes (in general a cross-polytope), while the region defined by the $\ell^2$ norm is a circle (in general an n-sphere), which is rotationally invariant and therefore has no corners. A convex object that lies tangent to the boundary, such as a line, is likely to encounter a corner (or a higher-dimensional equivalent) of the cross-polytope, at which some components of $\beta$ are identically zero. In the case of an n-sphere, by contrast, the points on the boundary at which some components of $\beta$ are zero are not distinguished from the others, and the convex object is no more likely to contact a point at which some components of $\beta$ are zero than one at which none of them are.


Just as ridge regression can be interpreted as linear regression for which the coefficients have been assigned normal prior distributions, lasso can be interpreted as linear regression for which the coefficients have Laplace prior distributions. The Laplace distribution is sharply peaked at zero (its first derivative is discontinuous) and it concentrates its probability mass closer to zero than does the normal distribution. This provides an alternative explanation of why lasso tends to set some coefficients to zero, while ridge regression does not.[2]

Lasso can also be viewed as a convex relaxation of the best subset selection regression problem, which is to find the subset of $\leq k$ covariates that results in the smallest value of the objective function for some fixed $k \leq n$, where n is the total number of covariates. The "$\ell^0$ norm", $\|\cdot\|_0$, which gives the number of nonzero entries of a vector, is the limiting case of the "$\ell^p$ norms", of the form $\|x\|_p = \left( \textstyle \sum_{j=1}^n |x_j|^p \right)^{1/p}$ (where the quotation marks signify that these are not really norms for p < 1, since $\|\cdot\|_p$ is then not convex, so the triangle inequality does not hold). Therefore, since p = 1 is the smallest value for which the "$\ell^p$ norm" is convex (and therefore actually a norm), lasso is, in some sense, the best convex approximation to the best subset selection problem, since the region defined by $\|x\|_1 \leq t$ is the convex hull of the region defined by $\|x\|_p \leq t$ for p < 1.
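A quick numerical check (with illustrative vectors) shows the triangle inequality failing for p < 1, which is why these are only quasinorms:

```python
import numpy as np

def lp(x, p):
    """The 'ell^p norm' (only a quasinorm for p < 1)."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
p = 0.5
print(lp(u + v, p), ">", lp(u, p) + lp(v, p))  # 4.0 > 2.0: triangle inequality fails
```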

A number of lasso variants have been created in order to remedy certain limitations of the original technique and to make the method more useful for particular problems. Almost all of these focus on respecting or utilizing different types of dependencies among the covariates. Elastic net regularization adds an additional ridge-like penalty that improves performance when the number of predictors is larger than the sample size, allows the method to select strongly correlated variables together, and improves overall prediction accuracy.[6] Group lasso allows groups of related covariates to be selected as a single unit, which can be useful in settings where it does not make sense to include some covariates without others.[7] Further extensions of group lasso to perform variable selection within individual groups (sparse group lasso) and to allow overlap between groups (overlap group lasso) have also been developed.[8][9] Fused lasso can account for the spatial or temporal characteristics of a problem, resulting in estimates that better match the structure of the system being studied.[10] Lasso-regularized models can be fit using a variety of techniques including subgradient methods, least-angle regression (LARS), and proximal gradient methods. Determining the optimal value for the regularization parameter is an important part of ensuring that the model performs well; it is typically chosen using cross-validation.

In 2005, Zou and Hastie introduced the elastic net to address several shortcomings of lasso.[6] When p > n (the number of covariates is greater than the sample size) lasso can select only n covariates (even when more are associated with the outcome) and it tends to select only one covariate from any set of highly correlated covariates. Additionally, even when n > p, if the covariates are strongly correlated, ridge regression tends to perform better.

The elastic net extends lasso by adding a ridge-like $\ell^2$ penalty term, minimizing

$$\frac{1}{N} \left\| y - X\beta \right\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2,$$

so the result of the elastic net penalty is a combination of the effects of the lasso and ridge penalties.

Returning to the general case, the fact that the penalty function is now strictly convex means that if $x_{(j)} = x_{(k)}$, then $\hat{\beta}_j = \hat{\beta}_k$, which is a change from lasso.[6] In general, if $\hat{\beta}_j \hat{\beta}_k > 0$, then

$$\left| \hat{\beta}_j - \hat{\beta}_k \right| \leq \frac{\|y\|_1}{\lambda_2} \sqrt{2 \left( 1 - \rho_{jk} \right)},$$

where $\rho_{jk} = x_{(j)}^T x_{(k)}$ is the sample correlation, because the x's are normalized.

Therefore, highly correlated covariates will tend to have similar regression coefficients, with the degree of similarity depending on both $\|y\|_1$ and $\lambda_2$, which is very different from lasso. This phenomenon, in which strongly correlated covariates have similar regression coefficients, is referred to as the grouping effect and is generally considered desirable since, in many applications, such as identifying genes associated with a disease, one would like to find all the associated covariates, rather than selecting only one from each set of strongly correlated covariates, as lasso often does.[6] In addition, selecting only a single covariate from each group will typically result in increased prediction error, since the model is less robust (which is why ridge regression often outperforms lasso).
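The grouping effect can be seen in a small sketch with three nearly identical covariates (the data and penalty settings are illustrative; this uses scikit-learn's Lasso and ElasticNet estimators):

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(2)
z = rng.normal(size=200)
# three strongly correlated copies of one underlying signal
X = np.column_stack([z + 0.01 * rng.normal(size=200) for _ in range(3)])
y = z + rng.normal(scale=0.5, size=200)

print(Lasso(alpha=0.1).fit(X, y).coef_)
# lasso tends to give most or all of the weight to a single covariate
print(ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y).coef_)
# elastic net tends to spread similar weight across all three
```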

In 2006, Yuan and Lin introduced the group lasso in order to allow predefined groups of covariates to be selected into or out of a model together, so that all the members of a particular group are either included or not included.[7] While there are many settings in which this is useful, perhaps the most obvious is when levels of a categorical variable are coded as a collection of binary covariates. In this case, it often does not make sense to include only a few levels of the covariate; the group lasso can ensure that all the variables encoding the categorical covariate are either included in or excluded from the model together. Another setting in which grouping is natural is in biological studies. Since genes and proteins often lie in known pathways, an investigator may be more interested in which pathways are related to an outcome than whether particular individual genes are. The objective function for the group lasso is a natural generalization of the standard lasso objective:

$$\min_{\beta} \left\{ \frac{1}{N} \left\| y - \sum_{j=1}^J X_j \beta_j \right\|_2^2 + \lambda \sum_{j=1}^J \left\| \beta_j \right\|_{K_j} \right\}, \qquad \left\| \beta_j \right\|_{K_j} = \left( \beta_j^T K_j \beta_j \right)^{1/2},$$

where the design matrix $X$ and covariate vector $\beta$ have been replaced by a collection of design matrices $X_j$ and covariate vectors $\beta_j$, one for each of the J groups. Additionally, the penalty term is now a sum over $\ell^2$ norms defined by the positive definite matrices $K_j$. If each covariate is in its own group and $K_j = I$, then this reduces to the standard lasso, while if there is only a single group and $K_1 = I$, it reduces to ridge regression. Since the penalty reduces to an $\ell^2$ norm on the subspace defined by each group, it cannot select out only some of the covariates from a group, just as ridge regression cannot. However, because the penalty is the sum over the different subspace norms, as in the standard lasso, the constraint has some non-differentiable points, which correspond to some subspaces being identically zero. Therefore, it can set the coefficient vectors corresponding to some subspaces to zero, while only shrinking others. It is possible to extend the group lasso to the so-called sparse group lasso, which can select individual covariates within a group, by adding an additional $\ell^1$ penalty to each group subspace.[8] Another extension, the overlap group lasso, allows covariates to be shared between different groups, e.g. if a gene were to occur in two pathways.[9]
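For $K_j = I$, the group-lasso penalty corresponds to the block soft-thresholding proximal operator, sketched below (the function name and test values are illustrative): the whole group is shrunk together, and zeroed if its norm is small enough.

```python
import numpy as np

def group_soft_threshold(beta_j, alpha):
    """Proximal operator of alpha * ||beta_j||_2 (the K_j = I case): shrink the
    whole group toward zero, setting it exactly to zero if its norm is <= alpha."""
    norm = np.linalg.norm(beta_j)
    if norm <= alpha:
        return np.zeros_like(beta_j)
    return (1.0 - alpha / norm) * beta_j

print(group_soft_threshold(np.array([3.0, 4.0]), 1.0))  # shrunk: [2.4, 3.2]
print(group_soft_threshold(np.array([0.3, 0.4]), 1.0))  # zeroed: [0.0, 0.0]
```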

In some cases, the object being studied may have important spatial or temporal structure that must be accounted for during analysis, such as time series or image-based data. In 2005, Tibshirani and colleagues introduced the fused lasso to extend the use of lasso to exactly this type of data.[10] The fused lasso objective function is

$$\min_{\beta} \left\{ \frac{1}{N} \sum_{i=1}^N \left( y_i - x_i^T \beta \right)^2 \right\} \text{ subject to } \sum_{j=1}^p |\beta_j| \leq t_1 \text{ and } \sum_{j=2}^p |\beta_j - \beta_{j-1}| \leq t_2.$$

The first constraint is just the typical lasso constraint, but the second directly penalizes large changes with respect to the temporal or spatial structure, which forces the coefficients to vary smoothly in a way that reflects the underlying logic of the system being studied. Clustered lasso[11] is a generalization of fused lasso that identifies and groups relevant covariates based on their effects (coefficients). The basic idea is to penalize the differences between the coefficients so that nonzero ones form clusters. This can be modeled using the following regularization:

$$\sum_{1 \leq j < k \leq p} \left| \beta_j - \beta_k \right| \leq t_2.$$
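As a small illustration (with a hypothetical coefficient vector), the fused-lasso fusion term and the clustered-lasso pairwise term can be evaluated directly:

```python
import numpy as np

beta = np.array([0.0, 0.0, 2.0, 2.1, 2.0, 0.0])  # hypothetical piecewise-constant coefficients

lasso_term = np.sum(np.abs(beta))                    # the usual ell^1 penalty
fusion_term = np.sum(np.abs(np.diff(beta)))          # fused lasso: jumps between neighbors
pairwise = sum(abs(beta[j] - beta[k])                # clustered lasso: all pairwise differences
               for j in range(len(beta)) for k in range(j + 1, len(beta)))
print(lasso_term, fusion_term, pairwise)
```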

(Figure: an example of a PQSQ (piecewise quadratic function of subquadratic growth) potential function $u(x)$, with majorant function $f(x) = x$ and trimming after $r_3$; PQSQ-regularized regression can work just as $\ell^1$-norm lasso does.[13])

Lasso, elastic net, group and fused lasso construct their penalty functions from the $\ell^1$ and $\ell^2$ norms (with weights, if necessary). Bridge regression uses general $\ell^p$ norms ($p \geq 1$) and quasinorms ($0 < p < 1$).[14] For example, for p = 1/2 the analogue of the lasso objective in the Lagrangian form is to solve

$$\min_{\beta} \left\{ \frac{1}{N} \left\| y - X\beta \right\|_2^2 + \lambda \sum_{j=1}^p \sqrt{|\beta_j|} \right\}.$$

It is claimed that the fractional quasi-norms $\ell^p$ ($0 < p < 1$) provide more meaningful results in data analysis from both the theoretical and the empirical perspective.[15] However, the non-convexity of these quasi-norms complicates the optimization problem. To address this, an expectation-minimization procedure has been developed[16] and implemented[13] for minimizing such penalized objectives, based on piecewise quadratic (PQSQ) approximations of the penalty.

Though the lasso penalty is not differentiable, a wide variety of techniques from convex analysis and optimization theory have been developed to extremize such functions. These include coordinate descent,[17] subgradient methods, least-angle regression (LARS), and proximal gradient methods.[18] Subgradient methods are the natural generalization of traditional methods such as gradient descent and stochastic gradient descent to the case in which the objective function is not differentiable at all points. LARS is a method that is closely tied to lasso models, and in many cases allows them to be fit very efficiently, though it may not perform well in all circumstances. Proximal methods have become popular because of their flexibility and performance and are an area of active research. The choice of method will depend on the particular lasso variant being used, the data, and the available resources. However, proximal methods generally perform well in most circumstances.
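As an illustration of the proximal gradient approach, below is a minimal ISTA-style sketch for the Lagrangian lasso objective $\frac{1}{N} \|y - X\beta\|_2^2 + \lambda \|\beta\|_1$ (the function name, step-size choice, and iteration count are illustrative assumptions, not a production implementation):

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Proximal gradient (ISTA) sketch for (1/N)||y - X b||_2^2 + lam * ||b||_1."""
    N, p = X.shape
    beta = np.zeros(p)
    L = 2.0 * np.linalg.norm(X, 2) ** 2 / N    # Lipschitz constant of the smooth part
    step = 1.0 / L
    for _ in range(n_iter):
        grad = (2.0 / N) * X.T @ (X @ beta - y)                       # gradient of squared loss
        z = beta - step * grad                                        # gradient step
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # proximal (soft-threshold) step
    return beta
```

Each iteration is a gradient step on the smooth squared-error term followed by soft thresholding, which is exactly the proximal operator of the $\ell^1$ penalty.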

In addition to fitting the parameters, choosing the regularization parameter is a fundamental part of using lasso. Selecting it well is essential to the performance of lasso, since it controls the strength of shrinkage and variable selection, which, in moderation, can improve both prediction accuracy and interpretability. However, if the regularization becomes too strong, important variables may be left out of the model and coefficients may be shrunk excessively, which can harm both predictive capacity and the inferences drawn about the system being studied. LARS is unique in this regard, as it generates complete regularization paths, which makes determining the optimal value of the regularization parameter much more straightforward.[18] With other methods, cross-validation is typically used to select the parameter. Additionally, a variety of heuristics related to choosing the regularization and optimization parameters are often used in an attempt to improve performance further.
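In scikit-learn, for example, cross-validated selection of the penalty can be written as follows (the data and fold count are illustrative):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 10))
y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=100)

model = LassoCV(cv=5).fit(X, y)  # 5-fold cross-validation over an automatic grid of penalties
print("selected penalty:", model.alpha_)
print("nonzero coefficients:", np.count_nonzero(model.coef_))
```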