The following are a set of methods intended for regression in which
the target value is expected to be a linear combination of the input
variables. In mathematical notation, if \(\hat{y}\) is the predicted
value, the model has the form:

\[\hat{y}(w, x) = w_0 + w_1 x_1 + ... + w_p x_p\]

Across the module, we designate the vector \(w = (w_1,
..., w_p)\) as coef_ and \(w_0\) as intercept_.

LinearRegression fits a linear model with coefficients
\(w = (w_1, ..., w_p)\) to minimize the residual sum
of squares between the observed responses in the dataset, and the
responses predicted by the linear approximation. Mathematically it
solves a problem of the form:

\[\min_{w} {|| X w - y||_2}^2\]

LinearRegression will take in its fit method arrays X, y
and will store the coefficients \(w\) of the linear model in its
coef_ member:
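The snippet below is a minimal, illustrative sketch (the toy data are arbitrary):

>>> from sklearn import linear_model
>>> reg = linear_model.LinearRegression()
>>> reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
>>> reg.coef_        # the fitted w_1, ..., w_p
>>> reg.intercept_   # the fitted w_0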

However, coefficient estimates for Ordinary Least Squares rely on the
independence of the model terms. When terms are correlated and the
columns of the design matrix \(X\) have an approximate linear
dependence, the design matrix becomes close to singular
and as a result, the least-squares estimate becomes highly sensitive
to random errors in the observed response, producing a large
variance. This situation of multicollinearity can arise, for
example, when data are collected without an experimental design.

Ridge regression addresses some of the problems of
Ordinary Least Squares by imposing a penalty on the size of
coefficients. The ridge coefficients minimize a penalized residual sum
of squares,

\[\min_{w} {{|| X w - y||_2}^2 + \alpha {||w||_2}^2}\]

Here, \(\alpha \geq 0\) is a complexity parameter that controls the amount
of shrinkage: the larger the value of \(\alpha\), the greater the amount
of shrinkage and thus the coefficients become more robust to collinearity.

As with other linear models, Ridge will take in its fit method
arrays X, y and will store the coefficients \(w\) of the linear model in
its coef_ member:
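For instance (a minimal sketch; the alpha value and toy data are arbitrary):

>>> from sklearn import linear_model
>>> reg = linear_model.Ridge(alpha=0.5)
>>> reg.fit([[0, 0], [0, 0], [1, 1]], [0, .1, 1])
>>> reg.coef_
>>> reg.intercept_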

RidgeCV implements ridge regression with built-in
cross-validation of the alpha parameter. The object works in the same way
as GridSearchCV except that it defaults to Generalized Cross-Validation
(GCV), an efficient form of leave-one-out cross-validation:
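A minimal sketch (the candidate alphas below are arbitrary):

>>> from sklearn import linear_model
>>> reg = linear_model.RidgeCV(alphas=[0.1, 1.0, 10.0])
>>> reg.fit([[0, 0], [0, 0], [1, 1]], [0, .1, 1])
>>> reg.alpha_   # the regularization strength selected by cross-validation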

The Lasso is a linear model that estimates sparse coefficients.
It is useful in some contexts due to its tendency to prefer solutions
with fewer parameter values, effectively reducing the number of variables
upon which the given solution is dependent. For this reason, the Lasso
and its variants are fundamental to the field of compressed sensing.
Under certain conditions, it can recover the exact set of non-zero
weights (see
Compressive sensing: tomography reconstruction with L1 prior (Lasso)).

Mathematically, it consists of a linear model trained with \(\ell_1\) prior
as regularizer. The objective function to minimize is:
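\[\min_{w} { \frac{1}{2n_{\text{samples}}} ||X w - y||_2 ^ 2 + \alpha ||w||_1}\]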

The lasso estimate thus solves the minimization of the
least-squares penalty with \(\alpha ||w||_1\) added, where
\(\alpha\) is a constant and \(||w||_1\) is the \(\ell_1\)-norm of
the parameter vector.

The implementation in the class Lasso uses coordinate descent as
the algorithm to fit the coefficients. See Least Angle Regression
for another implementation:
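The example below is a minimal sketch (alpha value and toy data are arbitrary):

>>> from sklearn import linear_model
>>> reg = linear_model.Lasso(alpha=0.1)
>>> reg.fit([[0, 0], [1, 1]], [0, 1])
>>> reg.predict([[1, 1]])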

For high-dimensional datasets with many collinear regressors,
LassoCV is most often preferable. However, LassoLarsCV has
the advantage of exploring more relevant values of the alpha parameter, and
if the number of samples is very small compared to the number of
features, it is often faster than LassoCV.

Alternatively, the estimator LassoLarsIC proposes to use the
Akaike Information Criterion (AIC) and the Bayes Information Criterion (BIC).
It is a computationally cheaper alternative for finding the optimal value of alpha,
as the regularization path is computed only once instead of k+1 times
when using k-fold cross-validation. However, such criteria need a
proper estimation of the degrees of freedom of the solution, are
derived for large samples (asymptotic results) and assume the model
is correct, i.e. that the data are actually generated by this model.
They also tend to break when the problem is badly conditioned
(more features than samples).

The equivalence between alpha and the regularization parameter of SVM,
C is given by alpha=1/C or alpha=1/(n_samples*C),
depending on the estimator and the exact objective function optimized by the
model.

The MultiTaskLasso is a linear model that estimates sparse
coefficients for multiple regression problems jointly: y is a 2D array
of shape (n_samples, n_tasks). The constraint is that the selected
features are the same for all the regression problems, also called tasks.

The following figure compares the location of the non-zeros in W obtained
with a simple Lasso or a MultiTaskLasso. The Lasso estimates yield
scattered non-zeros while the non-zeros of the MultiTaskLasso are full
columns.

Fitting a time-series model, imposing that any active feature be active at all times.

ElasticNet is a linear regression model trained with L1 and L2 prior
as regularizer. This combination allows for learning a sparse model where
few of the weights are non-zero like Lasso, while still maintaining
the regularization properties of Ridge. We control the convex
combination of L1 and L2 using the l1_ratio parameter.
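For example, a minimal sketch of setting this trade-off (the alpha and l1_ratio values below are arbitrary; l1_ratio=1 corresponds to a pure L1 penalty, l1_ratio=0 to a pure L2 penalty):

>>> from sklearn.linear_model import ElasticNet
>>> reg = ElasticNet(alpha=0.1, l1_ratio=0.7)
>>> reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
>>> reg.coef_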

Elastic-net is useful when there are multiple features which are
correlated with one another. Lasso is likely to pick one of these
at random, while elastic-net is likely to pick both.

A practical advantage of trading off between Lasso and Ridge is that it allows
Elastic-Net to inherit some of Ridge’s stability under rotation.

The MultiTaskElasticNet is an elastic-net model that estimates sparse
coefficients for multiple regression problems jointly: Y is a 2D array
of shape (n_samples, n_tasks). The constraint is that the selected
features are the same for all the regression problems, also called tasks.

Mathematically, it consists of a linear model trained with a mixed
\(\ell_1\) \(\ell_2\) prior and an \(\ell_2\) prior as regularizer.
The objective function to minimize is:
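\[\min_{W} { \frac{1}{2n_{\text{samples}}} ||X W - Y||_{\text{Fro}}^2 + \alpha \rho ||W||_{2 1} +
\frac{\alpha(1-\rho)}{2} ||W||_{\text{Fro}}^2}\]

where \(\text{Fro}\) indicates the Frobenius norm, \(||W||_{2 1} = \sum_i \sqrt{\sum_j w_{ij}^2}\), and \(\rho\) is the l1_ratio parameter.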

Least-angle regression (LARS) is a regression algorithm for
high-dimensional data, developed by Bradley Efron, Trevor Hastie, Iain
Johnstone and Robert Tibshirani. LARS is similar to forward stepwise
regression. At each step, it finds the predictor most correlated with the
response. When there are multiple predictors having equal correlation, instead
of continuing along the same predictor, it proceeds in a direction equiangular
between the predictors.

The advantages of LARS are:

It is numerically efficient in contexts where p >> n (i.e., when the
number of dimensions is significantly greater than the number of
points)

It is computationally just as fast as forward selection and has
the same order of complexity as ordinary least squares.

It produces a full piecewise linear solution path, which is
useful in cross-validation or similar attempts to tune the model.

If two variables are almost equally correlated with the response,
then their coefficients should increase at approximately the same
rate. The algorithm thus behaves as intuition would expect, and
also is more stable.

It is easily modified to produce solutions for other estimators,
like the Lasso.

The disadvantages of the LARS method include:

Because LARS is based upon an iterative refitting of the
residuals, it would appear to be especially sensitive to the
effects of noise. This problem is discussed in detail by Weisberg
in the discussion section of the Efron et al. (2004) Annals of
Statistics article.

The LARS model can be used via the estimator Lars, or its
low-level implementation lars_path.

LassoLars is a lasso model implemented using the LARS
algorithm, and unlike the implementation based on coordinate descent,
this yields the exact solution, which is piecewise linear as a
function of the norm of its coefficients.

The algorithm is similar to forward stepwise regression, but instead
of including variables at each step, the estimated parameters are
increased in a direction equiangular to each one’s correlations with
the residual.

Instead of giving a vector result, the LARS solution consists of a
curve denoting the solution for each value of the L1 norm of the
parameter vector. The full coefficient path is stored in the array
coef_path_, which has size (n_features, max_features + 1). The first
column is always zero.
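A minimal sketch (alpha value and toy data are arbitrary):

>>> from sklearn import linear_model
>>> reg = linear_model.LassoLars(alpha=0.1)
>>> reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
>>> reg.coef_          # coefficients at the end of the path
>>> reg.coef_path_     # full path, one column per step (first column all zeros)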

OrthogonalMatchingPursuit and orthogonal_mp implement the OMP
algorithm for approximating the fit of a linear model with constraints imposed
on the number of non-zero coefficients (i.e. the \(\ell_0\) pseudo-norm).

Being a forward feature selection method like Least Angle Regression,
orthogonal matching pursuit can approximate the optimum solution vector with a
fixed number of non-zero elements:
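\[\underset{\gamma}{\operatorname{arg\,min\,}} ||y - X\gamma||_2^2 \text{ subject to } ||\gamma||_0 \leq n_{\text{nonzero\_coefs}}\]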

OMP is based on a greedy algorithm that includes at each step the atom most
highly correlated with the current residual. It is similar to the simpler
matching pursuit (MP) method, but better in that at each iteration, the
residual is recomputed using an orthogonal projection on the space of the
previously chosen dictionary elements.

Bayesian regression techniques can be used to include regularization
parameters in the estimation procedure: the regularization parameter is
not set in a hard sense but tuned to the data at hand.

This can be done by introducing uninformative priors
over the hyperparameters of the model.
The \(\ell_{2}\) regularization used in Ridge Regression is equivalent
to finding a maximum a posteriori estimate under a Gaussian prior over the
parameters \(w\) with precision \(\lambda\). Instead of setting
\(\lambda\) manually, it is possible to treat it as a random variable to be
estimated from the data.

To obtain a fully probabilistic model, the output \(y\) is assumed
to be Gaussian distributed around \(X w\):

\[p(y|X,w,\alpha) = \mathcal{N}(y|X w,\alpha)\]

Alpha is again treated as a random variable that is to be estimated from the
data.

The advantages of Bayesian Regression are:

It adapts to the data at hand.

It can be used to include regularization parameters in the
estimation procedure.

The disadvantages of Bayesian regression include:

Inference of the model can be time consuming.

References

A good introduction to Bayesian methods is given in C. Bishop: Pattern
Recognition and Machine Learning.

The original algorithm is detailed in the book Bayesian Learning for Neural
Networks by Radford M. Neal.

BayesianRidge estimates a probabilistic model of the
regression problem as described above.
The prior for the parameter \(w\) is given by a spherical Gaussian:

\[p(w|\lambda) =
\mathcal{N}(w|0,\lambda^{-1}\mathbf{I}_{p})\]

The priors over \(\alpha\) and \(\lambda\) are chosen to be gamma
distributions, the
conjugate prior for the precision of the Gaussian.

The resulting model is called Bayesian Ridge Regression, and is similar to the
classical Ridge. The parameters \(w\), \(\alpha\) and
\(\lambda\) are estimated jointly during the fit of the model. The
remaining hyperparameters are the parameters of the gamma priors over
\(\alpha\) and \(\lambda\). These are usually chosen to be
non-informative. The parameters are estimated by maximizing the marginal
log likelihood.
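A minimal sketch (toy data, default hyperparameters):

>>> from sklearn import linear_model
>>> X = [[0., 0.], [1., 1.], [2., 2.], [3., 3.]]
>>> y = [0., 1., 2., 3.]
>>> reg = linear_model.BayesianRidge()
>>> reg.fit(X, y)
>>> reg.coef_
>>> reg.predict([[1., 0.]])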

ARD regression is very similar to Bayesian Ridge Regression, but drops the
assumption that the Gaussian prior over \(w\) is spherical. Instead, the
distribution over \(w\) is assumed to be an axis-parallel, elliptical
Gaussian distribution.

This means each weight \(w_{i}\) is drawn from a Gaussian distribution,
centered on zero and with a precision \(\lambda_{i}\):

\[p(w|\lambda) = \mathcal{N}(w|0,A^{-1})\]

with \(diag \; (A) = \lambda = \{\lambda_{1},...,\lambda_{p}\}\).

In contrast to Bayesian Ridge Regression, each coordinate \(w_{i}\)
has its own precision \(\lambda_i\). The prior over all
\(\lambda_i\) is chosen to be the same gamma distribution, given by the
hyperparameters \(\lambda_1\) and \(\lambda_2\).

ARD is also known in the literature as Sparse Bayesian Learning and the
Relevance Vector Machine [3] [4].

Logistic regression, despite its name, is a linear model for classification
rather than regression. Logistic regression is also known in the literature as
logit regression, maximum-entropy classification (MaxEnt)
or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.

The implementation of logistic regression in scikit-learn can be accessed from
class LogisticRegression. This implementation can fit binary,
One-vs-Rest, or multinomial logistic regression with optional L2 or L1
regularization.
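For reference, with labels \(y_i \in \{-1, 1\}\), the L2-penalized binary problem takes the following form (a sketch of the cost function, with C the inverse of the regularization strength):

\[\min_{w, c} \frac{1}{2}w^T w + C \sum_{i=1}^{n} \log(\exp(- y_i (X_i^T w + c)) + 1)\]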

Note that, in this notation, it’s assumed that the observation \(y_i\) takes values in the set
\(\{-1, 1\}\) at trial \(i\).

The solvers implemented in the class LogisticRegression
are “liblinear”, “newton-cg”, “lbfgs”, “sag” and “saga”:

The solver “liblinear” uses a coordinate descent (CD) algorithm, and relies
on the excellent C++ LIBLINEAR library, which is shipped with
scikit-learn. However, the CD algorithm implemented in liblinear cannot learn
a true multinomial (multiclass) model; instead, the optimization problem is
decomposed in a “one-vs-rest” fashion so separate binary classifiers are
trained for all classes. This happens under the hood, so
LogisticRegression instances using this solver behave as multiclass
classifiers. For L1 penalization, sklearn.svm.l1_min_c can be used to
calculate the lower bound for C in order to obtain a non-“null” (all feature
weights equal to zero) model.

The “lbfgs”, “sag” and “newton-cg” solvers only support L2 penalization and
are found to converge faster for some high dimensional data. Setting
multi_class to “multinomial” with these solvers learns a true multinomial
logistic regression model [5], which means that its probability estimates
should be better calibrated than the default “one-vs-rest” setting.

The “sag” solver uses a Stochastic Average Gradient descent [6]. It is faster
than other solvers for large datasets, when both the number of samples and the
number of features are large.

The “saga” solver [7] is a variant of “sag” that also supports the
non-smooth penalty=”l1” option. This is therefore the solver of choice
for sparse multinomial logistic regression.

The “lbfgs” solver is an optimization algorithm that approximates the
Broyden–Fletcher–Goldfarb–Shanno algorithm [8], which belongs to the family of
quasi-Newton methods. It is recommended for
small datasets, but for larger datasets its performance suffers. [9]

The following table summarizes the penalties supported by each solver:

                                 ‘liblinear’   ‘lbfgs’   ‘newton-cg’   ‘sag’   ‘saga’

Penalties
  Multinomial + L2 penalty       no            yes       yes           yes     yes
  OVR + L2 penalty               yes           yes       yes           yes     yes
  Multinomial + L1 penalty       no            no        no            no      yes
  OVR + L1 penalty               yes           no        no            no      yes

Behaviors
  Penalize the intercept (bad)   yes           no        no            no      no
  Faster for large datasets      no            no        no            yes     yes
  Robust to unscaled datasets    yes           yes       yes           no      no

The “lbfgs” solver is used by default for its robustness. For large datasets
the “saga” solver is usually faster.
For very large datasets, you may also consider using SGDClassifier
with the ‘log’ loss, which might be even faster but requires more tuning.

There might be a difference in the scores obtained between
LogisticRegression with solver=liblinear
or LinearSVC and the external liblinear library directly,
when fit_intercept=False and the fitted coef_ (or the data to
be predicted) are zeros. This is because, for the sample(s) with
a decision_function of zero, LogisticRegression and LinearSVC
predict the negative class, while liblinear predicts the positive class.
Note that a model with fit_intercept=False and many samples with
a decision_function of zero is likely to be an underfit, bad model, and you are
advised to set fit_intercept=True and increase intercept_scaling.

Note

Feature selection with sparse logistic regression

A logistic regression with L1 penalty yields sparse models, and can
thus be used to perform feature selection, as detailed in
L1-based feature selection.

LogisticRegressionCV implements Logistic Regression with
built-in cross-validation to find the optimal C parameter.
The “newton-cg”, “sag”, “saga” and “lbfgs” solvers are found to be faster
for high-dimensional dense data, due to warm-starting. For the
multiclass case, if multi_class option is set to “ovr”, an optimal C
is obtained for each class and if the multi_class option is set to
“multinomial”, an optimal C is obtained by minimizing the cross-entropy
loss.
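A minimal sketch on a small built-in dataset (the Cs, cv and max_iter values below are arbitrary; max_iter is raised to avoid convergence warnings):

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegressionCV(Cs=10, cv=5, max_iter=1000).fit(X, y)
>>> clf.C_   # the C value(s) selected by cross-validation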

Stochastic gradient descent is a simple yet very efficient approach
to fit linear models. It is particularly useful when the number of samples
(and the number of features) is very large.
The partial_fit method allows online/out-of-core learning.

The classes SGDClassifier and SGDRegressor provide
functionality to fit linear models for classification and regression
using different (convex) loss functions and different penalties.
E.g., with loss="log", SGDClassifier
fits a logistic regression model,
while with loss="hinge" it fits a linear support vector machine (SVM).
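For instance, a minimal sketch of the two losses mentioned above (toy data; other hyperparameters left at their defaults):

>>> from sklearn.linear_model import SGDClassifier
>>> X = [[0., 0.], [1., 1.]]
>>> y = [0, 1]
>>> SGDClassifier(loss="log").fit(X, y)      # logistic regression
>>> SGDClassifier(loss="hinge").fit(X, y)    # linear SVM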

The passive-aggressive algorithms are a family of algorithms for large-scale
learning. They are similar to the Perceptron in that they do not require a
learning rate. However, contrary to the Perceptron, they include a
regularization parameter C.

RANSAC is a non-deterministic algorithm producing only a reasonable result with
a certain probability, which is dependent on the number of iterations (see
max_trials parameter). It is typically used for linear and non-linear
regression problems and is especially popular in the fields of photogrammetric
computer vision.

The algorithm splits the complete input sample data into a set of inliers,
which may be subject to noise, and outliers, which are e.g. caused by erroneous
measurements or invalid hypotheses about the data. The resulting model is then
estimated only from the determined inliers.

Each iteration performs the following steps:

1. Select min_samples random samples from the original data and check
whether the set of data is valid (see is_data_valid).

2. Fit a model to the random subset (base_estimator.fit) and check
whether the estimated model is valid (see is_model_valid).

3. Classify all data as inliers or outliers by calculating the residuals
to the estimated model (base_estimator.predict(X) - y). All data
samples with absolute residuals smaller than the residual_threshold
are considered inliers.

4. Save the fitted model as the best model if the number of inlier samples is
maximal. In case the current estimated model has the same number of
inliers, it is only considered the best model if it has a better score.

These steps are performed either a maximum number of times (max_trials) or
until one of the special stop criteria are met (see stop_n_inliers and
stop_score). The final model is estimated using all inlier samples (consensus
set) of the previously determined best model.

The is_data_valid and is_model_valid functions allow one to identify and reject
degenerate combinations of random sub-samples. If the estimated model is not
needed for identifying degenerate cases, is_data_valid should be used, as it
is called prior to fitting the model and thus leads to better computational
performance.
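A minimal sketch of the estimator on synthetic data with a few corrupted responses (the dataset parameters are arbitrary):

>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import RANSACRegressor
>>> X, y = make_regression(n_samples=100, n_features=1, noise=1.0, random_state=0)
>>> y[:10] += 100                      # corrupt a few samples to act as outliers
>>> ransac = RANSACRegressor(random_state=0).fit(X, y)
>>> ransac.estimator_.coef_            # model estimated from the consensus set
>>> ransac.inlier_mask_                # boolean mask of the detected inliers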

The TheilSenRegressor estimator uses a generalization of the median in
multiple dimensions. It is thus robust to multivariate outliers. Note however
that the robustness of the estimator decreases quickly with the dimensionality
of the problem. It loses its robustness properties and becomes no
better than ordinary least squares in high dimensions.

TheilSenRegressor is comparable to the Ordinary Least Squares
(OLS) in terms of asymptotic efficiency and as an
unbiased estimator. In contrast to OLS, Theil-Sen is a non-parametric
method which means it makes no assumption about the underlying
distribution of the data. Since Theil-Sen is a median-based estimator, it
is more robust against corrupted data, i.e. outliers. In a univariate
setting, Theil-Sen has a breakdown point of about 29.3% in the case of
simple linear regression, which means that it can tolerate arbitrarily
corrupted data of up to 29.3%.

The implementation of TheilSenRegressor in scikit-learn follows a
generalization to a multivariate linear regression model [10] using the
spatial median which is a generalization of the median to multiple
dimensions [11].

In terms of time and space complexity, Theil-Sen scales according to

\[\binom{n_{samples}}{n_{subsamples}}\]

which makes it infeasible to be applied exhaustively to problems with a
large number of samples and features. Therefore, the magnitude of a
subpopulation can be chosen to limit the time and space complexity by
considering only a random subset of all possible combinations.
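In scikit-learn this trade-off is exposed through the max_subpopulation parameter; a minimal sketch (the value and toy data below are arbitrary):

>>> from sklearn.linear_model import TheilSenRegressor
>>> reg = TheilSenRegressor(max_subpopulation=5000, random_state=0)
>>> reg.fit([[0.], [1.], [2.], [3.]], [0., 1., 2., 3.])
>>> reg.coef_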

The HuberRegressor is different from Ridge because it applies a
linear loss to samples that are classified as outliers.
A sample is classified as an inlier if the absolute error of that sample is
less than a certain threshold. It differs from TheilSenRegressor
and RANSACRegressor because it does not ignore the effect of the outliers
but gives them a lesser weight.

HuberRegressor is scaling invariant. Once epsilon is set, scaling X and y
down or up by different values produces the same robustness to outliers as before,
as compared to SGDRegressor, where epsilon has to be set again when X and y are
scaled.

HuberRegressor should be more efficient to use on data with a small number of
samples, while SGDRegressor needs a number of passes over the training data to
produce the same robustness.

Also, this estimator is different from the R implementation of Robust Regression
(http://www.ats.ucla.edu/stat/r/dae/rreg.htm) because the R implementation performs weighted least
squares, with weights given to each sample on the basis of how much the residual is
greater than a certain threshold.

One common pattern within machine learning is to use linear models trained
on nonlinear functions of the data. This approach maintains the generally
fast performance of linear methods, while allowing them to fit a much wider
range of data.

For example, a simple linear regression can be extended by constructing
polynomial features from the input features. In the standard linear
regression case, you might have a model that looks like this for
two-dimensional data:

\[\hat{y}(w, x) = w_0 + w_1 x_1 + w_2 x_2\]

If we want to fit a paraboloid to the data instead of a plane, we can combine
the features in second-order polynomials, so that the model looks like this:
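\[\hat{y}(w, x) = w_0 + w_1 x_1 + w_2 x_2 + w_3 x_1 x_2 + w_4 x_1^2 + w_5 x_2^2\]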

We see that the resulting polynomial regression is in the same class of
linear models we’d considered above (i.e. the model is linear in \(w\))
and can be solved by the same techniques. By considering linear fits within
a higher-dimensional space built with these basis functions, the model has the
flexibility to fit a much broader range of data.

Here is an example of applying this idea to one-dimensional data, using
polynomial features of varying degrees:

This figure is created using the PolynomialFeatures preprocessor.
This preprocessor transforms an input data matrix into a new data matrix
of a given degree. It can be used as follows:
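The snippet below is an illustrative sketch; the input matrix and the degree are arbitrary:

>>> import numpy as np
>>> from sklearn.preprocessing import PolynomialFeatures
>>> X = np.arange(6).reshape(3, 2)
>>> poly = PolynomialFeatures(degree=2)
>>> poly.fit_transform(X)   # columns: 1, x1, x2, x1^2, x1*x2, x2^2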

The linear model trained on polynomial features is able to exactly recover
the input polynomial coefficients.

In some cases it’s not necessary to include higher powers of any single feature,
but only the so-called interaction features
that multiply together at most \(d\) distinct features.
These can be obtained from PolynomialFeatures with the setting
interaction_only=True.

For example, when dealing with boolean features,
\(x_i^n = x_i\) for all \(n\) and is therefore useless;
but \(x_i x_j\) represents the conjunction of two booleans.
This way, we can solve the XOR problem with a linear classifier:
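A minimal sketch of that idea (the Perceptron settings below are one arbitrary choice of linear classifier):

>>> import numpy as np
>>> from sklearn.preprocessing import PolynomialFeatures
>>> from sklearn.linear_model import Perceptron
>>> X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
>>> y = X[:, 0] ^ X[:, 1]                       # XOR of the two boolean features
>>> X = PolynomialFeatures(interaction_only=True).fit_transform(X).astype(int)
>>> clf = Perceptron(fit_intercept=False, max_iter=10, tol=None,
...                  shuffle=False).fit(X, y)
>>> clf.score(X, y)   # the transformed problem is linearly separable; score should be 1.0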