The classes in the sklearn.feature_selection module can be used
for feature selection/dimensionality reduction on sample sets, either to
improve estimators’ accuracy scores or to boost their performance on very
high-dimensional datasets.

VarianceThreshold is a simple baseline approach to feature selection.
It removes all features whose variance doesn’t meet some threshold.
By default, it removes all zero-variance features,
i.e. features that have the same value in all samples.

As an example, suppose that we have a dataset with boolean features,
and we want to remove all features that are either one or zero (on or off)
in more than 80% of the samples.
Boolean features are Bernoulli random variables,
and the variance of such variables is given by Var[X] = p(1 - p),
so we can select features using the threshold .8 * (1 - .8).
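
A minimal sketch of this selection (the toy boolean dataset below is made up
purely for illustration):

    from sklearn.feature_selection import VarianceThreshold

    # Toy dataset of boolean features; the first column is 0 in 5 of 6 samples.
    X = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]]

    # Remove features that are constant in more than 80% of the samples.
    sel = VarianceThreshold(threshold=(.8 * (1 - .8)))
    X_reduced = sel.fit_transform(X)   # drops the first column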

Univariate feature selection works by selecting the best features based on
univariate statistical tests. It can be seen as a preprocessing step
to an estimator. Scikit-learn exposes feature selection routines
as objects that implement the transform method:
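
A minimal sketch of this interface, assuming SelectKBest with the chi2 scoring
function and k=2 on the iris dataset (all of these choices are purely
illustrative):

    from sklearn.datasets import load_iris
    from sklearn.feature_selection import SelectKBest, chi2

    X, y = load_iris(return_X_y=True)
    # Keep the two features with the highest chi-squared scores against the target.
    X_new = SelectKBest(chi2, k=2).fit_transform(X, y)
    print(X.shape, X_new.shape)        # (150, 4) -> (150, 2)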

Given an external estimator that assigns weights to features (e.g., the
coefficients of a linear model), the goal of recursive feature elimination (RFE)
is to select features by recursively considering smaller and smaller sets of
features. First, the estimator is trained on the initial set of features and
weights are assigned to each one of them. Then, the features whose absolute
weights are the smallest are pruned from the current set of features. That
procedure is recursively repeated on the pruned set until the desired number
of features to select is eventually reached.
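
For example, a minimal sketch using a linear SVM on the iris data and keeping
two features (the estimator and the numbers here are arbitrary choices):

    from sklearn.datasets import load_iris
    from sklearn.feature_selection import RFE
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    # Any estimator exposing coef_ or feature_importances_ can drive the elimination.
    selector = RFE(SVC(kernel="linear"), n_features_to_select=2, step=1)
    selector.fit(X, y)
    print(selector.support_)    # boolean mask of the selected features
    print(selector.ranking_)    # rank 1 marks the selected features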

RFECV performs RFE in a cross-validation loop to find the optimal
number of features.
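
A minimal sketch, letting cross-validation pick the number of features (the
estimator and the 5-fold split are arbitrary choices):

    from sklearn.datasets import load_iris
    from sklearn.feature_selection import RFECV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    selector = RFECV(SVC(kernel="linear"), step=1, cv=5)
    selector.fit(X, y)
    print(selector.n_features_)   # number of features judged optimal by cross-validation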

Linear models penalized with the L1 norm have
sparse solutions: many of their estimated coefficients are zero. When the goal
is to reduce the dimensionality of the data to use with another classifier,
they expose a transform method to select the non-zero coefficients. In
particular, sparse estimators useful for this purpose are
linear_model.Lasso for regression, and
linear_model.LogisticRegression and svm.LinearSVC
for classification:
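
A minimal sketch (note that in recent scikit-learn releases this selection is
performed through the SelectFromModel meta-transformer rather than the
estimator's own transform method; the value of C below is purely illustrative):

    from sklearn.datasets import load_iris
    from sklearn.feature_selection import SelectFromModel
    from sklearn.svm import LinearSVC

    X, y = load_iris(return_X_y=True)
    # A stronger penalty (smaller C) drives more coefficients to exactly zero.
    lsvc = LinearSVC(C=0.01, penalty="l1", dual=False).fit(X, y)
    model = SelectFromModel(lsvc, prefit=True)
    X_new = model.transform(X)    # keeps only features with non-zero coefficients
    print(X.shape, X_new.shape)   # e.g. (150, 4) -> (150, 3)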

For a good choice of alpha, the Lasso can fully recover the
exact set of non-zero variables using only a few observations, provided
certain specific conditions are met. In particular, the number of
samples should be “sufficiently large”, or L1 models will perform at
random, where “sufficiently large” depends on the number of non-zero
coefficients, the logarithm of the number of features, the amount of
noise, the smallest absolute value of non-zero coefficients, and the
structure of the design matrix X. In addition, the design matrix must
display certain specific properties, such as not being too correlated.

There is no general rule to select an alpha parameter for recovery of
non-zero coefficients. It can be set by cross-validation
(LassoCV or LassoLarsCV), though this may lead to
under-penalized models: including a small number of non-relevant
variables is not detrimental to prediction score. BIC
(LassoLarsIC) tends, on the contrary, to set high values of
alpha.
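
A minimal sketch comparing the two strategies (the diabetes dataset and the
5-fold split are arbitrary choices):

    from sklearn.datasets import load_diabetes
    from sklearn.linear_model import LassoCV, LassoLarsIC

    X, y = load_diabetes(return_X_y=True)
    # Cross-validated alpha: may under-penalize and keep a few irrelevant variables.
    alpha_cv = LassoCV(cv=5).fit(X, y).alpha_
    # BIC-based alpha: tends to be larger, i.e. a more aggressive penalty.
    alpha_bic = LassoLarsIC(criterion="bic").fit(X, y).alpha_
    print(alpha_cv, alpha_bic)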

The limitation of L1-based sparse models is that, when faced with a group of
very correlated features, they will select only one of them. To mitigate this
problem, it is possible to use randomization techniques: re-estimating the
sparse model many times, perturbing the design matrix or sub-sampling the data,
and counting how many times a given regressor is selected.
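
This idea can be sketched directly with a plain Lasso, re-fit on random halves
of the data (the number of repetitions, the sub-sample size, and alpha below
are all arbitrary):

    import numpy as np
    from sklearn.datasets import load_diabetes
    from sklearn.linear_model import Lasso

    X, y = load_diabetes(return_X_y=True)
    rng = np.random.RandomState(0)
    n_samples, n_features = X.shape
    counts = np.zeros(n_features)

    for _ in range(100):
        # Fit the sparse model on a random half of the samples.
        idx = rng.choice(n_samples, n_samples // 2, replace=False)
        coef = Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_
        counts += coef != 0    # record which coefficients survived the penalty

    selection_frequency = counts / 100   # features selected in most runs are the stable ones
    print(selection_frequency)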

Note that for randomized sparse models to be more powerful than standard
F statistics at detecting non-zero features, the ground-truth model
should be sparse; in other words, only a small fraction of the features
should be non-zero.

Tree-based estimators (see the sklearn.tree module and forests of
trees in the sklearn.ensemble module) can be used to compute
feature importances, which in turn can be used to discard irrelevant
features:
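
A minimal sketch using an ExtraTreesClassifier on the iris data (the choice of
ensemble and its size are illustrative):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.feature_selection import SelectFromModel

    X, y = load_iris(return_X_y=True)
    clf = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.feature_importances_)      # impurity-based importance of each feature

    # Keep the features whose importance is above the mean importance.
    model = SelectFromModel(clf, prefit=True)
    X_new = model.transform(X)
    print(X.shape, X_new.shape)          # e.g. (150, 4) -> (150, 2)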

Feature selection is also commonly used as a preprocessing step within a
pipeline: an estimator such as sklearn.svm.LinearSVC evaluates feature
importances and selects the most relevant features, and a
sklearn.ensemble.RandomForestClassifier is then trained on the transformed
output, i.e. using only the relevant features, as sketched below. You can
perform similar operations with the other feature selection methods, and of
course with any classifier that provides a way to evaluate feature importances.
See the sklearn.pipeline.Pipeline examples for more details.
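
A minimal sketch of such a pipeline (the particular estimators and parameters
are illustrative):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectFromModel
    from sklearn.pipeline import Pipeline
    from sklearn.svm import LinearSVC

    X, y = load_iris(return_X_y=True)
    clf = Pipeline([
        # Step 1: keep only the features with non-zero L1-penalized coefficients.
        ("feature_selection", SelectFromModel(LinearSVC(penalty="l1", dual=False))),
        # Step 2: train the final classifier on the reduced feature set.
        ("classification", RandomForestClassifier(n_estimators=50)),
    ])
    clf.fit(X, y)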