If an integer is given, it fixes the number of points on the
grids of alpha to be used. If a list is given, it gives the
grid to be used. See the notes in the class docstring for
more details.

n_refinements : strictly positive integer

The number of times the grid is refined. Not used if explicit
values of alphas are passed.

cv : cross-validation generator, optional

See the sklearn.cross_validation module. If None is passed, defaults to
a 3-fold strategy.

tol : positive float, optional

The tolerance to declare convergence: if the dual gap goes below
this value, iterations are stopped.

max_iter : integer, optional

Maximum number of iterations.

mode : {'cd', 'lars'}

The Lasso solver to use: coordinate descent or LARS. Use LARS for
very sparse underlying graphs, where the number of features is greater
than the number of samples. Otherwise prefer cd, which is more
numerically stable.

n_jobs : int, optional

Number of jobs to run in parallel (default 1).

verbose : boolean, optional

If verbose is True, the objective function and duality gap are
printed at each iteration.

assume_centered : boolean

If True, data are not centered before computation. This is useful
when working with data whose mean is almost, but not exactly, zero.
If False, data are centered before computation.

The search for the optimal penalization parameter (alpha) is done on an
iteratively refined grid: first the cross-validated scores on a grid are
computed, then a new refined grid is centered around the maximum, and so
on.
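The refinement step described above can be sketched as follows. This is an illustrative sketch, not the scikit-learn implementation; `refine_grid` and its arguments are hypothetical names:

```python
import numpy as np

def refine_grid(scores, alphas, n_points):
    """One refinement step: build a new log-spaced grid of alphas
    centered around the best-scoring value from the previous grid.
    Failed (non-converged) fits are expected to appear as NaN scores."""
    best = int(np.nanargmax(scores))
    # Bracket the maximum with its neighbours on the old grid.
    lo = alphas[max(best - 1, 0)]
    hi = alphas[min(best + 1, len(alphas) - 1)]
    return np.logspace(np.log10(lo), np.log10(hi), n_points)
```

Repeating this step `n_refinements` times zooms the grid in on the cross-validation maximum.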

One challenge faced here is that the solvers can fail to converge to
a well-conditioned estimate. The corresponding values of alpha then
come out as missing values, but the optimum may lie close to these
missing values.

Computes the log-likelihood of a Gaussian data set with
self.covariance_ as an estimator of its covariance matrix.

Parameters:

X_test : array-like, shape = [n_samples, n_features]

Test data of which we compute the likelihood, where n_samples is
the number of samples and n_features is the number of features.
X_test is assumed to be drawn from the same distribution as
the data used in fit (including centering).

y : not used, present for API consistency.

Returns:

res : float

The likelihood of the data set with self.covariance_ as an
estimator of its covariance matrix.
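The quantity computed is the average Gaussian log-likelihood of the (centered) test data under the fitted covariance. A minimal numpy sketch of that computation, assuming X_test is already centered (`gaussian_log_likelihood` is a hypothetical name for illustration):

```python
import numpy as np

def gaussian_log_likelihood(X_test, covariance):
    """Average Gaussian log-likelihood of centered test data under a
    covariance estimate: -1/2 * (tr(S * Theta) - log det Theta
    + p * log(2*pi)), where S is the empirical test covariance and
    Theta the precision (inverse covariance)."""
    n_samples, n_features = X_test.shape
    emp_cov = X_test.T @ X_test / n_samples   # empirical test covariance S
    precision = np.linalg.inv(covariance)     # precision matrix Theta
    sign, logdet = np.linalg.slogdet(precision)
    return -0.5 * (np.sum(emp_cov * precision) - logdet
                   + n_features * np.log(2 * np.pi))
```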

The method works on simple estimators as well as on nested objects
(such as pipelines). The former have parameters of the form
<component>__<parameter> so that it’s possible to update each
component of a nested object.
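The `<component>__<parameter>` naming convention can be illustrated with a toy helper that splits such keys into per-component dictionaries (`split_nested_params` is a hypothetical name, not the scikit-learn implementation):

```python
def split_nested_params(params):
    """Split '<component>__<parameter>' keys into a dict of
    per-component parameter dicts, mirroring how nested estimators
    such as pipelines route set_params updates."""
    nested = {}
    for key, value in params.items():
        component, _, name = key.partition("__")
        nested.setdefault(component, {})[name] = value
    return nested
```

For example, a key like `"clf__tol"` routes the value to the `tol` parameter of the `clf` component.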