This Demonstration considers one of the simplest nonparametric-regression problems: let f be a smooth real-valued function over the interval [0, 1]; recover (or estimate) f when one only knows approximate values y_1, …, y_n for f(x_1), …, f(x_n) that satisfy the model y_i = f(x_i) + σ ε_i, where the x_i are equispaced points in [0, 1] and the ε_i are independent, standard normal random variables. Sometimes one assumes that the noise level σ is known. Such a curve-estimation problem is also called a signal-denoising problem.
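The regression model above can be sketched in a few lines of Python (the Demonstration itself is implemented in Mathematica); the design x_i = i/n, the test curve, and all parameter values below are illustrative assumptions, not the Demonstration's actual settings.

```python
import numpy as np

def make_data(f, n=128, sigma=0.2, seed=0):
    """Simulate the model y_i = f(x_i) + sigma * eps_i at equispaced
    design points x_i = i/n (one possible equispaced grid on [0, 1])."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, n + 1) / n
    y = f(x) + sigma * rng.standard_normal(n)
    return x, y

# Example: a smooth "true curve" on [0, 1].
x, y = make_data(lambda t: np.sin(2 * np.pi * t), n=128, sigma=0.2)
```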

Perhaps the simplest "solution" to this problem is given by the classical and widely used kernel-smoothing method, which is a particular case of the Loess method (locally weighted scatterplot smoothing); see the Details below.

In the kernel-smoothing method (as in any similar nonparametric method, e.g., the smoothing-spline method), one or several smoothing parameters have to be appropriately chosen to obtain a "good" estimate of the true curve f. For the kernel method, it is known that choosing a good value for the famous bandwidth parameter h (see Details) is crucial, much more so than the choice of the class of kernels, which is fixed here.

The curve estimate corresponding to a given value h of the bandwidth parameter is then denoted f̂_h. Notice that f̂_h depends on f, σ, and the ε_i's only through the data y_1, …, y_n.

Three very popular methods are available for choosing h: cross-validation (also called the "leave-one-out" principle; see PRESS statistic), generalized cross-validation (GCV), and Mallows' C_L (also sometimes denoted C_p, or UBR for unbiased risk estimate). In fact, cross-validation and GCV coincide in our context, where periodic end conditions are assumed. See [1] for a review, and see [2] for the definition and an analysis of C_L. Notice that GCV (in contrast to the C_L method) does not require that σ be known.
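For a linear smoother with hat matrix S (so the fitted values are S·y), the GCV and Mallows' C_L criteria take standard closed forms; the Python sketch below states them under that assumption. The function names are ours, and the Demonstration's own implementation may differ.

```python
import numpy as np

def gcv_score(y, S):
    """GCV = (RSS/n) / (1 - tr(S)/n)^2 for the linear smoother S."""
    n = len(y)
    resid = y - S @ y
    return float(resid @ resid / n) / (1.0 - np.trace(S) / n) ** 2

def cl_score(y, S, sigma):
    """Mallows' C_L = RSS/n + 2*sigma^2*tr(S)/n - sigma^2.
    Unlike GCV, it requires the noise level sigma."""
    n = len(y)
    resid = y - S @ y
    return float(resid @ resid / n + 2.0 * sigma**2 * np.trace(S) / n - sigma**2)
```

In practice, either score is evaluated on a grid of candidate bandwidths and the minimizer is kept.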

This Demonstration provides interactive assessments of the statistical efficiency of the GCV and C_L smoothing-parameter selectors; and this can be done for rather large sample sizes n, with reasonably fast interactivity on a current personal computer.

Here six examples for f (the "true curve") can be tried; f is plotted (in blue) in the third of the three possible views selected by the tabs: 1. The data and the curve estimate f̂_h, where you choose h with the trial-bandwidth slider, show that the choice of the bandwidth is crucial. 2. The two curve estimates given by GCV and C_L are very often so similar that they cannot be distinguished. 3. The third tab displays quantities related to the "truth": the ASE-optimal choice yields the green curve; this is the h-value that minimizes a global discrepancy between f̂_h and f, defined by

ASE(h) = (1/n) Σ_{i=1}^{n} (f̂_h(x_i) − f(x_i))²,

where ASE stands for average of the squared errors. (Note that this ASE-optimal choice is a target that seems to be unattainable in practice, since we do not know the true curve f.)
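With the same notation, the ASE criterion is just a mean of squared differences at the design points; a minimal sketch (the names are ours):

```python
import numpy as np

def ase(fhat_at_x, f_at_x):
    """Average squared error: ASE = (1/n) * sum_i (fhat(x_i) - f(x_i))^2."""
    d = np.asarray(fhat_at_x) - np.asarray(f_at_x)
    return float(np.mean(d * d))
```

The ASE-optimal bandwidth is then the minimizer over a trial grid, e.g. `min(h_grid, key=lambda h: ase(fit(y, h), f_at_x))` — computable only in a simulation, where f is known.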

In the third view, the curve estimate associated with the automatic C_L choice is again plotted (purple curve): we can then assess the typical efficiency of C_L (or of the quite close GCV choice) by comparing this curve estimate with the targeted ASE-optimal curve.

This third view also displays the two associated ASE values, in addition to the one associated with the bandwidth chosen by eye. This shows that it is difficult to choose by eye a trial bandwidth in the first view (thus without the help of the true-curve plot) that turns out to be at least as good as the GCV or C_L choice, where "better" still means a lower ASE value.

Staying in the third view and only increasing the size n of the data set, one typically observes that the C_L curve estimate and the ASE-optimal curve become very close, and the two associated ASE distances often become relatively similar. This agrees with the known asymptotic theory; see [3] and the references therein.

The rate at which the relative difference between the C_L (or GCV) bandwidth and the optimal bandwidth converges to zero can be assessed in the additional panel (top right in the third view), which shows the two kernels associated with the two bandwidths (precisely, only the right-hand part of each kernel, since they are even functions): the differences between the two bandwidths can be scrutinized with much better precision than by looking only at the two associated curves.

By varying the seed that generates the data, you can also observe that, in certain cases, there remain non-negligible differences between the C_L (or GCV) choice and the optimal bandwidth.

THINGS TO TRY

SNAPSHOTS

DETAILS

The three snapshots above are, respectively, the three views of the results obtained when analyzing a fixed data set of size 1024.

The kernel estimation method for nonparametric regression is a particular case of the Loess method mentioned in Ian McLeod's Demonstration, How Loess Works.

The case of a local constant fit with a spatially homogeneous amount of smoothing is considered here.

The kernels used here are from the class of biweight kernels, which resembles (as the bandwidth varies) the class of densities of the normal (Gaussian) distribution with mean 0 and varying standard deviation, except that a biweight kernel has compact support, the length of the support being the kernel bandwidth, also denoted h. Each candidate among the curve estimates f̂_h needs to be evaluated only at the equispaced points x_i, i = 1, …, n. The computation of such a candidate is simply and quickly provided by the built-in Mathematica function ListConvolve.
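A Python analogue of this convolution-based local-constant fit is sketched below, using numpy.convolve in place of Mathematica's ListConvolve; the wrap-around padding implements the periodic end conditions mentioned above. All names and numerical details here are our assumptions, not the Demonstration's code.

```python
import numpy as np

def biweight(u):
    """Biweight kernel (15/16)(1 - u^2)^2 on [-1, 1], zero outside."""
    return np.where(np.abs(u) <= 1.0, (15.0 / 16.0) * (1.0 - u * u) ** 2, 0.0)

def kernel_smooth(y, h):
    """Local-constant kernel smoothing of equispaced data y (spacing 1/n),
    where h is the total length of the kernel support in x-units.
    Periodic (wrap-around) end conditions, as in the text."""
    n = len(y)
    half = int(h * n / 2)                  # kernel half-width in grid steps
    if half == 0:
        return np.asarray(y, dtype=float)  # kernel narrower than the grid
    u = np.arange(-half, half + 1) / half  # support scaled to [-1, 1]
    w = biweight(u)
    w /= w.sum()                           # normalize weights to sum to 1
    ypad = np.concatenate([y[-half:], y, y[:half]])  # periodic padding
    return np.convolve(ypad, w, mode="valid")
```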

We restrict ourselves to sizes n that are powers of 2: all the invoked discretized kernel functions can then be simply and quickly obtained by first precomputing the required kernel values for n equal to its largest possible value and for bandwidths in a fixed fine grid of size 401 (in the initialization step), and then by appropriately subsampling this precomputed table.
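The precompute-then-subsample trick can be illustrated as follows; only the power-of-2 restriction and the size-401 bandwidth grid come from the text, and the grid sizes below are illustrative. Kernel values tabulated for the largest n serve any smaller power-of-2 size, whose grid is 2^k times coarser, by taking every 2^k-th entry of the table.

```python
import numpy as np

n_max = 1024     # largest allowed size (illustrative value)
half_max = 64    # kernel half-width, in grid steps, at n_max (illustrative)
u = np.arange(-half_max, half_max + 1) / half_max
w_max = (15.0 / 16.0) * (1.0 - u**2) ** 2   # biweight table at n_max

# For n = n_max / 2^k the grid spacing doubles k times, so the same
# discretized kernel is obtained by subsampling the precomputed table:
k = 2
w_sub = w_max[::2**k]   # kernel table for n = 256
```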