Subsampling for Ridge Regression via Regularized Volume Sampling

Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:716-725, 2018.

Abstract

Given $n$ vectors $x_i \in \mathbb{R}^d$, we want to fit a linear regression model for noisy labels $y_i \in \mathbb{R}$. The ridge estimator is a classical solution to this problem. However, when labels are expensive, we are forced to select only a small subset of vectors $x_i$ for which we obtain the labels $y_i$. We propose a new procedure for selecting the subset of vectors, such that the ridge estimator obtained from that subset offers strong statistical guarantees in terms of the mean squared prediction error over the entire dataset of $n$ labeled vectors. The number of labels needed is proportional to the statistical dimension of the problem, which is often much smaller than $d$. Our method is an extension of a joint subsampling procedure called volume sampling. A second major contribution is that we speed up volume sampling so that it is essentially as efficient as leverage scores, which is the main i.i.d. subsampling procedure for this task. Finally, we show theoretically and experimentally that volume sampling has a clear advantage over any i.i.d. sampling when labels are expensive.
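The setting described above can be sketched in a few lines: fit a ridge estimator from a small labeled subset and evaluate its mean squared prediction error (MSPE) over the full dataset. This is a minimal illustration only; uniform random subsampling stands in for the paper's regularized volume sampling procedure, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 500, 20, 1.0

# n vectors x_i in R^d with noisy labels y_i.
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Label only a small subset S (uniform sampling as a placeholder
# for the paper's volume sampling).
k = 50
S = rng.choice(n, size=k, replace=False)
X_S, y_S = X[S], y[S]

# Ridge estimator from the subset:
#   w_hat = (X_S^T X_S + lam * I)^{-1} X_S^T y_S
w_hat = np.linalg.solve(X_S.T @ X_S + lam * np.eye(d), X_S.T @ y_S)

# Mean squared prediction error over the entire dataset of n vectors.
mspe = np.mean((X @ w_hat - y) ** 2)
print(f"MSPE from {k} labels: {mspe:.4f}")
```

The paper's contribution is the choice of the subset $S$: regularized volume sampling selects it jointly, with guarantees tied to the statistical dimension rather than $d$.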

Related Material

@InProceedings{pmlr-v84-derezinski18a,
title = {Subsampling for Ridge Regression via Regularized Volume Sampling},
author = {Michal Derezinski and Manfred Warmuth},
booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
pages = {716--725},
year = {2018},
editor = {Amos Storkey and Fernando Perez-Cruz},
volume = {84},
series = {Proceedings of Machine Learning Research},
address = {Playa Blanca, Lanzarote, Canary Islands},
month = {09--11 Apr},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v84/derezinski18a/derezinski18a.pdf},
url = {http://proceedings.mlr.press/v84/derezinski18a.html},
abstract = {Given $n$ vectors $x_i \in \mathbb{R}^d$, we want to fit a linear regression model for noisy labels $y_i \in \mathbb{R}$. The ridge estimator is a classical solution to this problem. However, when labels are expensive, we are forced to select only a small subset of vectors $x_i$ for which we obtain the labels $y_i$. We propose a new procedure for selecting the subset of vectors, such that the ridge estimator obtained from that subset offers strong statistical guarantees in terms of the mean squared prediction error over the entire dataset of $n$ labeled vectors. The number of labels needed is proportional to the statistical dimension of the problem, which is often much smaller than $d$. Our method is an extension of a joint subsampling procedure called volume sampling. A second major contribution is that we speed up volume sampling so that it is essentially as efficient as leverage scores, which is the main i.i.d. subsampling procedure for this task. Finally, we show theoretically and experimentally that volume sampling has a clear advantage over any i.i.d. sampling when labels are expensive.}
}