Q. Wu, J. Guinney, M. Maggioni, S. Mukherjee

Duke University

July 2007, updated June 2008

This paper develops and discusses a modeling framework called
learning gradients that allows for predictive models that
simultaneously infer the geometry and statistical dependencies
of the input space relevant for prediction. The geometric relations
addressed in this paper hold for Euclidean spaces as well as in the
manifold setting. The central quantity in this framework is an
estimate of the gradient of a regression or classification function,
which is computed by a discriminative approach. We relate the
gradient to the problem of inverse regression, which in the machine
learning community is typically addressed by generative models.
A result of this relation is a simple and precise comparison of
a variety of simultaneous regression and dimensionality reduction
methods from the statistics literature. The gradient estimate is
applied to a variety of problems central to machine learning:
variable selection, linear and nonlinear dimension reduction,
and the inference of a graphical model of the dependencies among
the input variables that are relevant to prediction.