Ravi Ramamoorthi is the Ronald L. Graham Professor of Computer Science
at the University of California, San Diego, and founding Director of the
UC San Diego Center for Visual Computing. Prof. Ramamoorthi is an
author of more than 100 refereed publications in computer graphics and
computer vision, including more than 50 at ACM SIGGRAPH/TOG, and has
played a key role in building multi-faculty research groups that have
been recognized as leaders in computer graphics and computer vision at
Columbia, Berkeley and UCSD. His research has been recognized with a
half-dozen early career awards, including the ACM SIGGRAPH Significant
New Researcher Award in computer graphics in 2007, and the Presidential
Early Career Award for Scientists and Engineers (PECASE) for his work in
physics-based computer vision in 2008. Prof. Ramamoorthi's work has had
substantial impact in industry, with techniques like spherical harmonic
lighting being adopted in industry-standard RenderMan software, and
widely used in interactive applications and movie productions. He has
graduated more than 20 postdoctoral, Ph.D. and M.S. students, many of
whom have taken positions at leading universities or research labs.
He has also taught the first open online course in computer graphics as
one of the first nine classes on the edX platform, with more than
100,000 registrations to date and a Chinese translation available via
XuetangX. He was named a finalist for the inaugural edX Prize for
exceptional contributions in online teaching and learning.

Many problems in computer graphics and computer vision involve
high-dimensional 3D-8D visual datasets. Real-time image synthesis with
changing lighting and view is often accomplished by pre-computing the
6D light transport function (2 dimensions each for spatial position,
incident lighting and viewing direction). Realistic image synthesis also
often involves acquisition of appearance data from real-world objects;
a BRDF (Bidirectional Reflectance Distribution Function), which measures
the scattering of light at a single surface location, is 4D, while
spatial variation and subsurface scattering involve 6D-8D functions. In
computer vision, problems like lighting-insensitive face recognition
similarly involve understanding the space of appearance variation
across lighting and view. Since hundreds of samples may be required in
each dimension, and the total size grows exponentially with the
dimensionality, brute-force acquisition or precomputation is often
infeasible.
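To make the exponential cost concrete, here is a minimal sketch (not from the talk; the function name is illustrative) of how a dense sampling grid scales with dimensionality:

```python
# Brute-force sampling on a dense grid: with n samples per dimension,
# a d-dimensional function requires n**d samples in total.

def brute_force_samples(n_per_dim: int, dims: int) -> int:
    """Total samples needed for a dense d-dimensional grid."""
    return n_per_dim ** dims

# With 100 samples per dimension:
#   a 4D BRDF needs 100**4 = 10**8 samples,
#   a 6D light transport function needs 100**6 = 10**12,
#   an 8D spatially-varying function needs 100**8 = 10**16.
for d in (4, 6, 8):
    print(f"{d}D: {brute_force_samples(100, d):.0e} samples")
```

Even at a modest 100 samples per dimension, the 6D case already exceeds a trillion samples, which is why brute force is ruled out.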

In this talk, we describe a signal-processing approach that exploits
the coherence, sparsity and inherent low-dimensionality of visual
data to derive novel, efficient sampling and reconstruction algorithms.
We describe a variety of new computational methods and applications,
from affine wavelet transforms for real-time rendering with area
lights, to space-time and space-angle frequency analysis for motion
blur and global illumination, to compressive light transport
acquisition. We also discuss many new results in BRDF acquisition,
animation and appearance modeling. The results point toward a unified
sampling theory applicable to many areas of signal processing, computer
graphics and computer vision.
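As a toy illustration of the idea that low-dimensional structure permits sparse sampling (a hedged sketch, not any specific method from the talk), consider the extreme case of a rank-1 "transport matrix": the entire matrix can be reconstructed exactly from a single measured row and column, rather than measuring every entry.

```python
# Hypothetical sketch: a rank-1 matrix T satisfies T[i][j] = u[i] * v[j],
# so measuring its first row and first column determines every entry:
#   T[i][j] = col0[i] * row0[j] / T[0][0]   (assuming T[0][0] != 0).

def reconstruct_rank1(row0, col0):
    """Rebuild a rank-1 matrix from its first row and first column."""
    pivot = row0[0]  # equals col0[0] == T[0][0]
    return [[ci * rj / pivot for rj in row0] for ci in col0]

# Ground-truth rank-1 matrix built from an outer product u v^T.
u = [1.0, 2.0, 4.0]
v = [3.0, 5.0, 7.0]
T = [[ui * vj for vj in v] for ui in u]

# "Measure" only 5 entries (first row + first column) instead of all 9...
row0 = T[0]
col0 = [row[0] for row in T]
T_hat = reconstruct_rank1(row0, col0)
# ...and recover the full matrix exactly.
assert T_hat == T
```

Real transport matrices are only approximately low rank, so practical algorithms combine adaptive sampling with robust low-rank or sparse reconstruction, but the sample-count savings follow the same principle.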