I am a Cheung Kong Professor in the Computer Science Department of Zhejiang
University, the Director of the State Key Lab of CAD&CG, and the head of the
Graphics and Parallel Systems Lab. I received my BS and PhD degrees in
computer science, both from Zhejiang University. After graduation I spent
six years at Microsoft Research Asia, where I was a lead researcher in the
graphics group before moving back to Zhejiang University. I was named one
of the world's top 35 young innovators by MIT Technology Review in 2011,
and was elected an IEEE Fellow in 2015.

Research

My research interests are in computer graphics, computer vision, parallel computing, and human-computer interaction. I have conducted a wide range of research on
shape modeling/editing, texture mapping/synthesis, real-time rendering, GPU parallel computing, and
more recently real-time face tracking and digital avatars. These lines of research have led
to 80+ publications (including 40+ SIGGRAPH/TOG papers) and 30+ granted US patents over the past few years. Some of the technologies have been
integrated into the D3DX library of Microsoft DirectX, licensed to Weta
Digital, and adopted by Bungie Studios (I received credit for my light map
generation and compression work on Halo 3).

Real-Time Face Tracking & Digital Avatar

FaceWarehouse: a
database of 3D facial expressions for visual computing applications. It contains 150 subjects, aged 7 to 80 and from various ethnic backgrounds, each captured in 47 different facial expressions. It is free for research purposes.
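
A dataset organized as identities times expressions lends itself to bilinear face models. A minimal sketch with made-up dimensions (the core tensor here is random for illustration; in practice it would come from decomposing the registered meshes, e.g. via higher-order SVD):

```python
import numpy as np

# Illustrative bilinear-model sketch over an identity x expression dataset.
# All dimensions are hypothetical: 50 identity modes, 25 expression modes,
# and a 100-vertex mesh.
n_verts = 100
core = np.random.default_rng(2).random((50, 25, n_verts * 3))

def synthesize_face(core, w_id, w_exp):
    """Contract the core tensor with identity and expression weights."""
    return np.einsum('i,e,iev->v', w_id, w_exp, core).reshape(-1, 3)

w_id = np.zeros(50)
w_id[0] = 1.0    # pick one identity mode
w_exp = np.zeros(25)
w_exp[3] = 1.0   # pick one expression mode
face = synthesize_face(core, w_id, w_exp)
```

With one-hot weights the contraction simply selects one mode pair; mixing fractional weights interpolates between identities and expressions.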

Generic face tracking: a calibration-free approach to real-time facial tracking
and animation with a single video camera. It learns a generic
regressor from public image datasets, which can be applied to any
user and arbitrary video cameras to infer accurate 2D facial landmarks
as well as the 3D facial shape from 2D video frames.
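
The regression-based tracking idea can be caricatured as a cascade of learned update steps; a hypothetical, heavily simplified sketch (the real method learns its regressors from large annotated image sets and uses richer features):

```python
import numpy as np

def sample_features(image, landmarks):
    """Sample pixel intensities at the current landmark positions."""
    h, w = image.shape
    xs = np.clip(landmarks[:, 0].astype(int), 0, w - 1)
    ys = np.clip(landmarks[:, 1].astype(int), 0, h - 1)
    return image[ys, xs]

def run_cascade(image, initial_landmarks, stages):
    """Apply a cascade of linear regressors: s <- s + (W @ f(I, s) + b)."""
    s = initial_landmarks.copy()
    for W, b in stages:
        f = sample_features(image, s)
        s = s + (W @ f + b).reshape(-1, 2)
    return s

# Toy usage: 5 landmarks, 3 zero-weight stages on a random image
# (zero weights leave the initial estimate unchanged).
rng = np.random.default_rng(0)
image = rng.random((64, 64))
init = rng.uniform(10, 50, size=(5, 2))
stages = [(np.zeros((10, 5)), np.zeros(10)) for _ in range(3)]
result = run_cascade(image, init, stages)
```

Each stage refines the landmark estimate using features indexed by the previous estimate, which is what makes the cascade calibration-free at test time.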

Real-time hair simulation: a data-driven approach that learns a reduced model to
optimally represent hair motion characteristics with a small number
of guide hairs. It enables real-time simulation of a full head of hair with
over 150K strands.
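
The reduced-model idea can be sketched as: simulate only the guide strands, then reconstruct the full head by blending guide motions with fixed per-strand weights. A hypothetical simplification (the actual reduced model and its weight learning are more involved):

```python
import numpy as np

def reconstruct_strands(guide_positions, weights):
    """Blend simulated guide strands into full strands.

    guide_positions: (G, V, 3) simulated guide strand vertices
    weights:         (N, G)    per-strand blend weights (rows sum to 1)
    returns:         (N, V, 3) reconstructed full strands
    """
    return np.einsum('ng,gvc->nvc', weights, guide_positions)

# Toy usage: 4 guides with 8 vertices each, expanded to 1000 strands.
rng = np.random.default_rng(1)
guides = rng.random((4, 8, 3))
w = rng.random((1000, 4))
w /= w.sum(axis=1, keepdims=True)   # normalize to convex weights
full = reconstruct_strands(guides, w)
```

Only the guides need dynamics; the reconstruction is a dense but trivially parallel blend, which is why a full head of strands stays real-time.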

Single-view hair modeling: a semi-automatic hair modeling technique that generates a plausible high-resolution strand-based 3D hair model from a single image. It enables a number of interesting applications that were previously challenging, including hairstyle transfer, portrait pop-ups, image-space hair editing, and video hair editing.

GPU Parallel Computing

SPAP (Same Program for All Processors): a
new programming language for heterogeneous many-core systems. It
allows the same program to work efficiently on all
processors of a heterogeneous system and fully utilize the
heterogeneous processing power by automatically
distributing computations among different processors. The
language currently supports x86 CPUs and CUDA GPUs.
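
The load-balancing idea behind such automatic distribution can be illustrated with a toy scheduler that splits one data-parallel task across devices in proportion to their measured throughput (a hypothetical illustration only; SPAP does this automatically at the language level):

```python
def split_work(n_items, throughputs):
    """Return per-device item counts proportional to relative throughput.

    n_items:     total number of data-parallel work items
    throughputs: measured relative throughput per device
    """
    total = sum(throughputs)
    counts = [int(n_items * t / total) for t in throughputs]
    counts[-1] += n_items - sum(counts)  # give rounding remainder to last device
    return counts

# e.g. a CPU at 1.0 and a GPU at 3.0 relative throughput:
split_work(1000, [1.0, 3.0])  # -> [250, 750]
```

In a real system the throughputs would be profiled at runtime and the split re-tuned as the workload changes.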

RenderAnts: a Reyes rendering
system that runs entirely on GPUs. The system takes RenderMan
scenes and shaders as input, generates photorealistic images,
and is over one order of magnitude faster than existing
CPU-based renderers. RenderAnts source code is free to the
research community (source
code & test scenes). A micropolygon ray tracing algorithm
was recently added to the system to efficiently render defocus, motion blur,
and secondary ray effects.
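
The dicing stage at the heart of a Reyes pipeline can be caricatured in a few lines: tessellate each parametric patch into a grid of micropolygons roughly a pixel in size. A grossly simplified sketch (the real pipeline also bounds, splits, shades, and samples, all mapped to GPU kernels in RenderAnts):

```python
import numpy as np

def dice(patch_fn, grid_res):
    """Evaluate a parametric surface on a (grid_res+1)^2 vertex grid.

    patch_fn: maps parameter arrays (u, v) in [0,1]^2 to 3D positions
    returns:  (grid_res+1, grid_res+1, 3) micropolygon grid vertices
    """
    u = np.linspace(0.0, 1.0, grid_res + 1)
    uu, vv = np.meshgrid(u, u, indexing='ij')
    return patch_fn(uu, vv)

# Toy patch: the surface z = u * v, diced into a 16x16 micropolygon grid.
plane = dice(lambda u, v: np.stack([u, v, u * v], axis=-1), 16)
```

In practice grid_res is chosen per patch from its screen-space bound so each micropolygon projects to about one pixel, which is what makes displacement and per-vertex shading look smooth.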

GPGPU debugger: a new framework for debugging
GPU stream programs, which is based on GPU Interrupt, a new
mechanism that allows calling CPU functions from GPU code. The
debugging functions are exposed in
BSGP.