Abstract

We introduce an image-based method for modeling a specific subject's hair. The principle of the approach is to study the variations in hair appearance under controlled, moving illumination. The use of a stationary viewpoint and the assumption that the subject remains still allow us to work with perfectly registered images: all pixels in an image sequence represent the same portion of the hair, and the particular illumination profile observed at each pixel can be used to infer the missing directional information. This is accomplished by synthesizing reflection profiles with a hair reflectance model for a number of candidate directions at each pixel, and choosing the orientation whose synthetic profile best matches the observed one. Our results demonstrate the potential of this approach: accurate hair strand orientations are reconstructed wherever the strands are clearly highlighted by a given light-source motion.
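The per-pixel orientation search described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a Kajiya-Kay-style reflectance model, known light directions for each frame, and a fixed viewing direction, and it selects the candidate tangent whose synthetic intensity profile best matches the observed profile after normalization (to factor out unknown albedo and gain). The function names, the diffuse/specular mix, and the shininess value are all illustrative choices.

```python
import numpy as np

def hair_reflectance_profile(tangent, light_dirs, view_dir, shininess=20.0):
    """Synthetic per-frame intensities for one hair tangent.

    Kajiya-Kay-style model (an assumption, not necessarily the paper's
    exact model): intensity depends only on the angles between the hair
    tangent and the light/view directions.
    """
    t = tangent / np.linalg.norm(tangent)
    cos_tl = light_dirs @ t                     # one value per frame/light
    cos_tv = view_dir @ t
    sin_tl = np.sqrt(np.clip(1.0 - cos_tl**2, 0.0, 1.0))
    sin_tv = np.sqrt(np.clip(1.0 - cos_tv**2, 0.0, 1.0))
    # Specular lobe peaks when light and view make mirror angles with the fiber.
    spec = np.clip(cos_tl * cos_tv + sin_tl * sin_tv, 0.0, 1.0) ** shininess
    return 0.3 * sin_tl + 0.7 * spec            # assumed diffuse/specular mix

def best_orientation(observed, light_dirs, view_dir, candidates):
    """Pick the candidate tangent whose profile best matches `observed`.

    Profiles are L2-normalized before comparison so that the match is
    insensitive to the pixel's unknown reflectance scale.
    """
    o = observed / (np.linalg.norm(observed) + 1e-9)
    best, best_err = None, np.inf
    for t in candidates:
        prof = hair_reflectance_profile(t, light_dirs, view_dir)
        p = prof / (np.linalg.norm(prof) + 1e-9)
        err = np.sum((p - o) ** 2)
        if err < best_err:
            best, best_err = t, err
    return best, best_err

# Usage sketch: recover a known tangent from its own synthetic profile.
rng = np.random.default_rng(0)
light_dirs = rng.normal(size=(50, 3))
light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)
view_dir = np.array([0.0, 0.0, 1.0])

true_t = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
observed = hair_reflectance_profile(true_t, light_dirs, view_dir)

candidates = rng.normal(size=(200, 3))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
candidates = np.vstack([candidates, true_t])

est, err = best_orientation(observed, light_dirs, view_dir, candidates)
```

Note that a fiber model of this kind cannot distinguish a tangent from its opposite (`t` and `-t` produce the same profile), so in practice the orientation is recovered only up to sign; brute-force search over sampled candidates, as here, is the simplest matching strategy.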