This thesis addresses the problem of capturing and reproducing surface material appearance from real-world examples for use in computer graphics applications. Detailed variation of color, shininess, and small-scale shape is a critically important factor in the visual plausibility of objects in synthetic images. Capturing these properties relies on measuring reflected light under various viewing and illumination conditions. Existing methods typically employ either complex mechanical devices or heuristics that sacrifice fidelity for simplicity. Consequently, computer graphics practitioners often resort to manual authoring tools instead.
The thesis introduces three methods for capturing visually rich surface appearance descriptors using simple hardware setups and relatively little measurement data. The specific focus is on capturing detailed spatial variation of the reflectance properties, as opposed to angular variation, which is the primary focus of most previous work. We apply tools from modern data science, in particular principled optimization-based approaches, to disentangle and explain the various reflectance effects in the scarce measurement data.
The first method uses a flat panel monitor as a programmable light source, and an SLR camera to observe reflections off the captured surface. The monitor is used to emit Fourier basis function patterns, which are well suited for isolating the reflectance properties of interest, and also exhibit a rich set of mathematical properties that enable computationally efficient interpretation of the data. The other two methods rely on the observation that the spatial variation of many real-world materials is stationary, in the sense that it consists of small elements repeating across the surface. By taking advantage of this redundancy, the methods demonstrate high-quality appearance capture from two photographs, and only a single photograph, respectively. The photographs are acquired using a mobile phone camera.
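To illustrate the kind of illumination patterns the first method relies on, the sketch below generates a two-dimensional Fourier basis (cosine) pattern suitable for display on a monitor. This is a minimal illustration of the general idea, not the thesis implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def fourier_pattern(width, height, fx, fy, phase=0.0):
    """Hypothetical sketch: a 2D cosine pattern with spatial frequency
    (fx, fy) in cycles per image, mapped to displayable intensities
    in [0, 1] for emission by a flat panel monitor."""
    x = np.arange(width) / width
    y = np.arange(height) / height
    X, Y = np.meshgrid(x, y)
    # Cosine basis function; displaying phase-shifted copies of the same
    # frequency provides the quadrature components needed to recover the
    # amplitude and phase of the surface's response to that frequency.
    return 0.5 + 0.5 * np.cos(2 * np.pi * (fx * X + fy * Y) + phase)

# Example: a horizontal pattern with four cycles across the image.
pattern = fourier_pattern(640, 480, fx=4, fy=0)
```

Displaying a set of such patterns at varying frequencies and phases, and photographing the reflection of each, is one way the monitor can probe the surface's reflectance response frequency by frequency.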
The resulting reflectance descriptors faithfully reproduce the appearance of the surface under novel viewing and illumination conditions. We demonstrate state-of-the-art results among approaches with similar hardware complexity. The descriptors captured by the methods are directly usable in computer graphics applications, including games, film, and virtual and augmented reality.