February 12, 2015

1University of California at Berkeley 2Universidad de Zaragoza 3Vanderbilt University

The Visual Computer

Abstract

We present a new depth-from-defocus method based on the assumption that a per-pixel blur estimate (related to the circle of confusion), while ambiguous for a single image, behaves in a consistent way when computed over a focal stack of two or more images. This allows us to fit a simple analytical description of the circle of confusion to the different per-pixel measures and obtain approximate depth values up to a scale. Our results are comparable to previous work while offering a faster, more flexible pipeline.
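As an illustrative sketch of the fitting step: under a thin-lens model, the circle of confusion for a pixel at true depth d, observed in an image focused at distance s_i, is roughly proportional to |1/d − 1/s_i|. If all focus settings lie on one side of the true depth, the absolute value can be dropped and the per-pixel blur measures fall on a line in 1/s_i, so depth (up to scale) follows from a closed-form per-pixel least-squares line fit. The model and function name below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def depth_from_focal_stack(blur, focus_dists):
    """Fit blur_i = alpha + beta * (1/s_i) per pixel and recover depth.

    blur:        (N, H, W) per-pixel blur estimates, one slice per stack image
    focus_dists: (N,) focus distances s_i of the stack images
    Model (assumed): blur_i = k/d - k/s_i, so alpha = k/d, beta = -k,
    and the per-pixel depth is d = -beta / alpha.
    """
    x = 1.0 / np.asarray(focus_dists)            # (N,) regressor: 1/s_i
    y = np.asarray(blur)                          # (N, H, W) responses
    x_mean = x.mean()
    y_mean = y.mean(axis=0)                       # (H, W)
    # Closed-form simple linear regression, vectorized over all pixels.
    beta = ((x - x_mean)[:, None, None] * (y - y_mean)).sum(axis=0) \
           / ((x - x_mean) ** 2).sum()            # (H, W) slopes
    alpha = y_mean - beta * x_mean                # (H, W) intercepts
    return -beta / alpha                          # (H, W) depth, up to scale
```

With noise-free synthetic blur generated from the same model, the fit recovers the true depth exactly; with real blur estimates it yields the approximate, scale-ambiguous depths the abstract describes.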

Recently, we have seen a growing trend in the design and fabrication of personalized figurines, created by scanning real people and then physically reproducing miniature statues with 3D printers. This is currently a hot topic in both academia and industry, and the printed figurines are gaining more and more realism, especially as state-of-the-art facial scanning technology improves. However, current systems all share the same limitation: no previous method is able to suitably capture personalized hair-styles for physical reproduction. Typically, the subject’s hair is approximated very coarsely or replaced completely with a template model.

In this paper we present the first method for stylized hair capture, a technique to reconstruct an individual’s actual hair-style in a manner suitable for physical reproduction. Inspired by centuries-old artistic sculptures, our method generates hair as a closed-manifold surface, yet contains the structural and color elements stylized in a way that captures the defining characteristics of the hair-style. The key to our approach is a novel multi-view stylization algorithm, which extends feature-preserving color filtering from 2D images to irregular manifolds in 3D, and introduces abstract geometric details that are coherent with the color stylization. The proposed technique fits naturally in traditional pipelines for figurine reproduction, and we demonstrate the robustness and versatility of our approach by capturing several subjects with widely varying hair-styles.
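The core filtering idea, extending feature-preserving color filtering from 2D images to a 3D manifold, can be sketched as a bilateral-style filter applied to vertex colors over a mesh's vertex neighborhoods instead of a pixel grid: weights fall off both with spatial distance and with color difference, so hair strand boundaries are preserved while interior color is flattened. The function below is a minimal illustration under that assumption; the paper's multi-view stylization operator is more involved:

```python
import numpy as np

def bilateral_filter_mesh_colors(verts, colors, neighbors,
                                 sigma_s=0.1, sigma_r=0.1, iters=1):
    """Feature-preserving color filtering on a mesh (illustrative sketch).

    verts:     (V, 3) vertex positions
    colors:    (V, 3) per-vertex RGB colors
    neighbors: list of V index arrays (neighboring vertices of each vertex)
    sigma_s / sigma_r: spatial / range (color) kernel widths
    """
    out = colors.astype(float).copy()
    for _ in range(iters):
        new = np.empty_like(out)
        for i, nbrs in enumerate(neighbors):
            idx = np.append(nbrs, i)                        # include self
            d2 = np.sum((verts[idx] - verts[i]) ** 2, axis=1)   # spatial dist^2
            r2 = np.sum((out[idx] - out[i]) ** 2, axis=1)       # color dist^2
            w = np.exp(-d2 / (2 * sigma_s ** 2) - r2 / (2 * sigma_r ** 2))
            new[i] = (w[:, None] * out[idx]).sum(axis=0) / w.sum()
        out = new
    return out
```

Because the range term suppresses averaging across large color differences, repeated iterations flatten colors within a region while keeping sharp transitions, which is the "stylized yet feature-preserving" behavior the abstract refers to.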

We present a novel computational framework for physically and chemically based simulations of analog alternative photographic processes. In the real world, these processes allow the creation of very personal and unique depictions due to the combination of the chemicals used, the physical interaction with liquid solutions, and the individual craftsmanship of the artist. Our work focuses not only on achieving similarly compelling results, but also on the manual process itself, introducing a novel exploratory approach for interactive digital image creation and manipulation. With such an emphasis on user interaction, our simulations are devised to run on tablet devices; thus we propose the combination of a lightweight data-driven model to simulate the chemical reactions involved with efficient fluid simulations that modulate them. This combination allows realistic gesture-based user interaction with constant visual feedback in real time. Using the proposed framework, we have built two prototypes with different tradeoffs between realism and flexibility, showing its potential to build novel image editing tools.
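To make that split concrete, the sketch below pairs a measured lookup table (standing in for the data-driven chemical model) with a simple explicit diffusion step (standing in for the fluid simulation that modulates it). The function name, the LUT-based tone curve, and the five-point Laplacian scheme are all assumptions for illustration, not the framework's actual components:

```python
import numpy as np

def develop(exposure, concentration, lut_x, lut_y,
            dt=0.1, diffusion=0.2, steps=10):
    """Sketch: diffuse a chemical concentration field, then map exposure
    through a measured tone curve (LUT), modulated by local concentration.

    exposure:      (H, W) incoming light, in [0, 1]
    concentration: (H, W) developer concentration field
    lut_x, lut_y:  sample points of a measured exposure-to-density curve
    """
    c = concentration.astype(float).copy()
    for _ in range(steps):
        # Explicit 5-point Laplacian diffusion; edge padding reflects borders.
        p = np.pad(c, 1, mode="edge")
        lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * c
        c = c + dt * diffusion * lap
    # Data-driven response: interpolate the measured curve at the
    # concentration-modulated exposure.
    return np.interp(exposure * c, lut_x, lut_y)
```

The per-pixel work is a padded stencil plus a 1D interpolation, both cheap enough to suggest why such a split can sustain real-time feedback on tablet-class hardware.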