Microsoft Research Helps You Digitally Alter Your Hair In Photos

MSR is continuing its research into creating the perfect photograph, this time focusing on having the perfect hair. Lvdi Wang has been working for the past year and a half on improving the appearance of hair in digital images. Wang and his colleagues will demonstrate their latest progress in a paper called Dynamic Hair Manipulation in Images and Videos, which Wang co-authored with Zhejiang University’s Menglei Chai, Yanlin Weng, Xiaogang Jin, and Kun Zhou. The paper outlines a new, single-view hair-modeling technique for generating visually and physically plausible 3-D hair models with only modest user interaction. The work creates hair models that visually match an original input image.

“We proposed a new method for creating a 3-D hair model from just a single photograph or short video,” Wang explains. “Such a model contains tens of thousands of individual hair strands and allows the user to manipulate hair in images or videos in a structure-preserving and semantically meaningful way. To get the correct hair-editing results,” Wang says, “we must make sure the 3-D hair strands are indeed grown from the scalp of a 3-D hair model, so that when the user moves the head or combs the hair, the hair roots are always fixed on the scalp. This is the key to making ‘dynamic’ hair manipulation—changing the shapes of individual strands—possible. It also is one of the main technical challenges we have tackled. We are excited about the potential of our techniques to directly benefit a wide range of users,” Wang concludes. “This is due to the fact that, compared with traditional multiple-image-based solutions, our method dramatically reduces the requirements for the capture device, whether it is a hardware setup in a lab or a built-in camera in a user’s smartphone.”

The user provides a few strokes atop the original portrait, and the technology delivers a high-quality model possessing both visual fidelity and physical plausibility, enabling effects such as alternative combing strategies or motion-preserving hair replacement in video. Alternatively, a couple of deft strokes on the original results in a virtual haircut. Wang and colleagues also have extended their model to handle simple video input and generate dynamic 3-D hair models, enabling users to manipulate hair in a video or to transfer styles from images to videos. In the real world, even a slight alteration to a person’s hair can expose new strands of hair while blocking others from sight. A new image of the same person thus would not correspond, pixel for pixel, to the original. In the latest paper, the researchers apply the principle of “physical plausibility”: hair roots remain fixed to the scalp of the person in an image, each strand stays smooth rather than exhibiting sharp bends, and the length and continuity of real strands of hair are preserved to the extent possible.
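To make those three plausibility constraints concrete, here is a minimal illustrative sketch — not the paper’s actual algorithm — of how a hair strand, modeled as a 3-D polyline, might be iteratively nudged to keep its root pinned to the scalp, stay smooth, and preserve its segment lengths. The function name, parameters, and numeric choices are assumptions for illustration only.

```python
import numpy as np

def enforce_plausibility(strand, root, iterations=10):
    """Illustrative sketch (not the paper's method): push an (n, 3)
    polyline toward three plausibility constraints:
      1. the root vertex stays fixed on the scalp,
      2. the strand is smooth (no sharp bends),
      3. each segment keeps its original length."""
    s = np.asarray(strand, dtype=float).copy()
    # Record the original segment lengths so constraint 3 can restore them.
    lengths = np.linalg.norm(np.diff(s, axis=0), axis=1)
    for _ in range(iterations):
        # 1. Pin the root vertex to its scalp position.
        s[0] = root
        # 2. Laplacian smoothing: blend each interior vertex with its neighbors.
        s[1:-1] = 0.5 * s[1:-1] + 0.25 * (s[:-2] + s[2:])
        # 3. Re-project each segment to its original length, root outward.
        for i in range(1, len(s)):
            d = s[i] - s[i - 1]
            n = np.linalg.norm(d)
            if n > 1e-12:
                s[i] = s[i - 1] + d * (lengths[i - 1] / n)
    return s
```

After a few iterations, sharp bends are relaxed while the root position and every segment length of the strand are unchanged, which is the gist of the “physical plausibility” idea described above.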

Imagine an app in which you select a photo where your hair is less than perfect and draw a few “strands” around the problem areas. Using those strokes as guides, the technology fills them out into realistic-looking hair.