Robotics professor Masahiro Mori's "uncanny valley" hypothesis, formulated in 1970, holds that the more humanlike characteristics a robot is given, the more likable it becomes to a human being. Think Pixar's (NYSE:DIS) WALL-E versus a power drill.

Once a robot possesses a strong resemblance to humans, however, but with some feature that isn't quite right, its likability plummets. The robot's appearance becomes discomforting and even revolting to a human observer. After this point, as the traits of the robot -- or digitized likeness of a person -- become more accurately human, likability begins to increase again, resulting in an upward climb out of the uncanny valley. On the far side of the valley is photorealism, and at the far peak, an actual human.

(Zombies, with their strange gait and garbled grunts, are often considered to reside at the very bottom of the uncanny valley, which explains their effective role in horror movies.)

These days, the video game and animation industries are straddling both sides of the valley. Many video game figures are stylized and therefore fall right before the valley's transition from cartoonish and enjoyable to overly familiar and disconcerting. Nintendo Co., Ltd.'s (OTCMKTS:NTDOY) Mario is a perfect example.

On the other hand, Nathan Drake, the protagonist of Naughty Dog's Uncharted series for Sony Corporation's (NYSE:SNE) consoles, has lifelike facial features and motion, backed by full voice acting. In the latest iteration of the series, Uncharted 3: Drake's Deception, which was released in North America in 2011, Drake looks and acts quite human. That's exactly why every in-game occurrence that doesn't seem quite right -- a perfectly consistent stride, a slightly unnatural reaction -- stands out.

Take a look at the trailer:

Crossing the valley is not easy. New technologies, however, continue to edge closer to animating a near-flawless human figure, and digital characters have already been convincingly placed alongside real-life actors in movies like The Curious Case of Benjamin Button.

It's easy to see the incentive for the video game and movie industries to create more convincing human figures -- higher-quality, more realistic graphics tend to translate into more paying customers. Outside of the entertainment industry, there are also reasons to pursue the same metaphorical journey. Crossing the valley could open the door to a new, almost stereotypically futuristic realm of technology -- one where a living human is perfectly replicated.

Right now, NVIDIA Corporation (NASDAQ:NVDA) is tech's leader in this space, with its Titan Graphics Processing Unit (GPU) handling 4.5 teraflops, or 4.5 trillion floating-point operations per second. To make a little sense of that measurement, an entire PS4 from Sony can only handle 1.84 teraflops. Microsoft Corporation's (NASDAQ:MSFT) Xbox One isn't expected to approach Titan's capabilities, either.
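To put those figures side by side, here's a quick back-of-the-envelope sketch using only the numbers cited above (the variable names are ours, chosen for illustration):

```python
# Peak-throughput comparison using the figures cited in the article.
TERA = 1e12  # one teraflop = one trillion floating-point operations per second

titan_tflops = 4.5   # NVIDIA Titan GPU
ps4_tflops = 1.84    # Sony PS4, entire console

titan_flops = titan_tflops * TERA          # 4.5e12 operations per second
ratio = titan_tflops / ps4_tflops          # roughly 2.4x the PS4's peak

print(f"Titan: {titan_flops:.2e} FLOPS")
print(f"Titan delivers about {ratio:.1f}x the PS4's peak throughput")
```

In other words, a single Titan card offers roughly two and a half times the raw peak throughput of the whole PS4.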

For digital characters to look convincing, even the most minute details, from facial wrinkling to light reflecting off of skin, must be taken into account. To achieve this, Paul Debevec, a research professor at the University of Southern California's Institute for Creative Technologies, created a sphere that can direct light from almost all angles at a seated actor or actress and record their facial features, allowing him to capture the tiny intricacies of human movement. In 2008, Debevec and Image Metrics, a high-quality animation company, presented The Digital Emily Project:

Even with a result as convincing as Emily, Debevec knew that additional processing power could support an even more realistic model. He and NVIDIA recently partnered to demonstrate just how human an animated figure can become with the right GPU behind it, in a project they call Face Works.

Combining NVIDIA's Titan and Debevec's light stage, the partners were able to create Digital Ira, a seemingly human head that makes faces and converses almost seamlessly, which they presented at the 2013 Game Developers Conference in March.

Ira is impressive, but the day when video game characters appear entirely human is clearly not yet upon us. Still, the sheer range of possibilities that something like NVIDIA's Face Works opens up is worth accounting for.

Robotic figures of famous politicians and celebrities could be modeled by Debevec and powered by NVIDIA, allowing students to hear history from a computer-generated model of the source, rather than a textbook or teacher. Video conferencing could be revolutionized to include a digital image of a speaker, one that replicates the individual's facial expressions and gestures. Executives could present "in person," making eye contact and connecting in a way that a traditional chat window and camera don't allow.

While all of these innovations could be available in just a few years, it's important to remember that none of these graphical feats can occur without the processors behind them.

NVIDIA may lead the pack at 4.5 teraflops, but with Intel Corporation (NASDAQ:INTC), Qualcomm, Inc. (NASDAQ:QCOM) and other companies manufacturing GPUs, the race across the uncanny valley will be a competitive one, with the winner reaping big rewards.