Body Labs wants to create your virtual doppelganger, and bring it to life

“We can recreate any specific human being in a digital format, as a 3D avatar, and we can move them through a full range of motions the way humans move in real life.”

Imagine being able to scan your own body and play as a realistic avatar.

Bill O’Farrell is the co-founder and CEO of Body Labs, and he wants to transform regular people into digital avatars. The New York startup just raised $2.2 million in seed funding as it looks to expand its development team.

“We’ve got the technology to enable the human body to be transformed to a digital platform for designing, manufacturing, buying, selling, and recommending goods and services.”

By using a person’s specific shape and understanding how they move, O’Farrell thinks the business world can design and manufacture different services that are tailored to each person. The potential applications are seemingly endless, from car-seat design to apparel, medicine, video games and virtual reality.

Mapping the human body

“Our big vision is to fundamentally change the interface between businesses and consumers,” Bill enthuses. “To have the body as a digital platform is a conversation that hasn’t been had before, because it really hasn’t been possible until now.”

O’Farrell is a serial entrepreneur. In the nineties he led the company behind After Effects, the motion graphics and video compositing software acquired by Adobe in 1994. He later founded SpeechWorks, a provider of telephony-based speech recognition systems acquired by Nuance in 2003.

The Body Labs story started in 2010 with O’Farrell and another co-founder, Professor Michael Black of Brown University. But it wasn’t until a couple of years later, after Black took up a post as Director of the Max Planck Institute for Intelligent Systems in Germany, that the technology had matured enough for them to press forward. They founded Body Labs in March 2013.

“The aim is to commercialize a very sophisticated statistical understanding of the human body,” explains Bill. “The shape, the motion, the poses, the size, based on thousands and thousands of scans of different shapes of people — all that data has been used to train a computer vision model of the human body.”

Body Labs claims to have the world’s largest database of body shapes and poses, covering almost the full spectrum of possible human shapes. It was compiled from a mixture of publicly available databases and extensive scanning and measuring conducted at research institutions.

Scanning yourself

“We want people to scan themselves. We want them to create models. We want everyone to have their digital avatar as part of their online ID.”

You can actually try this out for yourself right now if you have a Microsoft Kinect and download the free Body Snap application (there are a few other requirements you’ll find explained at the website). The application takes four shots of your static body and two shots of your face, which are then uploaded to the Body Hub. Your avatar takes 10 or 15 minutes to render, depending on the quality of the scan, but the 3D model that comes out is an accurate virtual representation of your body that you can download and use.

Currently you can use the model in Mixamo’s Fuse app where you can change the clothing, tweak the hairstyle, and even animate it.

Although the Microsoft Kinect is used for scanning people, Body Labs is agnostic about the technology. Its system works with any reasonable-quality scanner; the Kinect is just a convenient and relatively cheap example.

With Google’s Project Tango, Apple’s PrimeSense acquisition, and Amazon’s Fire Phone, Bill feels that the technology will soon be ubiquitous. If that doesn’t happen, Body Labs already has an idea about how to extrapolate the necessary data from a regular smartphone camera.

The business model

Creating one or two models will remain completely free for regular folks, and Body Labs might charge a business $250 for a model. But the real revenue is the licensing potential with companies looking to create thousands of scans or integrate the technology into their own software.

So, where has the interest in Body Labs technology been so far?

Bill mentions the U.S. Army, and some of North America’s biggest sports and consumer apparel companies, but they’re not ready to announce any partnerships just yet. Still, there are a lot of places where these avatars could be useful.

Faster fashion

“Most traction initially has been in the apparel industry, on the design side,” explains Bill. “We can take a fit (fitting) model, put them into a CAD (computer-aided design) program and get 80 percent of the initial design, fit, and appearance done virtually, which saves time and money so you can get into stores quicker.”

The traditional process would be to draw a pattern, cut it out in paper, cut the cloth, sew the cloth, put it on the fitting model, and see how it looks. Then the designer has to make changes to the pattern and run through the whole process again. With an accurate, posable 3D representation of the fitting model, designers can test new designs virtually before pulling the trigger on physical samples.

“We can apply mocap (motion capture), see how they move, dress them in a CAD program, put the fit model through a range of poses, see how the garment behaves when they’re sitting, jumping, taking off the jacket.”

Body Labs can also provide sizing studies, or body demographics. It might supply scans of 20 or 30 target customers from five or six target cities, so the fashion designer can see the breadth of sizes they need to design for. While some high-end boutique brands might design clothes for 25-year-olds, their typical customer might actually be a wealthy suburbanite in the 45- to 50-year-old range. This data helps bridge the gap.

Animation, video games, and virtual reality

The appeal of a system like this is obvious for video games and the myriad virtual-reality worlds on the horizon. Imagine being able to scan your own body and play as a realistic avatar, or see yourself as part of an animated movie. Body Labs has already been talking to some game developers and big special-effects houses. But there are a couple of obstacles in this space:

1. Traditional 3D animation is skeleton based

A modeler might create a 3D model, but it is rigged to a skeleton which the animator manipulates. There are advantages to this system in something like video game production, because animations can be applied to a diverse set of models, but it also causes all sorts of accuracy issues: the same skeleton and animation will look very different on a fat body type than on a slim one. Body Labs’ approach gets things like soft-tissue deformation (jiggling, to you and me) exactly right, but it requires a fundamental shift in the way animation is done, from skeletons to shapes.
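To see why skeleton rigging loses accuracy, here is a minimal sketch of linear blend skinning, the standard technique behind skeleton-based animation (this is a generic illustration, not Body Labs’ method). Each vertex of the mesh is moved by a weighted blend of its bones’ rigid transforms, and that blending is exactly what distorts volume around joints — the artifact that varies with body shape:

```python
# Minimal 2D linear blend skinning sketch (illustrative only).
# A vertex is deformed by a weighted blend of each bone's rigid transform.
import math

def rot(theta):
    """2x2 rotation matrix as nested lists."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(m, t, v):
    """Apply rotation matrix m and translation t to vertex v."""
    return (m[0][0] * v[0] + m[0][1] * v[1] + t[0],
            m[1][0] * v[0] + m[1][1] * v[1] + t[1])

def skin(vertex, bones, weights):
    """Blend each bone's transform of the vertex by its skinning weight."""
    x = y = 0.0
    for (m, t), w in zip(bones, weights):
        px, py = apply(m, t, vertex)
        x += w * px
        y += w * py
    return (x, y)

# A vertex near an "elbow", influenced equally by two bones.
upper = (rot(0.0), (0.0, 0.0))           # upper arm stays put
lower = (rot(math.pi / 2), (0.0, 0.0))   # forearm bends 90 degrees
v = skin((1.0, 0.0), [upper, lower], [0.5, 0.5])
# v blends (1, 0) and (0, 1) into roughly (0.5, 0.5) -- shorter than the
# original vertex, so the mesh pinches at the joint. How bad the pinch
# looks depends on the flesh around the joint, i.e. on body shape.
```

Because the blend averages positions rather than following the surface, the same rig produces different-looking errors on different body shapes, which is the accuracy problem described above; a shape-based model animates the surface itself instead.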

2. It doesn’t support real-time rendering

The ability to accurately render your own body in real time inside a virtual-reality environment is desirable, but right now it takes a great deal of processing power and a little time to get results. It’s a definite possibility for the future, but it will require further R&D, and Bill says it’s not high on the development list yet.

These aren’t insurmountable problems, but we might have to wait a while longer to scan ourselves into the latest games or virtual worlds.

Realizing the potential

Body Labs can show you how a body will fit into a space, how it works in the full range of human poses, and even how it might change over time. It’s already looking at the effects of pregnancy and dieting.

“We understand that for this to really take root we need other companies to come and access our capabilities,” Bill acknowledges. “There are a lot of ways of solving these problems, but we think none of them are as optimal as ours. The reality is that almost everything humans have made in the history of the world in some way, shape or form relates to our bodies. We do bodies highly accurately and we do bodies in motion highly accurately. You need a body, you come to us.”