Being yourself in VR

If we were going to build a people-first platform, we had to focus our efforts on understanding what makes you feel truly present with someone else in VR. Very early on, we decided to build for the Oculus Touch controllers, which at the time provided the highest-fidelity VR experience. We were particularly interested in exploring what hands and hand gestures could add in a social context. Our early research showed that people could distinguish their significant other from other people in VR by hand motion alone.

The first challenge was to figure out how to represent people in VR. While the first instinct is to represent people exactly as they are in real life, in very high fidelity, we had to find a way to avoid falling into the uncanny valley.

We played around with different types of avatar systems and different visual treatments. A lot of explorations that seemed interesting in theory turned out to be disturbing or disappointing in VR.

One of our first explorations that ended up being successful and delightful was what we called the Rabbitars: very cute and expressive rabbits with flopping ears. With the Rabbitars, we learned the importance of a torso and arms in making you feel like the avatar next to you was present in the space. Since only the head and hands were tracked by the hardware, we used a technique called inverse kinematics to predict where the arms and torso would be. While this technique doesn’t work very well when applied to your own body, we found that it works surprisingly well when looking at other people in a social context. Another really positive aspect of those avatars was how expressive their flat-textured eyes were.
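The article doesn’t show the actual solver, but the core of the arm trick can be illustrated with textbook analytic two-bone inverse kinematics: given an (estimated) shoulder position, a tracked wrist position, and fixed segment lengths, the elbow position follows from the law of cosines. The sketch below is a minimal 2D illustration of the general technique, not the product’s implementation; real systems work in 3D and pick the elbow’s bend plane from additional hints such as head orientation.

```python
import math

def two_bone_ik(shoulder, wrist, upper_len, lower_len):
    """Analytic two-bone IK in 2D: return an elbow position consistent
    with the two segment lengths, given shoulder and wrist positions.
    Illustrative sketch only, not a shipping solver."""
    dx, dy = wrist[0] - shoulder[0], wrist[1] - shoulder[1]
    dist = math.hypot(dx, dy)
    # Clamp so the target is always reachable without the arm locking straight.
    dist = max(abs(upper_len - lower_len) + 1e-6,
               min(dist, upper_len + lower_len - 1e-6))
    # Law of cosines: angle at the shoulder between the shoulder->wrist
    # line and the upper arm.
    cos_a = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    angle = math.acos(max(-1.0, min(1.0, cos_a)))
    base = math.atan2(dy, dx)
    # Two mirror solutions exist; pick the "elbow down" one.
    ex = shoulder[0] + upper_len * math.cos(base - angle)
    ey = shoulder[1] + upper_len * math.sin(base - angle)
    return (ex, ey)
```

Because an observer only sees the result from the outside, small errors in the predicted elbow are far less noticeable than they would be on your own body, which matches the observation above.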

On the other hand, while very charming, those rabbits were far from ideal for having a conversation with friends. The moment you were in a space with multiple people, you were constantly looking around to figure out who was talking, and it was hard to remember who was who. You were continually reminded that you were not really looking at your friends, just at an abstract representation of them.

A few of our many avatar explorations

With every iteration, we learned new things that informed the next one. Here are some of the things we learned along the way:

To achieve the feeling of being present with your friends in VR, you need to be able to bring some elements of your real-world identity.

Attaching a body and arms to the head and hands makes your avatar feel more realistic and gives it presence and volume in the space. Even though the current generation of VR headsets doesn’t track the body and arms, generating them based on the head and hand data goes a long way.

Adding motion to certain elements (hair, for instance) when a person moves gives the avatar more life. For example, your glasses can move up and down on your nose when you shake your head.
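A common way to get this kind of secondary motion is to attach the accessory to the head through a damped spring, so it lags slightly and settles after each head movement. The sketch below shows the generic technique, not necessarily what the product used; the stiffness and damping constants are illustrative, not tuned shipping values.

```python
def spring_follow(current, velocity, target, dt, stiffness=80.0, damping=12.0):
    """One semi-implicit Euler step of a damped spring. Driving an
    accessory (glasses, hair) toward the head's attachment point this
    way makes it overshoot and settle, which reads as life-like."""
    accel = stiffness * (target - current) - damping * velocity
    velocity += accel * dt
    current += velocity * dt
    return current, velocity
```

Run per frame with `target` set to where the accessory would sit if rigidly attached; the spring supplies the bobbing for free whenever the head moves.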

For eyes, 3D eyeballs are a lot of work, and they were not as charming as we thought they would be. We found that 2D eyes were easier to design for emotes (e.g. laughing) and more inviting and less intimidating.

When staring at an object or person, our eyes rarely stay still and wide open for more than a few seconds. Adding a fake “look away” intermission and some fake blinking helps make eyes feel alive.

In the end, we landed on very stylized avatars that act as a cartoon representation of yourself. Charming and inviting, they work really well within today’s constraints of VR systems and allow expressiveness and self-representation without falling into the uncanny valley. This style also means the avatars are quite flattering and tend to be more of an aspirational version of who you are.

The evolution of our avatars between Oculus Connect 3 in October 2016 and F8 in April 2017

Creating your avatar

Alongside figuring out what our avatars would look like, we had to design how people would create their own. Immediately, our instinct was to look for ideas that would spare people a step-by-step, pick-one-facial-feature-after-another interaction. Too many games and experiences force you to spend 20 minutes painfully creating an avatar before you even know how you’re going to use it. And in VR there’s an extra difficulty: you have to remember what you look like in real life, because you have a headset on your face.

Thankfully, the advancements in AI over the last few years made it possible for us to build an experience that automatically creates an avatar for you from one of your Facebook photos. This makes the process of creating your avatar blazing fast and fun.
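The underlying algorithm isn’t described here, but the general shape of photo-to-avatar matching can be sketched as: extract numeric facial attributes from the photo, then snap each one to the closest preset asset in the avatar system. Everything below (feature names, values, the catalog structure) is hypothetical, purely to illustrate the matching step; the real pipeline is certainly far more sophisticated.

```python
def pick_presets(attributes, catalog):
    """Toy sketch of photo-to-avatar matching. 'attributes' maps each
    facial feature to a measurement extracted from a photo (hypothetical
    values); 'catalog' maps each feature to named presets with their own
    measurements. We pick the closest preset per feature."""
    avatar = {}
    for feature, value in attributes.items():
        options = catalog[feature]
        avatar[feature] = min(options, key=lambda name: abs(options[name] - value))
    return avatar
```

However the attributes are actually extracted, reducing creation to "closest preset per feature" is what lets the avatar appear instantly instead of after 20 minutes of manual picking.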

Example of an avatar generated out of a profile photo

The moment we started testing our algorithm, we realized it was going to play a big role in the success of our product. The experience of creating your virtual self suddenly felt simple, fun, and magical instead of clunky and time-consuming.

Because so much of our understanding of what we look like and how others perceive us comes from looking at ourselves in the mirror, we designed this experience around a virtual mirror in which people can discover their new virtual face and customize it. The floating photos around the mirror serve as a reference point as people tweak their avatar, but they can also be picked up to generate an avatar automatically from the selected photo.

To edit, people can select a facial feature they’d like to change by simply tapping it on their reflection, and a set of alternative options appears on the mirror for them to choose from.

For special items like clothing and glasses, we came up with the concept of a virtual magazine: people pick something they’d like to wear, and it’s automatically applied to their avatar.

The simplicity of this flow, combined with the expressive, friendly look of the avatars, made creating your avatar feel fun and magical.