FREQUENTLY ASKED QUESTIONS

3D avatar from a photograph -- how is that even possible?

Isn't that super cool? Our secret sauce combines AI trained on faces, image analysis, and a deep understanding of how to represent facial motion for digital characters like Shrek and the Hulk. There is some patent-pending magic in there.

Is this similar to getting a 3D scan done?

Much more useful and powerful: we achieve visual fidelity similar to a scan, yet the result is fully animatable in 3D. We also factor the lighting out of the photograph, which means your 3D avatar can be immersed in new environments with different lighting. We are talking about bringing your avatar to life with an AAA-grade facial musculature rig that has been adapted to fit your face. This opens the door to synthesizing new animation and dynamic interactions in 3D virtual worlds.

How can I make my 3D result look as good as the results on the page?

It's a really challenging problem, and the technology is still in beta. As VFX artists, we really care about the visual quality of your avatars, and we are constantly improving the perceptual metrics used to automatically reconstruct your face from the photograph. For best results:

Tie longer hair back and remove any accessories, like hats and glasses.

Take photos indoors in an evenly lit space (no shadows or backlighting). Use a flash or diffuser if needed. Results look especially good when there are no shadows over your eyes.

Image resolution is not a big issue as long as your face occupies a reasonable portion of the image; a 1K selfie provides plenty of resolution.

How do you use gender and ethnicity data?

Gender and ethnicity classification helps resolve geometric ambiguities and the ratios of facial features, which is useful when fitting the 3D model to the photograph.

Tips for best quality viewing on a mobile phone

Textures on a phone default to low definition (LD); if the gear icon at the bottom right shows LD, switch it to HD under Textures to see the full-resolution textures.

Can I change my hairstyle and eye color?

Yes, though this isn't exposed in the demo yet. Remember that your avatar is in 3D, which is the perfect format to start stylizing. This is exactly what we did with characters in 3D animated movies, where we kept a library of hairstyles, glasses, and accessories that could be adapted to each unique face shape.

Can I personalize my avatar's smile with additional photographs?

You are a step ahead of us, but that's on our technology roadmap. This avatar is going to be your evolving visual identity, and with more input we can start training your custom facial rig to learn your expressions.

How long does the processing take?

All processing happens in the cloud through our API. It takes about a minute to build the avatar model and another minute to build the personalized facial rig with animation. We use Sketchfab to view the results, and it takes about five minutes to upload to their site, where we share it with you. So far we have optimized for functionality over speed.
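The FAQ does not document the API itself, so the function and stage names below are hypothetical. This is just a minimal sketch of how a client might poll a cloud pipeline with the stage timings described above (model build, rig build, Sketchfab upload):

```python
import time

def wait_for_avatar(get_status, poll_interval=5.0, timeout=600.0, sleep=time.sleep):
    """Poll a status callable until the avatar pipeline reports completion.

    get_status() is assumed (hypothetically) to return one of the stages
    implied by the FAQ: 'building_model' (~1 min), 'building_rig' (~1 min),
    'uploading' (~5 min to Sketchfab), or 'done'.
    Returns True on completion, False if the timeout is reached first.
    """
    elapsed = 0.0
    while elapsed < timeout:
        if get_status() == "done":
            return True
        sleep(poll_interval)
        elapsed += poll_interval
    return False
```

Passing `get_status` and `sleep` as parameters keeps the sketch testable without a real backend; a real client would wrap an HTTP call to the (undocumented) status endpoint.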

When can I get myself and my friends into my favorite 3D game or VR experience?

We are actively looking to license our technology to clients in games, VR and messaging. Please reach out to us if you are interested in our API or have ideas on specific games or virtual environments that you'd love to see this tech deployed in. Contact us at: info@loomai.com

What about the body?

In the current demo we use just one male and one female body, but our plan is to use statistical methods to make the best guess at body shape from some basic information supplied by the user. We could also take the body shape from the specific game or VR experience.

How do I view these results in VR?

You can do this with Sketchfab's VR feature. It is still experimental WebVR technology, but you can get it running by following the instructions at the links below: you will need a Chromium build that supports WebVR, then hit the VR icon at the bottom right of the Sketchfab window.
https://sketchfab.com/virtual-reality
https://webvr.info/get-chrome/