LONDON: If you love taking selfies, here is something new to experiment with. Researchers at the University of Nottingham in Britain have developed a technology capable of producing a 3D facial reconstruction from a single 2D image - the 3D selfie.

The new web app allows people to upload a single colour image and receive, in a few seconds, a 3D model showing the shape of their face.

Aside from more standard applications, such as face and emotion recognition, this technology - scheduled to be presented at the International Conference on Computer Vision (ICCV) 2017 in Venice in October - could be used to personalise computer games, improve augmented reality and let people virtually try on accessories such as glasses when shopping online.

It could also have medical applications - such as simulating the results of plastic surgery or helping to understand medical conditions such as autism and depression.

The technique was developed using a Convolutional Neural Network (CNN) - a type of machine-learning model, widely used in artificial intelligence (AI), that learns from examples rather than being explicitly programmed.

The research team trained a CNN on a huge dataset of 2D pictures paired with 3D facial models. Trained on this data, the CNN can reconstruct 3D facial geometry from a single 2D image, and can even make a plausible guess at the parts of the face that are not visible in the photo.
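The core idea - a network that takes a flat 2D image in and produces a 3D shape out - can be illustrated with a toy sketch. The code below is not the researchers' network: it uses random, untrained weights and a naive convolution purely to show the shape of the computation, mapping a 2D image to a stack of depth slices (a voxel volume) with per-voxel occupancy scores.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Naive 'valid' single-channel 2D convolution - enough to sketch the idea."""
    h, wd = x.shape
    kh, kw = w.shape
    out = np.zeros((h - kh + 1, wd - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def toy_volumetric_regressor(image, depth_bins=8):
    """Map a 2D image to a 3D occupancy volume.

    The random filters here are stand-ins for weights that, in the real
    system, would be learned from thousands of 2D/3D training pairs.
    """
    feat = np.maximum(conv2d(image, rng.standard_normal((3, 3))), 0)  # conv + ReLU
    # One output map per depth slice: stacking them yields a 3D volume.
    volume = np.stack(
        [conv2d(feat, rng.standard_normal((3, 3))) for _ in range(depth_bins)],
        axis=0,
    )
    return 1.0 / (1.0 + np.exp(-volume))  # sigmoid: per-voxel occupancy in [0, 1]

face = rng.random((32, 32))           # stand-in for a 32x32 grayscale selfie
vol = toy_volumetric_regressor(face)
print(vol.shape)                      # depth x height x width, e.g. (8, 28, 28)
```

In the trained system, each voxel's occupancy score encodes whether that point in space lies inside the face, which is how the network can also fill in regions the camera never saw.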

"The main novelty is in the simplicity of our approach which bypasses the complex pipelines typically used by other techniques. We instead came up with the idea of training a big neural network on 80,000 faces to directly learn to output the 3D facial geometry from a single 2D image," said Georgios (Yorgos) Tzimiropoulos, Assistant Professor in the School of Computer Science.

The technique demonstrates some of the advances possible through deep learning - a form of machine learning that uses artificial neural networks to mimic the way the brain makes connections between pieces of information.