Their web application allows users to upload a single colour image and transforms it into a 3D reconstruction that shows the physical shape of the face. It works using a Convolutional Neural Network (CNN) - a machine-learning technique trained on a huge dataset of 2D pictures paired with 3D facial models. As well as reconstructing the visible 3D facial geometry, the CNN can also predict the non-visible parts of the face.
"Our CNN uses just a single 2D facial image, and works for arbitrary facial poses [front or profile images] and facial expressions [smiling]," said Nottingham PhD student Aaron Jackson, the paper’s lead author.
According to the team, current techniques for creating a 3D representation require multiple facial images, and face challenges such as establishing dense correspondences across large facial poses, expressions and non-uniform illumination. By applying neural networks, the Nottingham researchers believe they have found a more straightforward solution to these complex reconstruction problems.
"The main novelty is in the simplicity of our approach which bypasses the complex pipelines typically used by other techniques," said research supervisor Dr Yorgos Tzimiropoulos. "We instead came up with the idea of training a big neural network on 80,000 faces to directly learn to output the 3D facial geometry from a single 2D image."
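The "direct volumetric regression" in the paper's title refers to the network's output representation: rather than predicting parameters of a face model, the CNN predicts, for every voxel in a 3D grid aligned with the input image, the probability that it lies inside the face, and the geometry is recovered by thresholding that volume. The sketch below illustrates only this output representation in plain Python; the grid size, threshold and the toy sphere standing in for a network prediction are illustrative assumptions, not values from the paper.

```python
# Sketch of the volumetric output used in direct volumetric CNN
# regression: a 3D occupancy grid, thresholded to recover geometry.
# Grid size and threshold are illustrative, not the paper's values.

def make_toy_volume(n=16, radius=5.0):
    """Stand-in for a CNN's predicted occupancy volume: a hard sphere."""
    c = (n - 1) / 2.0  # grid centre
    vol = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for z in range(n):
        for y in range(n):
            for x in range(n):
                d = ((x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2) ** 0.5
                vol[z][y][x] = 1.0 if d <= radius else 0.0
    return vol

def occupied_voxels(vol, threshold=0.5):
    """Threshold the predicted volume into a set of occupied voxel coords."""
    return {(x, y, z)
            for z, plane in enumerate(vol)
            for y, row in enumerate(plane)
            for x, v in enumerate(row) if v > threshold}

vol = make_toy_volume()
voxels = occupied_voxels(vol)
print(len(voxels))  # count of voxels inside the reconstructed shape
```

In the actual system the volume comes from a trained network rather than a formula, and the thresholded voxels are converted to a surface mesh for display, but the principle of regressing occupancy directly from a single image is the same.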
As well as facial and emotional recognition applications, the 3D selfie software could be used to simulate the results of plastic surgery, or assist medical professionals in understanding conditions like autism. The technology could also help improve augmented and virtual reality and has potential for character personalisation within computer games.
The team’s research paper, 'Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression', will be presented next month at the International Conference on Computer Vision (ICCV) in Venice.