Thursday, 18 September 2014

Facial analysis method heralds era of ultra-realistic animations

A new method of analysing faces on 3D video could help the film industry create more realistic animations without actors having to wear distracting markers.

Engineers at Bath University have developed a way of computer-modelling facial expressions — initially for psychology experiments — that could lead to identity-recognition systems or make it easier to animate people in films and computer games.

The ‘dynamic facial capture’ technique uses computer algorithms to track individual pixels from one frame to the next in footage captured with a 3D depth camera, by treating the 3D frames as 2D images.

This allows the computer to build a model of the person’s moving face without the use of physical or painted markers, which are typically used in current facial-capture techniques in the film and computer game industries.

Dr Darren Cosker, who is leading the research funded by a Royal Academy of Engineering (RAE) Fellowship, said that with sufficiently advanced cameras, the model should be able to track individual skin pores on a person’s face.

‘The hardest challenge in computer graphics is creating characters that are indistinguishable from humans,’ said Cosker, speaking at the annual RAE Engineering Research Forum last week.

‘There are three aspects to this problem. You want to be able to create a character that looks like a photograph. As soon as that character’s face starts to move, you want the dynamics of the face to be right, which is a lot harder.

‘And the third problem is we want the perception of the face to be right, so when a character is smiling, we want to feel that is a genuine smile. [Currently] with human characters that illusion of realism starts to break down.’

Expressions

Because the program can track very subtle facial movements, the researchers have also used it to develop a way of identifying individuals from their expressions, and of telling whether those expressions are genuine, both of which could be useful in security systems.

The computer model treats the 3D image of the face as a 2D one by projecting it onto a cylinder and then ‘unwrapping’ it, in a similar way to how world maps are drawn.
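To make the projection concrete, the following is a minimal sketch of a cylindrical ‘unwrap’ in Python and NumPy. It assumes the face arrives as a head-centred 3D point cloud from the depth camera; the choice of axis, the image resolution and the use of radial distance as the pixel value are illustrative assumptions, not details of the Bath system.

```python
import numpy as np

def unwrap_to_cylinder(points, height_px=256, width_px=256):
    """Project a head-centred 3D point cloud (N x 3) onto a vertical cylinder
    and 'unwrap' it into a 2D image, much as a globe is flattened into a map.

    The y-axis is assumed to run vertically through the head: the angle
    around that axis becomes the horizontal pixel coordinate, height becomes
    the vertical one, and each pixel stores radial distance (skin relief).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    theta = np.arctan2(x, z)           # angle around the vertical axis, -pi..pi
    radius = np.sqrt(x ** 2 + z ** 2)  # distance from the axis

    # Map angle and height onto pixel coordinates.
    u = ((theta + np.pi) / (2 * np.pi) * (width_px - 1)).astype(int)
    v = ((y - y.min()) / (np.ptp(y) + 1e-9) * (height_px - 1)).astype(int)

    image = np.zeros((height_px, width_px))
    image[v, u] = radius               # each 3D point lands on one 2D pixel
    return image
```

Once unwrapped, each frame is an ordinary 2D image, so standard 2D tracking techniques can be applied to it.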

Different pixels correspond to vertices on a triangular mesh placed across the face in each frame. Following the pixels allows the computer to model how the mesh moves and identify facial expressions based on a method used by psychologists called the Facial Action Coding System (FACS).
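The frame-to-frame tracking itself can be pictured with off-the-shelf tools. The sketch below follows mesh-vertex pixels through a sequence of unwrapped frames using OpenCV’s pyramidal Lucas-Kanade optical flow; the choice of tracker and the simple displacement measure are assumptions made for illustration rather than the researchers’ actual algorithm, and the mapping from displacements to FACS action units is left as a placeholder.

```python
import numpy as np
import cv2  # OpenCV, used here only to illustrate frame-to-frame tracking

def track_mesh_vertices(frames, vertices_2d):
    """Follow mesh-vertex pixels through a sequence of unwrapped face images.

    frames      -- list of 8-bit greyscale images (the unwrapped frames)
    vertices_2d -- (N, 2) float32 array of vertex pixel positions in frame 0
    Returns one (N, 2) array of tracked positions per frame.
    """
    positions = [np.asarray(vertices_2d, dtype=np.float32).reshape(-1, 1, 2)]
    for prev, curr in zip(frames[:-1], frames[1:]):
        # Pyramidal Lucas-Kanade optical flow follows each vertex pixel
        # from the previous frame into the current one.
        next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev, curr, positions[-1], None)
        positions.append(next_pts)
    return [p.reshape(-1, 2) for p in positions]

def vertex_displacements(tracked):
    """Net per-vertex movement between the first and last frame. Movements
    concentrated around particular regions (brows, mouth corners) are the
    kind of signal a FACS-style coding of action units would build on."""
    return tracked[-1] - tracked[0]
```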

Engineers at Disney Research in Zurich have developed a similar system using high-resolution cameras that capture more detail but produce very large amounts of data. Cosker hopes that when his system is scaled up to use better cameras, the data will remain more manageable.

Realism

Other commercial systems that don’t require markers or painted patterns do exist, but they currently don’t provide as much realism because an animator is still required to fill in much of the detail.

Oliver James, chief scientist at visual effects company Double Negative, said better facial-capture techniques would give animators a better set of tools to use but wouldn’t replace them.

‘Where we use motion capture it’s the first part of the animation,’ he said. ‘Shots that would have taken a week can now take half a day. It can get animators closer to the final result and then let them spend their effort on making it that bit more special.’

A major problem in facial capture still to be solved is the use of cameras outside of a controlled studio environment, where lighting can change from shot to shot. Cosker said he thought new cameras were needed to address this issue.

‘I think we rely too much sometimes on just trying to make an algorithm to do something but a lot of the time the hardware gets good enough. I have an idea that involves building a special camera which is lots of cameras put together.’

