Software modelled on the human brain could soon bring a new level of realism to computer animation.
The program under development by Cambridge-based firm Emotion AI will allow animators to work more like directors, adding to their creations the kind of detail that has previously been too time-consuming to achieve.
Instead of manually altering each image to show movement and expression, animators will give characters instructions and a form of artificial intelligence will automatically generate hundreds of changes to the image.
Existing methods of animation tend to move characters from one emotional state to another without showing the full detail of movement, according to Emotion AI’s founder and chief executive officer, Ian Wilson.
‘With your own face, your muscles are constantly moving all at the same time,’ he told The Engineer. ‘So the way expression works in our characters is not just going from zero to smile – all the muscles are continuously moving independent of each other.’
The software, the product of 10 years’ work, is based on research into computational neuroscience, simulating the way the brain works to produce emotional responses.
‘Computers are basically binary – things are either zero or one, this or that,’ said Wilson. ‘But our behaviour as people is always shifting. There are a whole bunch of different states that we’re partly in all at the same time.’
Like the brain, Emotion AI’s program represents these emotional states with muscle movements in the face. ‘Our software makes it all work together so what you see is someone who is happy to a certain degree. Under the hood the programming is the same but philosophically it’s a very different way of thinking.’
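The idea of a character being happy "to a certain degree" can be pictured as a weighted blend of several partially active emotional states, each driving facial muscles continuously rather than flipping between discrete poses. The following is a minimal illustrative sketch of that principle; the state names, muscle names, and activation values are hypothetical and do not represent Emotion AI's actual software.

```python
# Hypothetical sketch: emotional states as continuous blends, not binary switches.
# Each state maps facial muscles to target activation levels (0.0 to 1.0).
STATE_TARGETS = {
    "happy":    {"zygomaticus": 0.9, "orbicularis_oculi": 0.6, "frontalis": 0.1},
    "surprise": {"zygomaticus": 0.1, "orbicularis_oculi": 0.2, "frontalis": 0.9},
}

def blend_muscles(state_weights):
    """Blend muscle targets across several partially active emotional states.

    state_weights: e.g. {"happy": 0.7, "surprise": 0.3} -- the character is
    70% happy and 30% surprised at the same moment, rather than being in
    exactly one state.
    """
    total = sum(state_weights.values())
    muscles = {}
    for state, weight in state_weights.items():
        for muscle, target in STATE_TARGETS[state].items():
            muscles[muscle] = muscles.get(muscle, 0.0) + target * weight / total
    return muscles

# A character partway between two states yields intermediate muscle activations.
activations = blend_muscles({"happy": 0.7, "surprise": 0.3})
print(activations)
```

Re-evaluating the weights every frame, with each muscle's activation drifting independently of the others, gives the continuous, overlapping movement Wilson describes, rather than a jump from zero to smile.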
Emotion AI has produced one customised project using the software and is now trialling a plug-in tool for the existing animation program Autodesk Maya, in the hope that it will be available for download within six months.
Wilson added that the software would not remove the craft or skill from animating; rather, it would allow designers to bring their creations to life more realistically than ever before.
‘You still need to pull the levers or almost puppeteer the thing so there’s still the element of craftsmanship there – the model will do nothing by itself,’ he said.
‘With other tools it’s almost impossible to create any kind of nuance because it’s so difficult to produce just a basic wooden smile that trying to produce anything that fluctuates 300 times every second is just too much work. What we’re doing is giving you that ability that you never had before.’
The software could also be used to generate more realistic background characters in computer games, or to create expressive avatars for social networking websites.