Analysis of human expressions could teach androids to smile

New research led by Osaka University, Japan, has examined the mechanisms of human facial expressions to understand how robots can more effectively convey and recognise our emotions.

A robot’s ability to understand and display human emotion has long been a trope of science fiction stories, but new multi-institutional research has begun mapping the intricacies of human facial movements to bring this idea closer to reality.

Researchers attached 125 tracking markers to a person’s face to closely examine 44 different single facial actions, from blinking to raising the corner of the mouth.

Information gathered by this study could help researchers develop and improve artificial faces, both digitally on screens and, ultimately, physically for the faces of android robots. The study aimed to understand the tensions and compressions in human facial structure, in the hope of making these artificial expressions appear more accurate and natural.

The researchers said this work could have applications beyond robotics, such as improved facial recognition or medical diagnosis; the latter currently relies on a doctor’s intuition to notice abnormalities in facial movement.

In a statement, Hisashi Ishihara, lead author of the study, said: “Our faces are so familiar to us that we don’t notice the fine details, but from an engineering perspective, they are amazing information display devices. By looking at people’s facial expressions, we can tell when a smile is hiding sadness, or whether someone’s feeling tired or nervous.”

The study found that every facial expression involves a variety of local deformations as muscles stretch and compress the skin, so even the simplest motions can be surprisingly complex.

Our faces contain a collection of different tissues below the skin, from muscle fibres to fatty adipose tissue, all working simultaneously to convey how we are feeling. This intricate system is what makes facial expressions so subtle and nuanced, and in turn so challenging to replicate artificially.

According to the research team, previous attempts at replication have relied on ‘much simpler’ measurements: the overall shape of the face and the motion of selected points on the skin before and after movements.
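To give a rough sense of what a strain analysis of this kind involves, the sketch below estimates the stretch or compression of a single skin segment from the positions of two tracking markers recorded at rest and during a facial action. The marker coordinates and values are hypothetical, chosen purely for illustration; this is not the study’s actual pipeline.

```python
# Illustrative sketch only: estimating local skin strain from the 3D
# positions of two hypothetical tracking markers, captured at rest and
# during a facial action. Positive strain means the skin stretched;
# negative means it was compressed.
import numpy as np

def segment_strain(rest: np.ndarray, deformed: np.ndarray) -> float:
    """Engineering strain of the skin segment between two markers."""
    length_at_rest = np.linalg.norm(rest[1] - rest[0])
    length_deformed = np.linalg.norm(deformed[1] - deformed[0])
    return (length_deformed - length_at_rest) / length_at_rest

# Hypothetical marker pair near the corner of the mouth (x, y, z in mm)
rest_pair = np.array([[10.0, 42.0, 5.0], [14.0, 40.0, 5.5]])
smile_pair = np.array([[10.5, 43.5, 5.2], [15.5, 42.5, 6.0]])

print(f"local strain: {segment_strain(rest_pair, smile_pair):+.2%}")
```

Repeating such a calculation over many marker pairs across the face would, in principle, build up the kind of strain-distribution map the paper describes.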

“The facial structure beneath our skin is complex,” said Akihiro Nakatani, senior author. “The deformation analysis in this study could explain how sophisticated expressions, which comprise both stretched and compressed skin, can result from deceivingly simple facial actions.”

The study has only examined the face of one person so far, but the researchers hope to extend it to more faces to gain a fuller understanding of human facial motion.

The report, ‘Visualization and analysis of skin strain distribution in various human facial actions’, is published in the Mechanical Engineering Journal and can be read in full online.