Emotional-speech analysis

Researchers at Dublin Institute of Technology’s Digital Media Centre (DMC) are developing technology that recognises emotion conveyed in a person’s speech.

The DMC’s Emovere project aims to identify the most appropriate acoustic characteristics to use in emotional-speech analysis and investigate advanced machine learning techniques for the recognition of emotion in speech.

It is anticipated the technology could be used in applications as diverse as animation and automated telephone helplines.

Researchers Sarah Jane Delany and Charlie Cullen will spend four years processing recordings from volunteers who will verbally express emotions such as anger, sadness and elation.

Cullen said they will focus on the shape and rhythm of the recordings’ acoustic signals through techniques inspired by music.

Their methods differ from those of other groups that have studied emotional-speech recognition, he said. While others have considered the shapes of signals, no one has yet developed a way to classify them.

‘We take an approach similar to what is done in music,’ he said. ‘We only take four points: the first, last, highest and lowest. That gives us the discrete shape for the speech track. We then compare several hundred of these shapes so we can delineate between different emotions.’
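The four-point idea Cullen describes can be sketched in a few lines. The following is a minimal, illustrative Python sketch, not the Emovere project's actual code: it assumes a pitch contour has already been extracted as a list of numbers, and the function names and the Euclidean distance used to compare shapes are our own assumptions.

```python
# Illustrative sketch of reducing a speech contour to four points
# (first, last, highest, lowest) and comparing the resulting shapes.
# Function names and the distance metric are assumptions, not from the project.

def shape_signature(contour):
    """Reduce a contour to its first, last, highest and lowest values."""
    return (contour[0], contour[-1], max(contour), min(contour))

def shape_distance(a, b):
    """Euclidean distance between two four-point signatures."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Example: a rising contour and a falling one give distinct signatures,
# even though they contain the same values.
rising = [100.0, 120.0, 150.0, 180.0]
falling = [180.0, 150.0, 120.0, 100.0]

sig_rising = shape_signature(rising)    # (100.0, 180.0, 180.0, 100.0)
sig_falling = shape_signature(falling)  # (180.0, 100.0, 180.0, 100.0)
```

Comparing several hundred such signatures, as Cullen describes, would then reduce to measuring distances between these compact four-point shapes rather than between full-length signals.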
