No need to shout

Scientists at the University of Illinois have signed a licensing agreement to commercialise an intelligent hearing aid that is said to replicate the human brain's ability to spatially separate and process sounds.

Scientists at the University of Illinois recently signed a licensing agreement with Phonak Inc, a manufacturer of technologically advanced hearing aids, to commercialise an intelligent hearing aid system.

The new hearing-aid technology is said to spatially separate and process sounds much as the human brain does. A key feature of the new system is its ability to integrate signals from each ear so that a listener can focus on a desired voice while cancelling out background noise.

‘Today’s state-of-the-art hearing aids can select a voice in a crowd by applying highly directive microphones,’ said Albert Feng, a UI professor of molecular and integrative physiology. ‘However, these devices cannot effectively differentiate between background noise and the desired conversation when the sources are in close proximity, causing confusion in noisy environments.’

The intelligent hearing aid prototype consists of a pair of miniature microphones, a processor, two earpieces and an amplifier. The hub of the system is a binaurally based Intelligent Auditory Processor, which is said to filter sounds and only transmit the desired voice to the amplifier. The processor works by comparing signals from the microphones and detecting subtle differences in their time of arrival.
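The article does not disclose how the processor's comparison is implemented. As a rough sketch of the general idea, the time-of-arrival difference between two microphone channels can be estimated with a cross-correlation; the function and signal names below are hypothetical, not taken from the UI design:

```python
import numpy as np

def estimate_delay(left, right, fs):
    """Estimate the time-of-arrival difference between two microphone
    signals via cross-correlation. A negative result means `left`
    leads `right` (the source is nearer the left microphone)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # peak position -> lag in samples
    return lag / fs                           # convert to seconds

# Simulate a broadband source reaching the left microphone 5 samples
# before the right one.
fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(1024)
left = src
right = np.concatenate([np.zeros(5), src])[:len(src)]  # delayed copy

print(estimate_delay(left, right, fs))  # about -0.0003 s (5 samples early)
```

At a 16 kHz sampling rate, one sample of lag corresponds to roughly 62 microseconds, which is the order of magnitude of real interaural time differences.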

‘Normal hearing exploits the fact that we have two ears,’ said Doug Jones, a UI professor of electrical and computer engineering. ‘Our brains utilise both the time of arrival and the intensity of impinging sound waves to perform spatial processing and filtering. This allows us to focus our attention in the direction of the desired sounds and ignore the rest.’

To perform this process artificially, the researchers developed an algorithm capable of extracting the desired speech signal in the presence of multiple interfering sounds. Using the algorithm, they built a prototype that was able to work in real time in noisy environments.

‘The algorithm mimics the biological system to perform auditory scene analysis and to reproduce how the brain selects and filters information,’ said Jones. ‘By pointing the microphones at the desired source, we can capture the intended signal and filter out all others.’
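The researchers' actual algorithm is not published in the article. A standard stand-in that illustrates the same principle is delay-and-sum beamforming: signals steered toward the target add coherently, while an interferer arriving from another direction adds incoherently and is attenuated. The sketch below makes simplifying assumptions (two microphones, white signals, integer-sample delays):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
target = rng.standard_normal(n)   # desired voice (arrives in phase at both mics)
noise = rng.standard_normal(n)    # interfering talker off to one side

def delayed(x, d):
    """Delay signal x by d samples (zero-padded at the start)."""
    return np.concatenate([np.zeros(d), x])[:len(x)]

# Two-microphone mixture: the interferer reaches the right mic 6 samples late.
left = target + noise
right = target + delayed(noise, 6)

# Steer at the target (zero relative delay) and average the channels:
# the target adds coherently, the interferer only partially cancels.
output = 0.5 * (left + right)

def snr_db(signal, interference):
    return 10 * np.log10(np.sum(signal**2) / np.sum(interference**2))

snr_in = snr_db(target, noise)                              # per-mic input SNR
snr_out = snr_db(target, 0.5 * (noise + delayed(noise, 6))) # residual interference
print(snr_out - snr_in)  # roughly +3 dB for two mics and uncorrelated noise
```

Two microphones give only a modest improvement; the gain grows with more microphones, and adaptive schemes can do considerably better, which is presumably where the UI algorithm's sophistication lies.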

Phonak engineers and UI researchers are now working to package the prototype into a miniature, self-contained system.