Crowdfunding campaign launched to manufacture assistive listening technology

AudioTelligence is looking to raise £400,000 via crowdfunding to manufacture an initial batch of 1,000 devices that use blind source separation and noise suppression technologies to alleviate hearing difficulties.

Orsana and earbuds - AudioTelligence

The Cambridge company’s assistive listening technology - Orsana - overcomes the so-called 'cocktail party problem': the difficulty of following what is being said to you in a noisy environment. Research by Harris Interactive has found that over 80 per cent of adults aged between 40 and 64 find it difficult to follow conversations in noisy cafes, pubs and restaurants.

Orsana is a small, lightweight device that uses built-in microphones and advanced algorithms to separate speech from noise. To operate it, the user places the device on a table and clear speech is delivered to the accompanying wireless earbuds.

AudioTelligence CEO Sue Handley Jones explained that most assistive listening devices use beamforming, an established technology in which a microphone array picks up sound arriving from a particular direction.
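
For readers unfamiliar with beamforming, the sketch below shows the classic delay-and-sum approach for a linear microphone array: each channel is time-shifted so that sound arriving from the chosen direction adds in phase. It is an illustrative example only, not AudioTelligence's implementation; the function name, array geometry and NumPy-based processing are assumptions made for the demonstration.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, angle_deg, fs, c=343.0):
    """Steer a linear microphone array toward angle_deg (0 = broadside).

    signals:       (n_mics, n_samples) array of time-aligned recordings
    mic_positions: (n_mics,) positions along the array axis, in metres
    Illustrative sketch only - not AudioTelligence's algorithm.
    """
    angle = np.deg2rad(angle_deg)
    # Arrival-time offset at each microphone for a plane wave from angle_deg
    delays = mic_positions * np.sin(angle) / c            # seconds
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    # Compensate each channel's arrival delay with a phase shift, then average
    aligned = spectra * np.exp(2j * np.pi * freqs * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=n)
```

Summing the aligned channels reinforces sound from the steered direction while sound from other directions partially cancels, which is why beamforming on its own captures "all the sound in a certain direction" rather than a particular talker.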

Orsana uses a different approach, starting with Blind Source Separation (BSS) algorithms that separate individual speech sources from one another, and from the background noise, into distinct channels.

“This patented technology uses maths and statistics to process the sound signals arriving at the microphones,” she said. “Apart from its ability to separate sound signals - not just directional sound - the big advantage of this is that it does not need to have any prior training, nor does it need to know the acoustic scene in the room. On its own, BSS can reduce some background noise as well.”
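
As a rough illustration of what blind source separation does, the snippet below unmixes two synthetic signals captured by two "microphones" using FastICA from scikit-learn, a widely used BSS algorithm. This is not AudioTelligence's patented method: real room acoustics produce convolutive (reverberant) mixtures that production systems typically separate per frequency band, whereas this sketch handles the simpler instantaneous-mixing case. All signal and variable names are invented for the example.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 16_000
t = np.arange(0, 2.0, 1 / fs)

# Two synthetic "talkers" plus a little broadband noise, standing in for speech
s1 = np.sign(np.sin(2 * np.pi * 220 * t))                 # talker 1
s2 = np.sin(2 * np.pi * 330 * t + 2 * np.sin(3 * t))      # talker 2
sources = np.c_[s1, s2] + 0.05 * rng.standard_normal((t.size, 2))

# Each microphone hears a different mixture of the two talkers
mixing = np.array([[1.0, 0.6],
                   [0.4, 1.0]])
mics = sources @ mixing.T                                  # (samples, mics)

# Blind source separation: recover the talkers without knowing the mixing matrix
ica = FastICA(n_components=2, random_state=0)
separated = ica.fit_transform(mics)                        # (samples, sources)
```

The key point, echoed in the quote above, is that nothing in the code knows the mixing matrix or the room: the separation relies purely on the statistics of the signals arriving at the microphones.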

Another patented technology, Low Latency Noise Suppression (LLNS), is then applied. Handley Jones explained that many noise suppression technologies exist, but AudioTelligence wrote its own because of the need for low latency.

“Imagine how uncomfortable it would be to use a device like this and have the sound you hear through the earbuds out of sync with the lips of the person speaking,” she said. “Experiments have shown that we can only tolerate latencies of a maximum of 50ms.”
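
To make the latency constraint concrete, the sketch below implements generic overlap-add spectral subtraction - not AudioTelligence's proprietary LLNS. With 256-sample frames at 16kHz, the algorithmic delay from buffering one frame is 256/16,000 = 16ms, comfortably inside the roughly 50ms lip-sync budget Handley Jones describes. The parameter values and the noise-estimation strategy are assumptions chosen for illustration.

```python
import numpy as np

def spectral_subtract(noisy, frame_len=256, hop=128, noise_frames=10):
    """Generic overlap-add spectral subtraction (illustrative only, not LLNS).

    The noise magnitude spectrum is estimated from the first few frames,
    assumed to contain no speech, then subtracted from every frame.
    At 16 kHz, frame_len=256 gives a 16 ms algorithmic delay.
    """
    window = np.hanning(frame_len)
    starts = range(0, len(noisy) - frame_len + 1, hop)
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(noisy[s:s + frame_len] * window))
         for s in list(starts)[:noise_frames]], axis=0)

    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for s in starts:
        spec = np.fft.rfft(noisy[s:s + frame_len] * window)
        # Subtract the noise estimate, keeping a small spectral floor
        mag = np.maximum(np.abs(spec) - noise_mag, 0.05 * np.abs(spec))
        frame = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame_len)
        out[s:s + frame_len] += window * frame
        norm[s:s + frame_len] += window ** 2
    return out / np.maximum(norm, 1e-8)
```

Lower latency means shorter frames, which in turn means coarser frequency resolution - one reason, presumably, why an off-the-shelf suppressor tuned for offline processing would not meet the company's requirements.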

Analysis of conversational dynamics and signal content, coupled with geometric source selection, makes sure the device sends the relevant voice to the earbuds.
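
The article does not detail how that selection logic works, but conceptually it amounts to deciding which of the separated channels currently carries the conversation. The toy selector below simply picks the channel with the most recent speech energy; it is a hypothetical stand-in for Orsana's conversational-dynamics and geometric source selection, not the real algorithm.

```python
import numpy as np

def pick_active_source(channels, fs, window_s=0.5):
    """Toy source selector (hypothetical, for illustration only).

    channels: (n_sources, n_samples) array of already-separated signals.
    Returns the index of the channel with the most energy over the last
    window_s seconds. A real device would add smoothing and hysteresis
    so the selection does not flip mid-word.
    """
    n = int(window_s * fs)
    recent_energy = np.mean(channels[:, -n:] ** 2, axis=1)
    return int(np.argmax(recent_energy))
```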

In tests, Handley Jones said that at -5dB SNR (signal to noise ratio, meaning the background noise carries roughly three times the power of the speech) a top-of-the-range hearing aid improves speech understanding by up to 50 per cent; a competing assistive listening device by up to 80 per cent; and Orsana by up to 98 per cent.

“All of the above processing is done on the device itself – nothing is sent to the cloud, and nothing is recorded so there are no personal data issues, and you don’t need a wi-fi or phone connection to make it work,” she said. “Operation is easy – LEDs show where the sound signals are located. In automatic mode the device will literally follow the conversation as each person speaks. They are touch-sensitive and in focus mode the user presses them to select a source in a particular direction.”

Details of the crowdfunding campaign can be found at: https://igg.me/at/orsana.