US-developed technology translates thoughts directly into speech

Researchers at Columbia University in New York have created a system that translates thoughts directly into recognisable speech.

The neuroengineers behind the system, from the university's Zuckerman Institute, claim that it marks a major step towards the development of brain-computer interfaces for patients with limited or no ability to speak, such as those living with motor neurone disease or recovering from a stroke.

Research has shown that when people speak - or even imagine speaking - telltale patterns of activity appear in their brains. Distinct but recognisable patterns of signals also emerge when we listen to someone speak, or imagine listening.

Early efforts by the Columbia team to decode these brain signals and translate them into words focused on simple computer models that analysed spectrograms, which are visual representations of sound frequencies. However, this approach failed to produce anything resembling intelligible speech, so the team - led by Dr Nima Mesgarani - turned instead to a vocoder, a computer algorithm that can synthesise speech after being trained on recordings of people talking.

"This is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions," said Dr Mesgarani.
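For readers unfamiliar with the term, a spectrogram can be computed in a few lines of code. The minimal Python sketch below is purely illustrative and is not the Columbia team's model: it converts a synthetic one-second tone (standing in for recorded speech) into the kind of time-frequency representation the article describes. The sample rate and scipy routine are assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 16_000                           # assumed sample rate in Hz
t = np.arange(fs) / fs                # one second of sample times
audio = np.sin(2 * np.pi * 440 * t)   # synthetic 440 Hz tone as stand-in for speech

# Sxx[f, t] holds the signal's power at frequency bin f and time frame t -
# the "visual representation of sound frequencies" mentioned above.
freqs, times, Sxx = spectrogram(audio, fs=fs, nperseg=512)
print(Sxx.shape)  # (frequency bins, time frames)
```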
