Brain activity decoded to deliver synthetic speech
Neuroscientists in the US have developed a virtual vocal tract that produces accurate synthetic speech using decoded brain activity, opening the door for future speech prosthetics.
People who have lost speech due to stroke, brain injury or neurological disease currently rely on painfully slow synthesisers that track eye movements to spell out words. Perhaps most famously associated with the late Stephen Hawking, these technologies are limited to around 10 words per minute; by comparison, natural speech runs in the region of 100-150 words per minute.
The researchers, based at UC San Francisco (UCSF), established in a previous study that the brain’s speech centres encode movements of the lips, jaw and tongue rather than direct acoustic information. For the latest work, they first recorded patients reading sentences aloud while logging the corresponding brain activity. Using linguistic principles, they then reverse-engineered the vocal-tract movements required to produce those sounds and mapped them to the brain activity associated with them. This allowed the team to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. The work, published in Nature, could pave the way for devices that replicate natural speech in real time.
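The pipeline the researchers describe decodes in two stages: brain activity is first mapped to articulatory movements, and those movements are then mapped to sound. The sketch below illustrates that two-stage structure on toy data; the electrode counts, feature dimensions and simple ridge-regression models are illustrative assumptions only, not the study's actual architecture (the paper used recurrent neural networks).

```python
import numpy as np

# A minimal sketch of two-stage speech decoding, on synthetic data.
# Stage 1: neural activity -> articulatory kinematics (lips, jaw, tongue).
# Stage 2: kinematics -> acoustic features for a speech synthesiser.
# Linear maps stand in for the study's neural-network decoders.

rng = np.random.default_rng(0)

T, n_electrodes, n_kinematic, n_acoustic = 500, 256, 33, 32  # hypothetical sizes

# Hypothetical time-aligned training data gathered while a patient reads aloud.
neural = rng.standard_normal((T, n_electrodes))      # cortical recordings
kinematics = rng.standard_normal((T, n_kinematic))   # inferred vocal-tract movements
acoustics = rng.standard_normal((T, n_acoustic))     # spectral features of the audio

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

W_brain_to_artic = ridge_fit(neural, kinematics)      # stage 1
W_artic_to_sound = ridge_fit(kinematics, acoustics)   # stage 2

# At decoding time, brain activity alone drives the virtual vocal tract.
new_neural = rng.standard_normal((10, n_electrodes))
decoded_kinematics = new_neural @ W_brain_to_artic
decoded_acoustics = decoded_kinematics @ W_artic_to_sound
print(decoded_acoustics.shape)  # (10, 32) acoustic frames, ready to vocode into audio
```

Splitting the problem in two mirrors the study's earlier finding: because the speech centres encode movement rather than sound, the intermediate articulatory representation is a more natural decoding target than raw acoustics.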