Speech prosthetic turns thoughts into words

A speech prosthetic developed by engineers, neuroscientists, and neurosurgeons translates a person’s brain signals into what they are trying to say.

Compared with current speech prosthetics carrying 128 electrodes (left), the new device developed by Duke engineers accommodates twice as many sensors in a significantly smaller footprint - Dan Vahaba/Duke University

This is the claim of a team at Duke University in the US who believe the device could eventually help people unable to talk due to neurological disorders regain the ability to communicate through a brain-computer interface. The work is detailed in Nature Communications.

“There are many patients who suffer from debilitating motor disorders, like ALS [amyotrophic lateral sclerosis] or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, PhD, a professor of neurology at Duke University’s School of Medicine and one of the lead researchers involved in the project. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”

Current neuroprostheses decode vocabulary at about 78 words per minute, but people tend to speak at around 150 words per minute.

The lag between spoken and decoded speech rates is partly due to the relatively few brain-activity sensors that can be fused onto a paper-thin piece of material that lies on the surface of the brain. Fewer sensors provide less decipherable information to decode.

To improve on past limitations, Cogan teamed up with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, whose biomedical engineering lab specialises in making high-density, ultra-thin, and flexible brain sensors.

For this project, Viventi and his team placed 256 brain sensors onto a postage stamp-sized piece of flexible, medical-grade plastic. According to Duke, neurons can have markedly different activity patterns when coordinating speech, so it is necessary to distinguish signals from neighbouring brain cells to help make accurate predictions about intended speech.


After fabricating the implant, Cogan and Viventi teamed up with Duke University Hospital neurosurgeons who helped recruit four patients to test the implants. The experiment required the researchers to place the device temporarily in patients who were undergoing brain surgery for some other condition, such as treating Parkinson’s disease or having a tumour removed.

The task was a simple listen-and-repeat activity. Participants heard a series of nonsense words, like “ava,” “kug,” or “vip,” and then spoke each one aloud. The device recorded activity from each patient’s speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw, and larynx.

Afterwards, Suseendrakumar Duraivel, the first author of the new report and a biomedical engineering graduate student at Duke, took the neural and speech data from the surgery suite and fed it into a machine learning algorithm to see how accurately it could predict what sound was being made, based only on the brain activity recordings.
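
The article does not detail the model Duraivel used, but the general recipe, predicting which sound was spoken from a window of multi-electrode activity, can be framed as a standard supervised classification problem. The snippet below is a minimal illustration under that assumption; the data loader, feature layout, and choice of classifier are hypothetical stand-ins, not the Duke pipeline.

```python
# Illustrative sketch only: a generic phoneme decoder trained on electrode
# features. The feature extraction, model choice, and the helper below
# (load_trials) are assumptions for illustration, not the study's methods.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def load_trials():
    """Hypothetical loader: returns (X, y) where each row of X is a
    flattened window of activity from 256 electrodes around one spoken
    sound, and y is the phoneme label (e.g. 'g', 'a', 'k')."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 256 * 5))   # 300 trials, 5 time bins per electrode
    y = rng.choice(list("gakvip"), size=300)
    return X, y

X, y = load_trials()

# Standardise the features, then fit a multi-class classifier; cross-validated
# accuracy is analogous to asking how often the decoder's predicted sound
# matches the sound the participant actually spoke.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```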

For some sounds and participants, like /g/ in the word “gak,” the decoder was correct 84 per cent of the time when it was the first sound in a string of three that made up a given nonsense word.

Accuracy dropped as the decoder parsed out sounds in the middle or at the end of a nonsense word, and it also had difficulties when two sounds were similar, like /p/ and /b/.

Overall, the decoder was accurate 40 per cent of the time, which is encouraging because similar brain-to-speech feats typically require hours' or days' worth of data to draw from. The speech decoding algorithm Duraivel used worked with just 90 seconds of spoken data from the 15-minute test.
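
That trade-off between accuracy and the amount of training data is the kind of question a learning curve is meant to answer: how does decoding accuracy change as more training trials are added? The sketch below illustrates the idea with stand-in data and a stand-in model; it is not the evaluation used in the study.

```python
# Illustrative sketch only: measuring how decoding accuracy scales with the
# amount of training data. The data and model here are random stand-ins.
import numpy as np
from sklearn.model_selection import learning_curve
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 256 * 5))        # stand-in neural features
y = rng.choice(list("gakvip"), size=300)   # stand-in phoneme labels

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# learning_curve refits the model on progressively larger subsets of the
# training data and reports cross-validated accuracy at each size.
sizes, _, test_scores = learning_curve(
    model, X, y, train_sizes=np.linspace(0.2, 1.0, 5), cv=5
)
for n, acc in zip(sizes, test_scores.mean(axis=1)):
    print(f"{n:4d} training trials -> decoding accuracy {acc:.2f}")
```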

A recent $2.4m grant from the US National Institutes of Health will allow Duraivel and the team to make a cordless version of the device.