Speech therapy

A US team aims to capture the vocal brain signals of motor neuron disease sufferers to restore speech and movement

Severely disabled people with medical conditions such as amyotrophic lateral sclerosis — the most common form of motor neuron disease — are often confined to a wheelchair due to the degeneration of their motor neurons, the nerve cells in the central nervous system that control muscle movement.



This means that not only do they lack the ability to move, but they may also lose their speech.



Now, a small team of US engineers led by Michael Callahan and Thomas Coleman from the University of Illinois at Urbana-Champaign is developing a system it believes could ultimately restore both movement and speech to sufferers.



The system, dubbed The Audeo, is a sensor device that is placed around a patient's neck to intercept the signals from the brain that control the vocal cords and the vocal tract.



These signals are sent to a computer, which filters them from any background noise using a signal-processing algorithm that interprets the individual's meaning to produce speech. This can then either be output directly or used to control a wheelchair or other external devices.
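
Ambient has not published the algorithm itself, but the first stage of any such pipeline, isolating the neck-sensor signal from background noise before it is interpreted, can be illustrated with a minimal sketch. The sample rate, pass band and every name below are assumptions made for illustration, not details of The Audeo:

```python
# Minimal sketch (illustrative only): band-pass filtering one channel of a
# neck-sensor recording to suppress drift and background noise before any
# interpretation. The sample rate, pass band and names are assumptions,
# not details of The Audeo's actual processing chain.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000.0              # assumed sensor sample rate, in Hz
LOW, HIGH = 20.0, 450.0  # assumed pass band for the physiological signal

def clean_signal(raw: np.ndarray) -> np.ndarray:
    """Remove low-frequency drift and high-frequency noise from one channel."""
    b, a = butter(4, [LOW, HIGH], btype="bandpass", fs=FS)
    return filtfilt(b, a, raw)

if __name__ == "__main__":
    # Synthetic stand-in for a recording: a 100 Hz component buried in noise.
    t = np.arange(0, 1.0, 1.0 / FS)
    raw = np.sin(2 * np.pi * 100 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
    print(clean_signal(raw)[:5])
```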



'Some people with amyotrophic lateral sclerosis may be able to move their mouth a little, but they can't exhale sufficient air to produce audible speech. But since the speech signals are produced by the brain we can intercept them and create the speech for them,' said Coleman.





Motion control



The researchers have already demonstrated that they can use The Audeo as part of a system that allows a user to control a wheelchair. This works by capturing data from the brain and transmitting it to a control system running LabVIEW software, where it is processed and output as motion control signals to move the wheelchair.
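
The real processing runs in LabVIEW; purely as an illustration of the final step of that data flow, the sketch below maps an already-decoded directional command onto simple wheel-speed outputs. The DriveSignal interface, the command names and the speed values are all hypothetical:

```python
# Illustrative sketch of the control stage only: a decoded command is mapped
# onto left and right wheel speeds. Ambient's real system does this in
# LabVIEW; the DriveSignal interface and the speed values are hypothetical.
from dataclasses import dataclass

@dataclass
class DriveSignal:
    left_speed: float    # normalised wheel speeds in the range [-1.0, 1.0]
    right_speed: float

# Assumed mapping from the four directional commands to drive signals.
COMMAND_MAP = {
    "forward": DriveSignal(0.5, 0.5),
    "left": DriveSignal(-0.3, 0.3),
    "right": DriveSignal(0.3, -0.3),
    "stop": DriveSignal(0.0, 0.0),
}

def to_drive_signal(command: str) -> DriveSignal:
    """Translate a decoded directional command into wheel-speed outputs,
    falling back to a safe stop for anything unrecognised."""
    return COMMAND_MAP.get(command, COMMAND_MAP["stop"])

if __name__ == "__main__":
    for cmd in ["forward", "left", "stop", "garbled"]:
        print(cmd, "->", to_drive_signal(cmd))
```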



To control the wheelchair, the system needs to identify four directional commands: forward, right, left and stop. To do this, the software picks out those discrete commands from the speech patterns rather than processing the data to produce continuous speech.
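
The article does not say which pattern-recognition method the software uses, but the idea of picking out a handful of discrete commands rather than decoding continuous speech can be sketched with a simple nearest-centroid classifier. The feature windows, their dimensions and the classifier choice below are illustrative assumptions:

```python
# Minimal sketch (assumptions throughout): classifying a short window of
# sensor features as one of four discrete commands instead of decoding
# continuous speech. A nearest-centroid classifier stands in for whatever
# pattern recognition the real system uses.
import numpy as np

COMMANDS = ["forward", "right", "left", "stop"]

def train_centroids(windows: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Average the feature windows of each command to form one template per command."""
    return np.stack([windows[labels == i].mean(axis=0) for i in range(len(COMMANDS))])

def classify(window: np.ndarray, centroids: np.ndarray) -> str:
    """Return the command whose template is closest to this feature window."""
    distances = np.linalg.norm(centroids - window, axis=1)
    return COMMANDS[int(np.argmin(distances))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic training data: 40 windows of 16 features, 10 windows per command.
    labels = np.repeat(np.arange(4), 10)
    windows = rng.normal(size=(40, 16)) + labels[:, None]
    centroids = train_centroids(windows, labels)
    print(classify(windows[0], centroids))  # the first window was generated for "forward"
```

A nearest-centroid classifier is used here only because it is the simplest recogniser that makes the discrete-command idea concrete; many other classifiers could play the same role.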



This part of the project, which would allow the user to hear what they are producing as they produce it, is still under development. 'One of the challenges is to produce a universal mathematical transformation of the data that would work well for everyone — not just a specific individual,' explained Callahan.



But work is well underway to make such a system a reality. Callahan and Coleman have already collected a large amount of data from numerous individuals and have developed a system around the software that can generate such transformation algorithms and evaluate their effectiveness on the fly.
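
How that evaluation works internally is not described. One hedged way to picture it is to score each candidate transformation by how well command templates learned from some users carry over to a user held out of training; the transforms, the synthetic data and the scoring scheme below are illustrative assumptions rather than Ambient's method:

```python
# Hedged sketch: scoring candidate transformations by how well command
# templates learned from some users generalise to a held-out user.
# The transforms, synthetic data and nearest-centroid scoring are
# illustrative assumptions, not Ambient's actual evaluation framework.
import numpy as np

def identity(x):
    """Candidate transform 1: leave the feature windows unchanged."""
    return x

def zscore(x):
    """Candidate transform 2: normalise each window to zero mean, unit scale."""
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + 1e-9)

def leave_one_subject_out_accuracy(data, transform, n_commands=4):
    """data maps subject name -> (feature windows, command labels).
    Train nearest-centroid templates on all but one subject, test on the
    held-out subject, and average the accuracy over subjects."""
    scores = []
    for held_out in data:
        train_w = np.concatenate([transform(w) for s, (w, _) in data.items() if s != held_out])
        train_y = np.concatenate([y for s, (_, y) in data.items() if s != held_out])
        centroids = np.stack([train_w[train_y == c].mean(axis=0) for c in range(n_commands)])
        test_w, test_y = data[held_out]
        test_w = transform(test_w)
        pred = np.argmin(np.linalg.norm(test_w[:, None, :] - centroids, axis=2), axis=1)
        scores.append(float((pred == test_y).mean()))
    return float(np.mean(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    patterns = rng.normal(size=(4, 8))            # one underlying pattern per command
    data = {}
    for s, gain in enumerate([1.0, 2.0, 0.5]):    # subjects differ by an overall gain
        y = np.repeat(np.arange(4), 10)
        w = gain * (patterns[y] + 0.2 * rng.normal(size=(40, 8)))
        data[f"subject{s}"] = (w, y)
    for name, transform in [("identity", identity), ("zscore", zscore)]:
        print(name, leave_one_subject_out_accuracy(data, transform))
```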



Once the developers have settled on the optimum processing algorithms and fixed the specific design parameters of the system, they intend to commit the entire acquisition and processing chain to hardware that will be self-contained within The Audeo. That way, there will be no need for an external computer in the system. The result, they say, will be not just one device but a whole range targeting the needs of different users.



One such device, for example, might be dedicated to speech production, while another could enable a disabled person to control a variety of devices such as a wheelchair, computer or even a mobile phone.



To commercialise their technology, the researchers have recently founded their own company, Ambient.