A transatlantic research team is developing advanced algorithms for decoding neural activity into physical commands, such as parameters for controlling a robotic arm.
Engineers from Cambridge University, along with neuroscientists at Stanford University in the US, believe that current decoding approaches cannot yet produce a clinically viable prosthetic device with speed and accuracy comparable to those of a healthy human arm.
Principal investigator Zoubin Ghahramani, a professor of information engineering at Cambridge, said the challenge is that neural prosthetic designers do not completely understand how movements are represented in the brain.
‘Neurons are noisy information channels,’ he said. ‘So you get activity from many, many neurons spiking and it is a challenge to infer the desired action and direction of movement.
‘There have been advances in the field over the last decade or so but the methods people have used have generally been fairly simple linear filtering methods for decoding neural activities.
‘The main thing we’re hoping to contribute is much more advanced machine-learning methods.’
The £410,000 EPSRC-funded research project will create an intelligent algorithm that is more adaptive than current decoding mechanisms.
Ghahramani explained that adaptability is important because the recordings from electrodes change. ‘The electrodes might drift or the neural wiring of the brain changes gradually over time,’ he said.
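The project’s actual algorithm is not described in the article. One standard way to cope with the kind of drift Ghahramani describes, shown purely as an assumed illustration here (the learning rate, drift magnitude, and tuning model are all hypothetical choices), is to update a linear decoder online with a least-mean-squares gradient step, so its weights track slowly changing neural tuning.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 30

# Preferred directions drift as a slow random walk -- "the neural wiring
# of the brain changes gradually over time".
pref = rng.normal(size=(n_neurons, 2))
W = np.zeros((n_neurons, 2))   # decoder weights, updated online
lr = 1e-3                      # learning rate (illustrative choice)

errs_adaptive = []
for t in range(5000):
    pref += 0.01 * rng.normal(size=pref.shape)  # gradual tuning drift
    v = rng.normal(size=2)                      # intended velocity
    # Noisy spike counts around a baseline of 5, roughly centred.
    s = rng.poisson(np.maximum(pref @ v + 5.0, 0.1)) - 5.0
    v_hat = s @ W                               # decode
    errs_adaptive.append(np.sum((v - v_hat) ** 2))
    W += lr * np.outer(s, v - v_hat)            # online LMS update

# Error stays low late in the run despite the drift.
print(np.mean(errs_adaptive[:500]), np.mean(errs_adaptive[-500:]))
```

A decoder fitted once and then frozen would degrade as the tuning drifts; the online update keeps re-fitting the weights trial by trial, which is the basic motivation for a more adaptive algorithm.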
The algorithm will be tested with neural prosthetic devices implanted in primates before human trials.
Ghahramani said those working in the area of neural prosthetics are encouraged by the success of technology such as cochlear implants, which help the hearing-impaired by applying electrical stimulation to the auditory system.
‘The field of neural implants is moving quite rapidly but the idea of having brain signals control previously paralysed bodies will take a bit longer,’ he said.