Researchers in the US have developed technology for controlling robots with brainwaves and hand gestures.
The group, from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), claims that its system could pave the way for a new, more intuitive way of programming robots, allowing users to instantly correct robot mistakes with nothing more than brain signals and the flick of a finger.
The system allows a human supervisor to correct mistakes using gestures and brainwaves (Credit: Joseph DelPreto, MIT CSAIL)
By monitoring brain activity, the system can detect in real time whether a person notices an error as a robot performs a task. Using an interface that measures muscle activity, the person can then make hand gestures to scroll through and select the correct option for the robot to execute.
To create the system the team harnessed the power of electroencephalography (EEG) for brain activity and electromyography (EMG) for muscle activity, putting a series of electrodes on the users’ scalp and forearm. The group claimed that merging these two metrics allows for more robust bio-sensing than previously possible.
For the project – which was part-funded by Boeing – the team demonstrated the system on a task in which a Baxter collaborative robot, supplied by Rethink Robotics, moves a power drill to one of three possible targets on the body of a mock plane. With human supervision, the robot went from choosing the correct target 70 per cent of the time to more than 97 per cent of the time.
“This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback,” said CSAIL director Daniela Rus. “By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”
In most previous work, systems could generally only recognise brain signals when people trained themselves to “think” in very specific but arbitrary ways and when the system was trained on such signals. But Rus’ team harnessed the power of brain signals called “error-related potentials” (ErrPs), which researchers have found to naturally occur when people notice mistakes. If there’s an ErrP, the system stops so the user can correct it; if not, it carries on.
This means that there’s no need to train users to think in a prescribed way and that the system could be relatively easy to deploy.
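The supervision loop described above can be sketched in a few lines. This is an illustrative simplification, not the CSAIL implementation: `detect_errp` and `classify_gesture` are hypothetical stand-ins for the real EEG and EMG classifiers, the threshold values are invented, and the signal windows are simulated lists of samples.

```python
# Illustrative sketch of a hybrid EEG/EMG supervision loop.
# All classifiers and thresholds below are hypothetical, not from the paper.

def detect_errp(eeg_window):
    """Return True if the EEG window is taken to contain an error-related
    potential (ErrP). Here: a naive peak-amplitude threshold, for illustration."""
    return max(eeg_window) > 5.0

def classify_gesture(emg_window):
    """Map a window of forearm muscle activity to a gesture (illustrative).
    Stronger activity is read as 'select', moderate activity as 'scroll'."""
    level = sum(abs(x) for x in emg_window) / len(emg_window)
    if level > 2.0:
        return "select"
    if level > 1.0:
        return "scroll"
    return None

def supervise(robot_choice, targets, eeg_window, emg_stream):
    """One supervision step: if the human's EEG shows an ErrP, halt the robot
    and let hand gestures scroll through the targets until one is selected;
    otherwise let the robot carry on with its own choice."""
    if not detect_errp(eeg_window):
        return robot_choice                  # no error noticed: carry on
    idx = targets.index(robot_choice)
    for emg_window in emg_stream:            # robot halted; read gestures
        gesture = classify_gesture(emg_window)
        if gesture == "scroll":
            idx = (idx + 1) % len(targets)   # move to the next target
        elif gesture == "select":
            break                            # user confirms this target
    return targets[idx]

# Simulated run: an EEG spike (ErrP), then a scroll gesture, then a select.
targets = ["left", "centre", "right"]
corrected = supervise("left", targets,
                      eeg_window=[0.2, 6.1, 0.3],
                      emg_stream=[[1.5] * 10, [2.5] * 10])
```

In this toy run the ErrP halts the robot, one scroll gesture advances from "left" to "centre", and the select gesture confirms it; with a quiet EEG window the robot's original choice passes through unchanged, mirroring the stop-or-carry-on logic described above.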
Rus said that the system could one day be useful for the elderly, or workers with language disorders or limited mobility. “We’d like to move away from a world where people have to adapt to the constraints of machines,” she said. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”