The idea of Google being able to read your mind might be a scary prospect, but users of Google Glass will now be able to control the head-mounted computer just by thinking, thanks to a new “telekinetic” app launched today.
The free, open-source “MindRDR” software connects Google Glass with a low-cost brainwave-reading headset that enables users to operate the device by concentrating instead of controlling it with voice commands or by tilting the head.
Though the app can so far only take photos and publish them to the internet – and involves wearing two slightly awkward headsets – it could point to a future generation of touch-free interfaces for consumer technology that don’t require users to wave their hands around or talk, embarrassingly, to an inanimate object.
It could also lead to a way for those with limited ability to move or speak – such as sufferers of locked-in syndrome – to more easily communicate with the outside world.
‘Most interfaces require quite a high level of dexterity or for you to communicate verbally in order to use them, so a mind-control interface has the opportunity to bring digital to those who may not be able to use those,’ said Ben Aldred, director of This Place, the London-based digital consultancy firm behind MindRDR.
‘People who don’t have control of their voice or their hands should still be able to open the digital world like everyone else can. And it may be able to help them interact with the world better.’
MindRDR doesn’t actually read people’s thoughts but relies on a £70 portable headset developed by California-based NeuroSky that measures the brain’s electrical activity via electroencephalography (EEG), in particular the signals produced when a person is concentrating.
The new app allows Google Glass to receive the headset’s EEG output over a Bluetooth connection and use it as a signal to control a line on the device’s screen. When a user concentrates hard enough, the line moves to the top of the screen and activates the camera. In principle, the app could work with any EEG input and computer.
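The control loop the article describes can be sketched roughly as follows. This is an illustrative reconstruction, not This Place’s actual code: `read_sample` and `take_photo` are hypothetical stand-ins for the Bluetooth read from the NeuroSky headset and the Glass camera call, and the 0–100 scale and threshold of 90 are assumptions.

```python
import time

ATTENTION_MAX = 100  # NeuroSky's "attention" reading runs 0-100 (assumed here)

def control_loop(read_sample, take_photo, threshold=90, poll_interval=0.05):
    """Poll EEG attention samples, map each to a line position on the screen,
    and fire the camera once when the line reaches the top.

    read_sample: callable returning one 0-100 attention value (a stand-in
                 for the Bluetooth read from the headset).
    take_photo:  callable that triggers the camera.
    """
    while True:
        level = read_sample()
        line_position = level / ATTENTION_MAX  # 0.0 bottom .. 1.0 top of screen
        if line_position >= threshold / ATTENTION_MAX:
            take_photo()
            return
        time.sleep(poll_interval)
```

In this sketch the line simply tracks the latest sample; a real implementation would presumably smooth the noisy signal before drawing it.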
‘The first step was connecting the two devices together,’ said Aldred. ‘The next bit was taking the [EEG] signals and making sense of them, seeing if we could use them to control anything on the Glass device.’
The challenge, he added, was in calibrating the app so it could interpret the range of brain activity as a scale rather than as a simple on/off signal – enabling users to make the line on the screen rise and fall depending on how hard they concentrated.
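The calibration Aldred describes amounts to mapping each user’s observed range of readings onto a continuous scale rather than a binary trigger. A minimal sketch of that idea, with hypothetical function names and assuming raw readings are simple numbers:

```python
def calibrate(samples):
    """Estimate a user's observed attention range from a short calibration run."""
    return min(samples), max(samples)

def to_scale(raw, lo, hi):
    """Map a raw EEG reading onto a 0.0-1.0 line position, clamped to the
    calibrated range, so the line can rise and fall continuously."""
    if hi <= lo:
        return 0.0
    return min(1.0, max(0.0, (raw - lo) / (hi - lo)))
```

A reading midway through the calibrated range would then put the line halfway up the screen, rather than flipping an on/off switch.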
The last few years have seen the growth of touch-free interfaces such as the motion-capturing Microsoft Kinect and Apple’s Siri, which responds to voice commands. MindRDR represents a way for users to bypass such interfaces and connect a computer directly to their brain activity.
Google Glass, which has yet to launch as a commercial product and currently costs £1,000 for a software developer’s version, is operated by tilting the head, swiping a hand against the side of the device, or via a voice-control system the user activates by saying “OK Glass”.
This Place’s creative director Chloe Kirton said a brainwave reader could provide an alternative (although she hesitated to say better) way of controlling a device like Glass, one that at least wouldn’t result in a tired neck or arm.
‘Right now I don’t think you’d want to think and scroll down your Facebook feed but the fact that you can is exciting and to add it to that raft of ways we have of interacting is really awesome,’ she said.
Although the NeuroSky device doesn’t actually read users’ thoughts and the MindRDR app doesn’t connect the user’s brainwaves directly to the Google operating system (Google has had no involvement in the project), Kirton said she understood why the technology might raise concerns over privacy. And she admitted to being unnerved when she realised a colleague could see her pulse rate while she was wearing the device.
‘I definitely see how it could be scary to some people,’ she said. ‘We need to ensure there’s control over it because I would hate to see that stop progress. I hope we can create an environment where people don’t have to fear how their data is used.’
This Place has yet to determine how the technology could help disabled users, and admits that it’s not as fast as the eye-tracking systems used by the likes of Stephen Hawking to control speech-generating computers. But the firm now wants to work with the medical technology community to explore possible uses for MindRDR.
What’s it like?
Concentrating on demand, it turns out, is harder than it sounds. Once I’d navigated the tricky process of putting on both the Google Glass and NeuroSky headsets (I gave up on wearing my actual glasses at the same time), making the MindRDR app work took some practice.
At first I tried concentrating on the cushion in front of me, thinking hard about its shape, its position, the fact that it was a “cushion”. But despite some straining – and peculiar faces – I only managed to get the line on the Glass screen halfway to the top.
Then I was advised to run over my eight times table in my head. Though it felt like I wasn’t concentrating as hard, settling my brain into more of a zen-like state, I only got to four before I heard a ping, and the camera went off.
The next time it was easier, and I only reached two times eight before I succeeded in moving the line. Excited, I began to tell those around me that I’d mastered it. But then I realised I was setting the camera off without even trying. In fact I couldn’t stop it.
This is a remarkable technology in many ways, primarily thanks to the incredibly low-cost and compact NeuroSky brainwave reader (though I still can’t see many people wanting to walk down the street wearing it). The challenge for This Place, and those who follow them, will be finding a way to give users a much greater level of control over how it works.