Device for the blind uses computer vision and machine learning

A new wearable device called Horus is using a combination of computer vision, machine learning and audio cues to improve the lives of visually impaired people.

Horus device render (Credit: Eyra)

Developed by a Swiss startup called Eyra, Horus consists of a headband with stereo cameras at one end that can recognise text, faces and objects. Information from the cameras is fed via a 1m cable into a smartphone-sized box containing a battery and an NVIDIA Tegra K1 processor, whose GPU accelerates the computer vision and deep learning algorithms that process, analyse and describe the images from the cameras.
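Eyra has not published details of Horus's software stack, but the description above maps onto a familiar pattern: grab a frame from a camera, run it through a GPU-accelerated neural network, and turn the top prediction into speech. The sketch below illustrates that pattern only, using OpenCV for capture and a stock torchvision classifier as stand-ins for whatever Horus actually runs:

```python
# Illustrative only: OpenCV and a stock ImageNet classifier stand in for
# Eyra's unpublished recognition software.
import cv2
import torch
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
device = "cuda" if torch.cuda.is_available() else "cpu"  # Tegra-class GPU on-device
model.to(device)
preprocess = weights.transforms()
labels = weights.meta["categories"]

cap = cv2.VideoCapture(0)  # stand-in for one of the headband cameras
ret, frame = cap.read()
if ret:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = preprocess(torch.from_numpy(rgb).permute(2, 0, 1)).unsqueeze(0).to(device)
    with torch.no_grad():
        idx = model(batch).argmax(dim=1).item()
    print(f"Recognised: {labels[idx]}")  # Horus would speak this via bone conduction
cap.release()
```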

Feedback and instructions are delivered via bone-conduction audio, which leaves the ears uncovered and allows the wearer to hear descriptions even in noisy environments. Similar technology has been developed by BAE Systems for the military and adapted for the Ben Ainslie Racing (BAR) America’s Cup team.

The user can activate the different functions via intuitively shaped buttons on both the headset and the pocket unit. As well as learning and recognising objects and faces, and reading text from flat and non-flat surfaces, Horus helps users navigate using audio cues: 3D sounds of varying intensity and pitch represent the position of obstacles, providing assistance in much the same way as a car’s parking sensors. A minimal sketch of one such mapping follows below.
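Eyra has not described exactly how these sounds are generated, but the parking-sensor analogy implies some mapping from an obstacle's distance and bearing to audible parameters. In the illustrative sketch below, the function name, parameter ranges and the mapping itself are all assumptions:

```python
def obstacle_to_audio(distance_m: float, azimuth_deg: float,
                      max_range_m: float = 3.0) -> dict:
    """Hypothetical mapping from obstacle position to audio-cue parameters.

    Like a car's parking sensors: nearer obstacles produce louder,
    higher-pitched, more rapid beeps; bearing sets the stereo pan.
    """
    # Normalised proximity: 0 at max range (or beyond), 1 when touching.
    proximity = 1.0 - min(max(distance_m, 0.0), max_range_m) / max_range_m
    return {
        "volume": 0.2 + 0.8 * proximity,                    # louder when near
        "pitch_hz": 440.0 * (1.0 + proximity),              # 440 Hz far, 880 Hz up close
        "beep_interval_s": 0.05 + 0.6 * (1.0 - proximity),  # beeps speed up when near
        "pan": max(-1.0, min(1.0, azimuth_deg / 90.0)),     # -1 hard left, +1 hard right
    }

# An obstacle half a metre away, 20 degrees to the right:
print(obstacle_to_audio(0.5, 20.0))
```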

Horus can also be prompted to give a short audio description of what the cameras are seeing, whether that is a room full of people, a photograph or a landscape.
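Producing such descriptions is an image-captioning problem. How Eyra solves it has not been made public; purely as a stand-in, the sketch below captions a saved camera frame with the open-source BLIP model via the Hugging Face transformers library (the filename is hypothetical):

```python
# Illustrative only: BLIP stands in for whatever captioning model Horus uses.
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("snapshot.jpg")  # hypothetical frame saved from the headset cameras
inputs = processor(images=image, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(caption_ids[0], skip_special_tokens=True))
# e.g. "a group of people sitting around a table" -- spoken to the wearer
```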

“Seeing the faces of people who try Horus for the first time drives our passion,” said Saverio Murgia, CEO and co-founder of Eyra. “It shows we’re making a real difference in people’s lives.”

The device, which is estimated to cost around $2,000, has already begun trials with the Italian Union of Blind and Partially Sighted People. Feedback from these tests will be used to refine the technology, with Horus expected to have a wider release at some point next year.