Device for the blind uses computer vision and machine learning
A new wearable device called Horus combines computer vision, machine learning and audio cues to improve the lives of visually impaired people.

Developed by a Swiss startup called Eyra, Horus consists of a headband with stereo cameras on one end that can recognise text, faces and objects. Information from the cameras is fed via a 1m cable into a smartphone-sized box containing a battery and an NVIDIA Tegra K1 processor. The processor provides GPU-accelerated computer vision and deep learning which, together with the unit's sensors, process, analyse and describe the images from the cameras.
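The pipeline described above (stereo frames in, recognised items out as spoken descriptions) can be sketched roughly as follows. This is purely illustrative and not Eyra's actual code; every function name here is a hypothetical stand-in.

```python
# Illustrative sketch (assumption, not Horus's real implementation) of the
# capture -> recognise -> describe loop the article outlines.

def capture_stereo_frames():
    # Hypothetical stand-in for grabbing a left/right frame pair
    # from the headband's stereo cameras.
    return {"left": "frame_L", "right": "frame_R"}

def recognise(frames):
    # On the real device this step would be GPU-accelerated deep-learning
    # inference on the Tegra K1; here we return a canned result.
    return ["door", "exit sign"]

def describe(objects):
    # Compose a short spoken description for the bone-conduction earpiece.
    return "Ahead: " + ", ".join(objects)

frames = capture_stereo_frames()
print(describe(recognise(frames)))
```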
Feedback and instruction are delivered via bone conduction audio technology that allows the wearer to hear descriptions even in noisy environments. Similar technology has been developed by BAE Systems for the military and adapted for the Ben Ainslie Racing (BAR) America’s Cup team.
The user is able to activate different functionalities via intuitively shaped buttons on both the headset and the pocket unit. As well as learning and recognising objects and faces, and reading texts from flat and non-flat surfaces, Horus helps users navigate using audio cues. 3D sounds with different intensity, pitch, and frequency represent the position of obstacles, providing assistance in a similar way to parking sensors on a car.
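One way to picture the parking-sensor analogy is a function that maps an obstacle's distance and bearing onto cue parameters such as intensity, pitch and repetition rate. The mapping below is an assumption for illustration only, not the algorithm Horus actually uses; the ranges and function name are invented.

```python
# Illustrative sketch (assumed mapping, not Horus's actual algorithm):
# closer obstacles produce louder, faster, higher-pitched cues, and the
# bearing sets the stereo pan, much like a car's parking sensors.

def obstacle_cue(distance_m, azimuth_deg):
    """Return hypothetical audio-cue parameters for one obstacle.

    distance_m: metres to the obstacle (closer -> louder, faster beeps)
    azimuth_deg: bearing from -90 (left) to +90 (right)
    """
    distance_m = max(0.2, min(distance_m, 5.0))    # clamp to a working range
    intensity = 1.0 - (distance_m - 0.2) / 4.8     # 1.0 at 0.2 m, 0.0 at 5 m
    beep_hz = 1.0 + 9.0 * intensity                # repetition rate, 1-10 beeps/s
    pitch_hz = 400 + 400 * intensity               # tone frequency, 400-800 Hz
    pan = max(-1.0, min(azimuth_deg / 90.0, 1.0))  # -1 = fully left, +1 = right
    return {"intensity": intensity, "beep_hz": beep_hz,
            "pitch_hz": pitch_hz, "pan": pan}

# An obstacle 1 m away, 45 degrees to the right:
cue = obstacle_cue(distance_m=1.0, azimuth_deg=45)
```

In this sketch the cue gets stronger and faster as the obstacle closes in, which is the behaviour the parking-sensor comparison implies.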