Researchers are working to enable smartphones and other mobile devices to understand and immediately identify objects in a camera’s field of view, overlaying lines of text that describe items in the environment.
‘It analyses the scene and puts tags on everything,’ said Eugenio Culurciello, an associate professor in Purdue University’s Weldon School of Biomedical Engineering and the Department of Psychological Sciences.
According to the University, the innovation could find applications in augmented reality technologies like Google Glass, facial recognition systems and autonomous cars.
‘When you give vision to machines, the sky’s the limit,’ Culurciello said in a statement.
The approach is called deep learning because it relies on many layers of neural networks that mimic how the human brain processes information. Internet companies already use deep-learning software to let users search the Web for pictures and video that have been tagged with keywords. Such tagging, however, has not been feasible on portable devices and home computers.
‘The deep-learning algorithms that can tag video and images require a lot of computation, so it hasn’t been possible to do this in mobile devices,’ said Culurciello, who is working with Berin Martini, a research associate at Purdue, and doctoral students.
The research group has developed software and hardware and shown how they could be used to enable a conventional smartphone processor to run deep-learning software.
The new deep-learning capability represents a potential artificial-intelligence upgrade for smartphones. Research findings have shown the approach to be about 15 times more efficient than conventional graphics processors, and a further 10-fold improvement is possible.
The deep learning software works by performing processing in layers. ‘They are combined hierarchically,’ Culurciello said. ‘For facial recognition, one layer might recognise the eyes, another layer the nose, and so on until a person’s face is recognised.’
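The layered processing Culurciello describes can be sketched in a few lines of plain Python. This is a minimal illustration of the general idea, not the Purdue system: each layer computes weighted combinations of the previous layer's outputs, so later layers combine earlier features hierarchically. The layer sizes, weights, and the "edge/part/face" labels are purely illustrative assumptions.

```python
# A minimal sketch of hierarchical ("deep") layered processing.
# All weights and layer roles below are toy values for illustration only.

def relu(v):
    """Simple non-linearity: negative values are zeroed out."""
    return [max(0.0, x) for x in v]

def dense(v, weights):
    """One layer: each output is a weighted sum of all inputs."""
    return [sum(x * w for x, w in zip(v, row)) for row in weights]

def forward(v, layers):
    """Apply each layer in turn; later layers combine earlier features."""
    for weights in layers:
        v = relu(dense(v, weights))
    return v

# Three toy layers: 4 raw inputs -> 3 low-level features ("edges")
# -> 2 mid-level features ("parts") -> 1 final score ("face").
layers = [
    [[0.5, -0.2, 0.1, 0.0], [0.3, 0.8, -0.5, 0.2], [-0.1, 0.4, 0.6, -0.3]],
    [[0.7, -0.4, 0.2], [0.1, 0.9, -0.6]],
    [[1.0, 0.5]],
]

score = forward([1.0, 0.5, -0.3, 0.8], layers)
print(score)
```

Real deep-learning systems stack many more such layers with learned weights, which is what makes them computationally heavy; the research group's contribution is making that workload feasible on a mobile processor.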
In use, deep learning could enable a viewer to understand technical details in pictures. ‘Say you are viewing medical images and looking for signs of cancer,’ he said. ‘A program could overlay the pictures with descriptions.’
The research, which has led Culurciello to establish a company called TeraDeep, received funding from the US Office of Naval Research, the National Science Foundation and the Defense Advanced Research Projects Agency.