Eye, robot

Computer models of human visual responses will help robots get their priorities right.

Multi-tasking robots capable of stopping what they are doing and responding to events in the corner of their ‘eye’ are being developed in a project involving BAE Systems and UK universities.

The project, which began this week, is aimed at developing technologies to create robots with the ability to switch between tasks in the same way as animals and humans.

Researchers at BAE and UK universities including Sheffield, Manchester, and Cambridge are attempting to unravel the mechanisms the vertebrate brain uses to multi-task seemingly effortlessly. This research will be used to build computational models that can control the behaviour of a wide range of robots.

These could include devices for entering dangerous, hostile or inaccessible environments; monitoring the environment and human performance for signs of danger; or assisting the disabled, aged or infirm. Each of these tasks calls for a robot that can adapt to and learn from unpredictable circumstances, said Dr Kevin Gurney, the project leader and lecturer in computational neuroscience at Sheffield.