British robotics experts are researching and developing new methods of presenting information gathered by remote-control vehicles working in hazardous environments. The aim is to give the vehicle’s operator much richer data from the scene, which could range from a battlefield to a sewer, and to enable tasks to be carried out more quickly and efficiently.
The key to the research is a concept called augmented reality: a fusion of real-world images with other relevant information that supplements the data in those images. A simple example of augmented reality is the score permanently displayed during television coverage of, say, a cricket match. The Surrey researchers, however, plan to augment the robot’s video pictures with much more complex information, overlaying them with stored computer images, for example.
Dr Shaun Lawson, a member of the Mechatronic Systems and Robotics Research Group at the University of Surrey, commented: ‘For years people have been trying to develop totally autonomous robots, but giving them the intelligence to deal with all possible scenarios is extremely difficult. Our approach is to put the human back into the system to do what it does best, which is to interpret data.’
The robots being developed by the Surrey team are equipped with two video cameras mounted on an articulating mechanical neck. The two streams of video data coming from the cameras are combined to create a three-dimensional image. However, augmenting the video pictures with virtual reality images is one of the most difficult challenges faced by the team.
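The two-camera arrangement rests on a textbook principle: the same point in the scene appears at slightly different horizontal positions in the two views, and that shift (the disparity) is inversely proportional to depth. The sketch below illustrates only this general principle on a toy one-row "image"; the matching window, focal length and camera baseline are illustrative assumptions, not details of the Surrey system.

```python
# Illustrative sketch of stereo depth recovery: two horizontally offset
# cameras see a feature at different image columns, and the column shift
# (disparity) encodes distance. This is not the Surrey team's algorithm,
# just the standard principle a two-camera head exploits.

def best_disparity(left_row, right_row, x, window=1, max_disp=5):
    """Find the horizontal shift that best matches a patch of the left
    image row against the right image row (sum of absolute differences)."""
    best_score, best_d = None, 0
    for d in range(0, max_disp + 1):
        if x - d - window < 0:
            break  # shifted patch would fall off the right image
        sad = sum(abs(left_row[x + i] - right_row[x - d + i])
                  for i in range(-window, window + 1))
        if best_score is None or sad < best_score:
            best_score, best_d = sad, d
    return best_d

def depth_from_disparity(disparity, focal_px=100.0, baseline_m=0.1):
    """Depth = focal length * baseline / disparity (pinhole stereo model).
    The focal length and baseline here are made-up example values."""
    return float('inf') if disparity == 0 else focal_px * baseline_m / disparity

# Toy scene: a bright feature sits at columns 5-7 in the left view and
# columns 3-5 in the right view, i.e. a disparity of 2 pixels.
left  = [0, 0, 0, 0, 0, 9, 9, 9, 0, 0]
right = [0, 0, 0, 9, 9, 9, 0, 0, 0, 0]
d = best_disparity(left, right, x=6)
print(d)                        # disparity in pixels
print(depth_from_disparity(d))  # metres, under the assumed focal/baseline
```

Real systems repeat this matching for every pixel of full two-dimensional frames and must cope with noise and occlusion, which is part of why fusing the result with virtual imagery is hard.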
For the sewer pipe inspection robot, for example, a graphical representation of the pipe could be stored in the system and superimposed over the real images being sent by the cameras.
A battlefield robot could be equipped with an array of sensors to detect radiation or poisonous gases. This information could then be layered onto the video images, giving the operator a much more detailed picture of the scene.
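One simple way such layering could work is to blend each camera pixel with the sensor reading at the same location, so hotspots show through the live picture. The sketch below assumes a grayscale frame and a gas map already rescaled to the same 0-255 range; the linear-blend rule and all the values are illustrative assumptions, not the Surrey system's rendering pipeline.

```python
# Sketch of the overlay idea: mix per-pixel sensor readings (e.g. a gas
# concentration map) into a camera frame so the operator sees both at once.

def augment(frame, sensor_map, alpha=0.3):
    """Return a new image mixing each camera pixel with the sensor
    reading at the same location: out = (1-alpha)*frame + alpha*sensor."""
    return [[(1 - alpha) * f + alpha * s
             for f, s in zip(frame_row, sensor_row)]
            for frame_row, sensor_row in zip(frame, sensor_map)]

# A 2x3 grayscale frame (0-255) and a gas reading rescaled to the same range.
frame = [[100, 100, 100],
         [100, 100, 100]]
gas   = [[  0, 255,   0],   # a hotspot in the middle of the top row
         [  0,   0,   0]]

out = augment(frame, gas)
print(out)  # the hotspot pixel is pulled toward the sensor value;
            # the rest are only dimmed by the blend weight
```

In practice the blend weight could be driven by reading intensity, so clean areas stay untouched and only genuine hazards are highlighted.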