Roboticists are increasingly devising ways for machines to operate in situations involving extreme heat, cold, pressure and danger. Stuart Nathan and Jon Excell report
When NASA’s Spirit rover landed on the surface of Mars very early in the morning five years ago, the huge sigh of relief breathed by bleary-eyed technicians at Mission Control marked the start of another chapter in our use of robots to achieve the previously unachievable. And while industry tends to use robots for repetitive drudgery, operating them under extreme conditions presents a very different set of challenges.
Deep beneath the waves, in the far reaches of the cosmos, in the heat of the battlefield and the heart of a nuclear reactor, robots are there, extending human reach into distant and dangerous areas.
Space presents some of the biggest challenges to roboticists. Away from the familiar certainties of Earth, such as the force of gravity and established ranges of temperature and pressure, engineers have to cope with a whole new set of operational parameters and some startlingly challenging constraints.
Robots that work away from Earth fall into two groups, explained Dave Barnes, head of space robotics at Aberystwyth University and a member of the team designing ESA’s ExoMars rover. There are those that work in space and there are planetary robots — almost always rovers of some type — that work on or near the surface of another planet. ‘There are some common challenges for both types, but their environments present different sets of problems,’ he said.
The most pressing of these, in all cases, are volume, mass and power consumption, which must all be minimised; for this reason, manipulator arms tend to be whippy, spidery constructions. As a result of their flexibility and the need to deal with inertia rather than weight, they are more difficult to control than industrial manipulators.
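Barnes’s point about inertia can be made concrete with a rough sketch. The arm mass, length and slew rate below are invented for illustration, not taken from any flight hardware: on Earth a joint motor is sized mainly against gravity, while in orbit the gravity term vanishes and the inertia of even a light, whippy arm is what remains to be controlled.

```python
# Illustrative sketch (not flight code): why space manipulators are sized
# around inertia rather than weight. All numbers are invented examples.

def gravity_torque(mass_kg, com_distance_m, g):
    """Worst-case static torque at a shoulder joint holding the arm horizontal."""
    return mass_kg * g * com_distance_m

def inertial_torque(mass_kg, length_m, angular_accel):
    """Torque to accelerate a slender uniform rod pivoted at one end: I = m*L^2/3."""
    inertia = mass_kg * length_m**2 / 3.0
    return inertia * angular_accel

arm_mass, arm_length = 10.0, 2.0  # a light, spidery 2m arm
for name, g in [("Earth", 9.81), ("Mars", 3.71), ("orbit", 0.0)]:
    static = gravity_torque(arm_mass, arm_length / 2.0, g)
    dynamic = inertial_torque(arm_mass, arm_length, 0.5)  # 0.5 rad/s^2 slew
    print(f"{name:>5}: gravity torque {static:6.1f} N·m, inertial torque {dynamic:5.1f} N·m")
```

In orbit the static term drops to zero, but the inertial term is unchanged, which is why a flexible arm that would sag uselessly on Earth can still be hard to control in space.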
Also, added Barnes, a space roboticist must have a very philosophical outlook. ‘It’s not a case of if your robot will die, it’s when. You’re applying the best engineering practice possible to extend that lifetime.’
For planetary robots, knowing the rover’s whereabouts dominates mission planning. In most cases, the rover communicates with an orbiter, which, in turn, communicates with Earth. ‘But we want to minimise ground-based interaction and for that reason we need to make the robots autonomous,’ he said.
‘You have communications bottlenecks: you have to wait for the orbiter to be in range of Earth, then you have to wait again for the orbiter to be over the rover. Currently, every single minute activity the rover has to undertake is transmitted to the planet’s surface and you also have to upload all the rover’s data. Greater autonomy will mean, for example, we can tell the rover to get a soil sample and leave the job of working out where the sample should come from and the kinematics of the manipulator to the robot,’ added Barnes.
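The bottleneck Barnes describes can be sketched as a simple interval-intersection problem: a command can only move when the orbiter simultaneously sees Earth and the rover. The pass times below are invented for illustration; real relay scheduling also accounts for orbital mechanics, data rates and link budgets.

```python
# Hedged sketch of the Mars relay bottleneck: commands and data can only
# flow in the windows where the orbiter sees both Earth and the rover.
# The pass times (in minutes) are invented for illustration.

def intersect(windows_a, windows_b):
    """Return the overlapping intervals of two lists of (start, end) times."""
    out = []
    for a0, a1 in windows_a:
        for b0, b1 in windows_b:
            lo, hi = max(a0, b0), min(a1, b1)
            if lo < hi:
                out.append((lo, hi))
    return out

earth_link = [(0, 40), (110, 150)]   # minutes when the orbiter sees Earth
rover_link = [(30, 70), (120, 135)]  # minutes when the orbiter sees the rover

usable = intersect(earth_link, rover_link)
print("usable relay windows:", usable)  # [(30, 40), (120, 135)]
print("total minutes:", sum(hi - lo for lo, hi in usable))  # 25
```

Out of 80 minutes of Earth contact, only 25 minutes are usable for the rover in this toy example, which is why pushing decision-making onto the robot itself pays off.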
While the Mars missions are the main concern for many roboticists, more distant goals are beginning to come into view. Around 2030, missions to the moons of Jupiter and Saturn will be launched. Titan is likely to be explored by aerobots, with a NASA/European Space Agency (ESA) project called TandEM/TSSM proposing a hot-air balloon with a science payload, along with a floating lander aimed at large lakes on the moon’s surface. A mission to icy Europa is looking at a nuclear-powered ‘cryobot’ to melt its way through the ice and then release a submersible aquabot to explore the oceans that are believed to lie beneath. For these distant worlds, autonomy may be even more vital.
Still in the aerospace field, although somewhat closer to Earth, military scientists are also keen to endow the next generation of unmanned aerial vehicles (UAVs) with increasing levels of autonomy. In the UK, BAE Systems is attempting to address precisely these challenges with the development of Taranis, a highly autonomous unmanned combat aerial vehicle (UCAV) that could, in theory, strike deep behind enemy lines with minimal user intervention. Although many of the finer details are still under wraps, Taranis has now been built, with ground tests planned later this year and flight tests expected early in 2010.
However, while autonomy may provide a tactical advantage for stealthy unmanned aircraft, it is not always the goal with military robots.
Indeed, the most widely used military robotic platforms, the familiar tracked devices used for bomb disposal and reconnaissance, are typically remotely operated by troops.
One example of these systems is Qinetiq’s Talon robot. Developed by Foster-Miller, Qinetiq’s US arm, some 2,500 of these rugged robots have already been deployed by US forces in Afghanistan and Iraq and more are on the way.
The capabilities of these systems are improving all the time. The latest model, the Talon IV Engineer robot, is stronger than its predecessors (it can lift 30kg), can right itself if tipped over and is fitted with an arm with a 7ft (2m) reach.
While most of the robots deployed by US forces have been used for mine-clearance and bomb-disposal purposes, many believe that it is only a matter of time before robots actively engage in combat.
Since 2007, three weaponised Special Weapons Observation Remote Direct-Action System (SWORDS) robots have been patrolling an area south of Baghdad and, although they have not yet fired a shot in anger, a more advanced system is on the way.
The Modular Advanced Armed Robotic System (MAARS) is another remotely controlled system, able to operate as far as a kilometre from its user. According to Qinetiq, it can launch a range of ammunition and is fitted with an M240B medium machine gun.
Weaponised robots may be just around the corner, but even further into the future, researchers envisage the development of everything from swarming insect-inspired reconnaissance robots to morphing ‘chembots’ that exploit the properties of exotic materials such as shape memory polymers to squeeze through tight spaces.
The military’s concern about human control on the ground is echoed in the nuclear industry. ‘It’s just like surgery,’ said Rob Buckingham, managing director and co-founder of OC Robotics, whose snake-arm robots perform maintenance tasks in Canadian-designed CANDU reactors. ‘You want to keep a human in the loop; people make good decisions and agree on how to solve problems. Robot autonomy makes no sense in the nuclear sector, so what we develop are teleoperated remote manipulators,’ he said.
Nuclear robots, particularly OC’s snake arms, perform tasks within the reactor where it would be impossible — or at least very difficult — for people to work. However, as there are relatively few nuclear reactors around the world — and they tend to be built at long intervals — there is no standard design.
‘Even within a specific type of reactor, such as a CANDU, the details of the build and the layout are all different. We have a consistent architecture for our robots, in terms of the safety systems, the electronics and software, but, to an extent, everything we make has to be a custom build,’ added Buckingham.
Radiation shielding is rarely a problem, he said. ‘It depends on the dosage. Increasingly, we’re seeing a demand for robots to work where humans could go, but it would be expensive to send them there. If you know a person could work in an area for a day, then you know your electronics are going to be okay. It’s only in the really high-dose areas you need to radiation-harden your systems.’
Even then, it is not always practical to shield components. Cameras, for example, will fog and fail in a radiation zone, but it is cheaper to simply replace them than to develop a radiation-hardened camera.
Even where humans could theoretically operate, it is often cheaper and safer to use robots, Buckingham explained. If a person can only work in an area for a day, they are not going to have a chance to become proficient at their task, whereas a robot can simply be programmed. ‘There’s a cost associated with exposing a human to a dose of radiation — and it’s high,’ he said. ‘That cost is used to determine how much plant operators can spend on reducing the dosage to people.’
Similarly, economics dictates that reactors are cramped spaces. ‘The size of the reactor is an important driver of the cost of the power station. You want to keep that bit as small as possible, because if you make it bigger, everything else has to be bigger too. That means that you have to minimise space inside the reactor to make it compact and that means that maintenance is difficult. There’s a balance of cost of design and build versus cost of maintenance and it’s in our favour,’ he said.
Yet, while the nuclear industry keeps the human option for many activities, in some areas robots are often the only option.
In the offshore industry, an astonishing range of machines, both autonomous and remotely operated, carries out tasks from pipeline inspection to cable laying. And while the autonomous underwater vehicles (AUVs) increasingly used for pipeline inspection are, arguably, doing a job that was once done by divers, the machines used to bury cables and pipelines operate at depths and pressures that rule out human labour.
Although the challenges raised by the ocean’s depths are often compared to those faced in space, it is not autonomy that characterises these robots; it is size. They do not come much bigger than the Ultra Trencher: a huge UK-developed submersible remotely operated vehicle (ROV) designed to bury oil and gas pipelines on the ocean floor.
Weighing in at 50 tons, carrying a £10m price tag and soaking up 2MW of power, the house-sized machine is thought to be the largest deep-sea remote-controlled robot ever built.
Developed by Tyne and Wear-based offshore ROV specialist Soil Machine Dynamics (SMD), the UT-1 is remotely operated from a surface vessel. Using huge thrusters to steer down to the pipeline site, the robot moves along the sea bed at a speed of 2-3kt, using a pair of so-called ‘jet swords’ to inject pressurised water into the sediment and create a trench. According to SMD, the Ultra Trencher is able to bury pipelines of up to 1m in diameter in water up to 1,500m deep.
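SMD’s quoted working speed allows a back-of-envelope estimate of how long a burial run takes. The 50km route length below is an invented example; only the 2-3kt speed comes from the article, and the knot-to-metre conversion is standard.

```python
# Rough estimate of trenching time from the Ultra Trencher's quoted
# 2-3 kt working speed. The 50 km route is an invented example.

KNOT_MS = 0.514444  # one international knot in metres per second

def trench_hours(route_km, speed_knots):
    """Hours to traverse a route at a steady trenching speed."""
    return (route_km * 1000) / (speed_knots * KNOT_MS) / 3600

for kt in (2.0, 3.0):
    print(f"50 km at {kt} kt: {trench_hours(50, kt):.1f} hours")
```

At the slow end of the range, a 50km route works out at roughly 13.5 hours of continuous trenching, which gives a sense of why these machines run around the clock from a dedicated support vessel.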
The machine is currently installed on the Volantis, a specialist cable-laying vessel operated by subsea contractor CTC Marine.
The vessel is currently on standby in Teesside and is due to head to Egypt in April, where it will begin laying a pipeline for Petrobel’s North Bardawil project.
Although the offshore oil and gas industry is by far the biggest user of subsea robots, the technology is beginning to filter through to other areas. The capabilities of autonomous inspection vessels are proving increasingly attractive for defence applications, while some are even eyeing up the potential of underwater robots for search-and-rescue (SAR) applications (see The Engineer, 10 November 2008).
While we may be some years away from the prospect of robot subs rescuing stricken mariners, SAR is an area that many believe is crying out for robots.
One of the pioneers of this area is Prof Robin Murphy of Texas A&M University. Heading up the Center for Robot-Assisted Search and Rescue (CRASAR), Murphy has been instrumental in getting robots out of the lab and putting them through their paces in disaster scenarios.
Murphy began working on SAR following the 1995 Kobe earthquake and Oklahoma City bombing. Rather than build her own robots, she prefers to use rugged off-the-shelf machines. ‘SAR is an extreme environment,’ she said. ‘There’s heat, dust and dirt, water from the sewer systems and potentially a lot of body fluids. Teaming up with industry is the way to go.’
Her specially adapted robots have crawled, climbed and buzzed their way around disaster sites from Ground Zero to Hurricane Katrina and, although there has not yet been an incident where a robot has actually saved a human’s life, Murphy believes that it is only a matter of time.
She also prefers to think of robots as extensions of humans rather than autonomous systems and most of her work has been on remote vehicles that operate in spaces too confined and dangerous for human rescuers. ‘It’s all about projecting the human into this situation. You need mobility to get them in there, good sensors and you need to have at least one video camera with pan, tilt and zoom, because in urban SAR you’re often going into confined spaces where there’s no room to turn the robot,’ she said.
Murphy does not envisage the development of a single generic SAR robot, but she believes that the future will see the development of a host of systems optimised for different disasters.
This is borne out by her own experience: following Hurricane Katrina, Murphy’s team used small unmanned helicopters to survey structural damage, while tracked terrestrial robots were sent to search for survivors in the wreckage of the World Trade Center.
‘I think we’ll see whole families. In ground robots, the traditional tracked vehicles are very good, but snake robots are going to be a very important family in the future for SAR. We’ve also been working with unmanned surface vehicles [robot rafts] to inspect damage to bridges and sea walls,’ she said.
Where one extreme environment takes technology from industry, others give it back. The sensors developed for space and planetary applications often find their way into industrial use, their compact and robust forms perfectly suited to corrosive and hazardous atmospheres.
Miniaturisation also reaps rewards. Barnes worked on the ill-fated Beagle 2 project, but, although the lander itself crashed, its science payload has formed the basis for life-saving technology.
‘Before we started working on Beagle 2, equipment such as mass spectrometers would have filled a bench or even half a room,’ said Barnes.
‘The team got it down to briefcase size,’ he added. This compact gas chromatograph-mass spectrometer (GC-MS) was funded by the Wellcome Trust and could soon be used to diagnose TB in Africa (see The Engineer, 29 January 2008).
Innovation flourishes under extreme conditions. In devising ways to operate in heat, cold, pressure and danger, roboticists are increasingly finding ways of increasing our knowledge and saving our lives.