Robots and ethics: a complex question

Stuart Nathan

Features editor

Last week's use by Dallas police of a military-grade robot and explosives to kill a sniper who was targeting police officers has focused attention on how robots should be used, and on whether this particular action was ethical.

Police chief David Brown took the decision to use a Northrop Grumman Remotec Andros military robot to kill Micah Johnson, who had been targeting police officers at a protest in Dallas city centre. The ethics of that decision are complex, with many aspects to consider.

We in the UK must be mindful that policing practices in other countries differ considerably from ours. Had such a situation arisen here (unlikely, given the relative scarcity of firearms, but not impossible), it's very unlikely that the police would have had access to equipment like the Andros robot used in Dallas; they would have had to call in the Army. The robot in question is not, of course, in itself a weapon; it is designed and used primarily to assist in the removal and defusing of bombs in combat zones, though it can be adapted to carry shotguns and other weaponry.

A Northrop-Grumman Remotec Andros F6A robot, similar to the type used to kill sniper Micah Johnson in Dallas last week

Even in the US, with its much more militarised police forces, the shock hasn't been so much the use of the robot as what was done with it: responses such as 'The police have bombs?' have been common. In fact it wasn't a bomb so much as half a kilo of C4 plastic explosive, but that's still not necessarily something a police force might be expected to have in its armoury.

As for the use of a robot, the ethics in this case are hard to question. The shooter had already killed five police officers, claimed to have explosives and had taken up a defensible position; he turned out to have military training. Approaching him would clearly have been a life-threatening risk. The decision had been taken to neutralise him rather than try to arrest him: that is where ethical arguments have to focus. The use of a remote-controlled robot as the means of carrying out that neutralisation was only sensible. We can argue about the use of explosives and the decision to use lethal force, but it is baffling why the robotic means of delivery should be controversial at all.

In part, this is because of the nature of the Remotec system. It’s fully radio-controlled with no autonomous features. In essence, it’s just a way of extending the reach of its operator; to employ some reductio ad absurdum, it’s no different from attaching a lump of C4 to the hook on some fishing line and using a long fishing rod to dangle it over the sniper’s hiding place. All of the decision-making was in human hands the whole time.
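To make that distinction concrete, here is a deliberately simplified sketch in Python (the class names and interfaces are invented for illustration; this is not Northrop Grumman's control software). It contrasts a teleoperated machine, which acts only on explicit human commands, with an autonomous one, which selects actions itself:

```python
# Hypothetical sketch: teleoperation versus autonomy. All names are
# illustrative; this is not the Remotec system's actual software.

class Teleoperated:
    """Relays operator commands; makes no decisions of its own."""

    def act(self, operator_command):
        # The robot does nothing unless a human has issued a command.
        if operator_command is None:
            return "idle"
        return f"executing operator command: {operator_command}"


class Autonomous:
    """Chooses its own actions from sensor data: the ethically harder case."""

    def act(self, sensor_data):
        # Here the decision-making has moved from the human to the machine.
        if sensor_data.get("threat_detected"):
            return "machine-selected response"
        return "idle"


# The Dallas robot corresponds to the first class: the human decides,
# the machine merely extends the human's reach.
print(Teleoperated().act("advance 2m"))             # executes the human's command
print(Autonomous().act({"threat_detected": True}))  # the machine chooses
```

The ethical questions discussed below only begin to bite when behaviour moves from the first class to the second.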

Even had the robot been autonomous, this would simply have been a matter of not putting humans needlessly in harm's way; according to robotics ethicists such as Professor Darwin Caldwell, director of advanced robotics at the Italian Institute of Technology in Genoa, this is a classic situation in which the use of robots would always be justified.

It's easy to make calls like this when the system always has a human in the loop and can't function without one, Caldwell said at a recent seminar on robot ethics at the Royal Academy of Engineering. The difficulty comes in deciding whether this particular situation sets a precedent and, if so, what precedent it sets. Does it, for example, justify sending in autonomous systems wherever it would be too risky for a human to intervene? What if the suspect isn't isolated? It would be a brave commander indeed who would send in a robot in such a case, where a computer would have to make targeting decisions (based on what? Machine vision, facial detection and recognition? We're surely in the realm of science fiction now).

There's no doubt that ethics are becoming increasingly important in robotics, and that robotics is stretching our definition of what a robot is (a fully autonomous vehicle must be considered a robot, even if it wouldn't meet many people's definition). If autonomous vehicles make decisions about the safety of their occupants, then ethical considerations come into play: if every decision places humans in potential danger, is the occupants' safety to be prioritised over that of the occupants of other vehicles? What about pedestrians or cyclists? What about manoeuvres that put the occupants of the back seats in more danger than those in the front? These are situations where engineers would have to decide how the technology is configured or programmed.
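To see why the configuration itself is an ethical act, consider a hypothetical sketch (the weights, names and numbers below are all invented, not drawn from any real vehicle's software): if the car scores candidate manoeuvres by weighted risk to each party, then choosing those weights is precisely the ethical decision being delegated to engineers.

```python
# Hypothetical illustration: scoring manoeuvres by weighted risk.
# Setting these numbers IS the ethical judgement.
RISK_WEIGHTS = {
    "front_occupant": 1.0,
    "rear_occupant": 1.0,  # should back-seat passengers count for less?
    "other_vehicle": 1.0,
    "pedestrian": 1.0,
    "cyclist": 1.0,
}

def manoeuvre_cost(risks):
    """Weighted total risk of one candidate manoeuvre.

    `risks` maps each affected party to an estimated probability of harm (0-1).
    """
    return sum(RISK_WEIGHTS[party] * p for party, p in risks.items())

def choose_manoeuvre(candidates):
    """Pick the candidate manoeuvre with the lowest weighted risk."""
    return min(candidates, key=lambda c: manoeuvre_cost(c["risks"]))

# Two invented candidates: swerving protects the occupants but endangers a cyclist.
options = [
    {"name": "brake hard", "risks": {"front_occupant": 0.3, "rear_occupant": 0.3}},
    {"name": "swerve left", "risks": {"front_occupant": 0.1, "cyclist": 0.6}},
]
print(choose_manoeuvre(options)["name"])  # "brake hard" with equal weights
```

With equal weights the car brakes (total risk 0.6 versus 0.7); cut the cyclist's weight to 0.5 and it swerves instead (0.4 versus 0.6). Nothing in the arithmetic tells the engineer which weighting is right: that is the ethical judgement.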

Aircraft like the Eurofighter Typhoon should be considered as robots, as they cannot fly safely without computer intervention, argues Prof Darwin Caldwell (Credit: Bundesheer/Zinner)

Decisions over life and limb aren't the only ones where ethics need to be considered; there are increasing worries about how automation and robotics affect employment. At the RAEng seminar, Prof Caldwell spoke about agricultural robots, arguing that in his view they are entirely ethical: farm work is arduous, sometimes dangerous and often done at antisocial hours, so using robots is simply a matter of improving farmers' quality of life. The many people who might otherwise be employed as farm labourers might well disagree.

This is clearly a subject that needs careful study, and if that takes engineers out of their comfort zone of "we just build it; it's not down to us how it's used", then so much the better.