Viewpoint
UK roboticist Prof Noel Sharkey calls for a pre-emptive ban on the deployment of autonomous weapons
This is a call to engineers to stand up and demand the prohibition of autonomous lethal targeting by robots. I ask this of engineers because you are the ones who know just how limited machines can be when it comes to making judgments; judgments that only humans should make; judgments about who to kill and when to kill them.
Warfare is moving rapidly towards greater automation where hi-tech countries may fight wars without risk to their own forces. We have seen an exponential rise in the use of drones in Iraq and Afghanistan and for CIA targeted killings and signature strikes in Pakistan, Yemen and Somalia. Now more than 70 states have acquired or are developing military robotics technology.
The current drones are remote-controlled for use against low-tech communities in a permissive air space. With a delay of 1.5 to 4 seconds between moving the joystick and the aircraft's motor response, air-to-air combat and evading anti-aircraft fire are problematic. Moreover, technologically sophisticated opponents would adopt counter-strategies that could render drones useless by jamming communications.
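A rough back-of-envelope calculation shows why that latency matters. This is a minimal sketch using the 1.5 to 4 second range quoted above; the speeds (a Mach-2 aircraft, a 1 km/s anti-aircraft round) are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope: how far things move during remote-control latency.
# The latency range is quoted above; the speeds are illustrative assumptions.

MACH1_MS = 343.0  # speed of sound at sea level, m/s

def blind_distance_km(latency_s: float, speed_ms: float) -> float:
    """Distance travelled before an operator's correction can take effect."""
    return latency_s * speed_ms / 1000.0

for latency in (1.5, 4.0):
    fighter = blind_distance_km(latency, 2 * MACH1_MS)  # hypothetical Mach-2 target
    aa_round = blind_distance_km(latency, 1000.0)       # hypothetical 1 km/s AA round
    print(f"{latency:.1f} s delay: Mach-2 target moves {fighter:.1f} km, "
          f"AA round travels {aa_round:.1f} km before any correction arrives")
```

Between one and four kilometres of "blind" travel per control input is why remote piloting only works in permissive airspace.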
Fully autonomous drones that seek and engage targets without communicating with an operator are not restricted by human G-force or response-time limitations, allowing sharp turns and maneuvers at supersonic speeds. So taking the human out of the control loop has been flagged as important by all of the US military roadmaps since 2004. This would enable the US to lower the number of required personnel in the field, reduce costs, and decrease operational delays. But such systems also fundamentally diminish our humanity.
The UK company BAE Systems will be testing its Taranis intercontinental autonomous combat aircraft in Australia this spring. The US has been testing the fully autonomous subsonic Phantom Ray and the X-47B, due to appear on US aircraft carriers in the Pacific around 2019. Meanwhile, the Chinese Shenyang Aircraft Company is working on the Anjian supersonic unmanned fighter aircraft, the first drone designed for aerial dogfights.

The US HTV-2 program to develop armed hypersonic drones has tested the Falcon at 13,000 mph. The aim is to reach anywhere on the planet with an unmanned combat aircraft within 60 minutes. The hypersonic fully autonomous drones of the future would create very powerful weapons capable of making decisions well outside the speed of plausible human intervention.
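The 60-minute figure is easy to sanity-check. A minimal sketch, assuming the quoted 13,000 mph test speed and treating the farthest point on Earth as half the planet's circumference (ignoring climb, descent and whether sustained hypersonic cruise is even achievable):

```python
# Sanity check of the "anywhere on the planet within 60 minutes" goal.
MPH_TO_KMH = 1.609344
speed_kmh = 13_000 * MPH_TO_KMH     # ~20,900 km/h, the HTV-2 test speed quoted above
antipode_km = 40_075 / 2            # half Earth's circumference: farthest surface point

minutes = antipode_km / speed_kmh * 60
print(f"Antipodal flight time at {speed_kmh:,.0f} km/h: about {minutes:.0f} minutes")
# -> roughly 57 minutes, so the arithmetic behind the 60-minute goal holds up
```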
A big problem is that autonomous weapons would not be able to comply with International Humanitarian Law (IHL) and other safeguards necessary to protect civilians in armed conflict. There are no computer systems capable of distinguishing civilians from combatants or making intuitive judgments about the appropriateness of killing in the way required by the Principle of Distinction. Machines of the future may be capable of some types of discrimination, but it is highly unlikely that they will have the judgment, reasoning capabilities or situational awareness that humans employ in making proportionality assessments. And accountability for mishaps or misuse is a major concern, as so many different groups would be involved in the production and deployment of autonomous weapons.
The US Department of Defense directive on “autonomy in weapons systems” (November 2012), which covers weapons that “once activated, can select and engage targets without further intervention by a human operator”, seeks to assure us that such weapons will be developed to comply with all applicable laws. But this cannot be guaranteed, and the directive green-lights the development of machines with combat autonomy.

The US policy directive emphasizes verification and testing to minimize the probability of failures that could lead to unintended engagements or loss of control. The possible failures listed include “human error, human-machine interaction failures, malfunctions, communications degradation, software coding errors, enemy cyber attacks or infiltration into the industrial supply chain, jamming, spoofing, decoys, other enemy countermeasures or actions, or unanticipated situations on the battlefield”.
How can researchers possibly minimize the risk of unanticipated situations? Testing, verification and validation are stressed without acknowledging the virtual impossibility of validating that mobile autonomous weapons will “function as anticipated in realistic operational environments against adaptive adversaries”. The directive fails to recognize that proliferation means we are likely to encounter equal technology from other powers. And as we know, if two or more machines with unknown strategic algorithms meet, the outcome is unpredictable and could create unforeseeable harm for civilians.
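To illustrate the last point with a deliberately toy model (the agents, thresholds and gain below are invented for this sketch and bear no relation to any real weapon logic), consider two systems that each follow a simple escalation rule the other cannot inspect:

```python
# Toy model: two autonomous systems with mutually opaque escalation rules.
# Entirely hypothetical parameters, chosen only to show outcome sensitivity.

def run(threshold_a: float, threshold_b: float, gain: float, steps: int = 50) -> str:
    tension = 1.0
    for _ in range(steps):
        # Each side escalates whenever perceived tension exceeds its own threshold.
        a_escalates = tension > threshold_a
        b_escalates = tension > threshold_b
        if not (a_escalates or b_escalates):
            return "standoff"
        # Mutual escalation compounds faster than one-sided escalation.
        tension *= gain if (a_escalates and b_escalates) else 1.1
        if tension > 100:
            return "runaway escalation"
    return "unresolved"

print(run(threshold_a=1.2, threshold_b=1.2, gain=1.5))  # -> standoff
print(run(threshold_a=0.9, threshold_b=1.2, gain=1.5))  # -> runaway escalation
```

A difference of 0.3 in a single hidden threshold flips the outcome from standoff to runaway escalation; with real adaptive adversaries the sensitivity would be far greater and impossible to measure in advance.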
The bottom line is that weapon systems should not be allowed to make decisions to select human targets and engage them with lethal force. We need to act now to stop the kill function from being automated. We have already prohibited chemical weapons, biological weapons, blinding lasers, cluster munitions and antipersonnel landmines.
We now need a new international treaty to pre-emptively ban fully autonomous weapons.
I call on you to sign our call for a ban at http://icrac.net/ before too many countries develop the technology and we venture down a path from which there is no return.
Noel Sharkey is professor of artificial intelligence and robotics at the University of Sheffield
I absolutely agree with Professor Sharkey.
As noted in a separate discussion quite recently, we are one step closer to “Skynet”. Perhaps this is another step. If so, a ban on autonomous decision-making is insufficient. It seems to me that we are now at the point where we should be adopting Isaac Asimov’s three laws of robotics.
If we go down this route, there is no way other countries will not develop this technology. That is the main problem! A machine will simply carry out its order to destroy, regardless of whether anyone is there, or whether they are the right people to kill!
Totally agree with a ban on autonomous weapon systems. We must ensure we keep a man in the loop.
The machines are already here; they cannot be uninvented. The responsibility for making sure these machines are used responsibly rests with our respective governments, and we all know how weapons of mass destruction can sway their decisions. I don’t want to appear defeatist, and I respect the Professor’s view, but these machines will stay and get more powerful and dangerous. We as human beings never learn from the past and never lose the ability to wage war for the flimsiest of reasons.
While I completely agree with Noel Sharkey’s intentions, I do not see this being feasible until we make the value of human life much higher than it is today. I also think that until we have universal education that removes the prejudicial hatred engendered by narrow-minded tribalism, we will not achieve the necessary valuation of human life. So a first step is to educate everyone. Not an easy task, but it may be the only option.
I completely support Prof Noel Sharkey in his proposal.
Semi-autonomous killer robots are already deployed in the form of drones.
These semi-autonomous killer robots have proved to be indiscriminate even with a human operator sitting safely thousands of miles away.
That’s right. You’d have us return to the days of massive artillery barrages, floating contact mines, carpet bombing.
We were able to reduce the loss of innocent lives BECAUSE of the intelligence designed into weapons and delivery systems.
It is naive to think banning intelligent and autonomous weapons would make life safer. In fact, only the bad guys would have the smart weapons. The genie’s already out of the bottle.
I agree with Sharkey that automated killing ought to be banned under international law.
But as John Lusby points out, this technology is likely to be developed regardless – no state can afford to stand still if it fears that its adversaries are working on this stuff.
I’d like to raise another worrying scenario, aside from interstate war and the ‘rise of the machines’ type rogue event.
That is: the deliberate oppression or elimination of large swathes of humanity by an elite (human) cabal bent on domination.
Once the necessary functions of agriculture and manufacturing have been substantially automated, an elite group with access to this kind of technology may be tempted to break the age-old contract between ‘producing’ and ‘ruling’ classes of people.
One solution that springs to mind is to democratise the killer robots so that the knowledge of how to make and control them is shared by all. I have to admit this seems like a bit of a nightmare itself, like something the NRA in America would promote.
I’d be interested to know if anyone who shares my cynical outlook has any better solution?
Did I miss something here?
If, as Professor Sharkey believes, it is we engineers alone who have the capability to create these types of weapons, it is surely well within our skill set to add a simple ‘return’ option which would direct the weapon back against its launcher? This surely gives us the power to concentrate the minds of our leadership substantially. I have always believed that August 6th 1945 was indeed a turning point in warfare. Up until that moment, military and political leaders could happily(?) send millions to their deaths, certain that they themselves and their families would be safe ‘at home’: that option has been denied to the leadership (elected or not) of all nations since. Unless I am mistaken, and notwithstanding minor infractions (only involving minor countries still using limited power), we have happily not had WW3.
The idea of some large-scale computer game, played out on screen instead of for real, has a certain appeal. Of course, a lot of those whose livelihoods depend on the conflict, not its outcome, would be out of work, but is that such a bad thing?
Let their skills be redirected to dealing with natural rather than man-made disasters!
The reality is that autonomous machines are going to happen, for good or ill. The question for mankind is how we choose to use them. There are obvious parallels with the nuclear research carried out in the first half of the 20th century: we could have chosen to develop it into a cheap, clean and reliable source of energy, but instead devoted our time and efforts to an arms race.
Necessity is the mother of all invention. The Manhattan Project (arguably) ended an incredibly destructive conflict, and were we in the same position again, I’m sure we (engineers, scientists and politicians) would do whatever was required, irrespective of any retrospective moral judgement. Surely that’s the nature of survival?
James’s comments about the Manhattan Project caused me to recall some interesting history. Colleagues may recall my long and strong link to Du Pont. Not many people know this, but the primary contractor for the majority of the processes used for Manhattan was Du Pont.
Yes, the same people who made fibres and explosives (one of the original names of the firm is Du Pont Powder Co). The firm first really made its name (and close links with the US East Coast Establishment) making powder and shot for the Northern (Union) armies in the Civil War. Interestingly, in 1940/41 when Roosevelt asked the CEO, W Carpenter, to take on the task, he was actually CEO of General Motors as well! I have seen some of the original documents in the Du Pont archive. The contract was for Du Pont to be paid, according to standard accounting practice, for their staff’s work at an hourly rate, for any structures purchased and constructed to be reimbursed at cost, and… $1.00 to represent profit! (I was 2 years old then and not really involved!)
My link continued strongly during my fibre and filament development times, and it was my privilege about ten years ago to be the lead technical/management consultant reviewing the assets, real and trademark goodwill, of Du Pont when its fibre interests were bought by Koch Inc of Kansas. The deal was about £4.5 billion.
Readers may be interested to know that the original ‘thinking’ that started the whole ‘atomic’ effort resulted from a gathering, by invitation, of all the then appropriate scientists in about 1912 in the Hotel Metropole in Brussels: a gathering called by Solvay, the conglomerate (primarily owned by the King of Belgium) that had ‘raped’ its colonies, particularly the Congo, for 100 years. Here, somewhat as with Nobel, major technical and commercial enterprises were almost dictating their interests to elected(?) governments and Establishments. I say elected advisedly: I remind myself that not a single ordinary soldier on any side in WW1 had a vote!
At the moment the link to the petition doesn’t appear to be working.
Does this not contravene the first and zeroth laws of robotics? Suddenly ‘Terminator’ does not seem so far-fetched…
Nice idea to ban autonomous weapons, but it’s very unlikely to occur for two very basic reasons:
1. War is war and in reality winning is the only rule.
2. Pressure from military and civilians to reduce the risk to their own fighters (use of atomic weapons in WW2).
With fully autonomous weapon systems there is a greater chance of conflict escalating in a very short time. With a human in the loop there is always the option not to go ahead. Likewise, the development of RPVs is intended to minimise the risk to the pilots (amongst other reasons). At the current level of technology a fully automated system would be best avoided; at a later date, who knows, but should we go there?
Whilst the Professor makes a good argument for the ban, will this stop those human beings intent on the destruction and killing of innocents, who have the power to discriminate but the cowardice, mindset or arrogance not to? We have just seen in Boston, and continue to see around the world, mindless acts of violence of man against man, or people dying of starvation while others scrap overproduced food. I doubt such a ban would ever stop man’s inhumanity to man.
War is no longer an international confrontation. It is a complicated and messy affair in which the weaker side, inevitably losing the main confrontation, will use the means available to influence events off the battlefield. That is what 9/11, 7/7, Madrid, the Javan(?) nightclub, etc, were all about, let alone the uncounted slaughter of the weak and dispossessed in Iraq, Afghanistan and elsewhere.
Deploying robots run by people in suits at Langley will only make the streets of Manhattan, Knightsbridge and Boston more hazardous.
This is a complex issue that requires considerable thought. The use of unmanned vehicles is not a new idea; the Americans were using drones during the Vietnam era, after all, and some of these were armed with air-to-ground missiles.
It’s not as simple and clear-cut as right and wrong to me. There are instances where the action of a drone could be beneficial to target engagement that is time-sensitive or critical, mainly due to their loiter time and potentially increased capacity to carry and manoeuvre with weapons.
That being said, there should always be a ‘man in the loop’, just like the various nuclear deterrents that have gone on through the years. A completely autonomous system is open to abuse by its creator: “Oops, we didn’t mean to do that, but the drone just took over and decided…”.
The man in the loop should be able to observe and monitor the entire operation of the drone and step in when an inappropriate response is likely to be taken by it.
As long as the intelligence is monitored, this is no different to any other form of warfare, including a manned aircraft operation where a pilot is asked to drop a bomb on a target in a population centre.
No one wants to go to war (lunatic dictators aside), but the objective of warfare is often misunderstood. The aim is to win, and win as easily as possible. Casualties on either side are not the aim. Think of the Cold War: whilst there were a number of actual ‘wars’ that surrounded the bigger picture of the period, neither side went ‘HOT’ for exactly this reason.
So I guess, at the end of it all, the policy seems OK to me, but only if the final decision is made by a man (or woman, let’s not be sexist) who can step in if the rules are being stretched. After all, every service person has the right to disobey an order that they feel to be unlawful. And war does have rules.
A cautious agreement to ban these weapons, as long as there is also a globally binding ban on the use of human shields, roadside bombs, human bombs and other IEDs… you get the point, I hope.
I find it hard to relate to this article. As a submission to ‘The Engineer’ it holds no great technical or engineering insight and seems more of a call to arms (excuse the pun), or a political request to the readership community.
If this is the way ‘The Engineer’ wishes to go, i.e. allowing any professor or accredited individual a free press for their personal beliefs and to lobby parliaments, then this reader will be unsubscribing.
I appreciate that the engineering concept of such platforms (autonomous weapon systems) has moral and ethical issues, but an engineering article is not the place to express views or promote personal politics. Leave that to the forums and the ‘Have Your Say’ area.
For many years there has been intrinsic ethical debate and doubt about many engineering developments (the inability to breathe at speed, with Stephenson’s Rocket; nuclear weapons development, with Oppenheimer; to name but two), but such developments usually spin off more good than bad (the Industrial Revolution and steam transport; nuclear reactors and research).
It may be, and probably is, the case that the spin-off from these ‘skunk works’ developments will improve air safety. Only time will tell.
However, back to the point: I do not feel ‘The Engineer’ is the correct location for individuals to canvass their personal political opinions in what should be an informative, factual publication. Leave that to the tabloids or Speakers’ Corner, please.
I remind the advocates of a “Man In The Loop” that they are ignoring the many instances where a man chooses an ‘innocent’ target.
A “Man In The Loop” does not ensure that “International Humanitarian Law” is met.
“Killer robots” may kill the ‘wrong’ target in error, but “Man” can kill the ‘wrong’ target by intent. Which is worse?
How are we to police this ban? If, say, China or Iran choose to build these weapons, who is going to stop them? Just as with the cluster bomb, we may not know who has actually been squirreling them away until they use them, and by then it may be too late.
Too late Professor.
As is demonstrated by man on a daily basis, words on paper are ALWAYS violated. The much larger question that comes to mind after reading this article and comments:
HOW will mankind fight the machines it has created once the machines have enough “intelligence” to be FULLY self-controlled?
This will happen. It is an unavoidable consequence of our “progress”.
It’s time The Engineer’s editors stepped back from the politics and concentrated on engineering. I am now beginning to think this magazine has been hijacked by left-wing activists.
The genie is out of the bottle; Pandora’s box is truly open, etc, etc. It is too late to ban these devices: a ban assumes a common global morality.
As an ‘arms race’ this is probably less bad than concentrating on nuclear or chemical weapons, so let’s leave them to evolve. Who knows, this just may be the ‘weapon to end all wars’, or at least drive some interesting spin-off technologies.
I also think it’s a reasonable assumption that fewer civilians would be killed by using these devices than by conventional ground warfare for the same objectives.
As for ‘The Engineer’ becoming too politicised: yep, agree with that. Hard to determine if it’s left-wing, but it does sometimes seem to lean that way.
So Ed, there you have it. Back to the non-political engineering stuff, or do the ‘decent thing’…
There will be autonomous weapons whether or not engineers choose to ban them: someone, somewhere will make it happen.
For those playing ‘what if…’ with Hiroshima and Nagasaki: if the weapon had not been used at the end of WWII, then it would have been used at some point later, and the later it was used, the scarier it would have been.
Maybe it was better to use the relatively low yield devices of that time than those available as the technology was developed?
We have survived to this moment in humankind’s existence by sharing just one faculty: a conscience. That is indeed the reason why wars (especially those in modern times that have raged for years) have ended.
Our existence has now reached a precipice. AI will not be programmed with a functioning conscience in the foreseeable future.