Thursday, 28 August 2014

Say no to killer robots

This is a call to engineers to stand up and demand the prohibition of autonomous lethal targeting by robots. I ask this of engineers because you are the ones who know just how limited machines can be when it comes to making judgments; judgments that only humans should make; judgments about who to kill and when to kill them.

Warfare is moving rapidly towards greater automation where hi-tech countries may fight wars without risk to their own forces. We have seen an exponential rise in the use of drones in Iraq and Afghanistan and for CIA targeted killings and signature strikes in Pakistan, Yemen and Somalia. Now more than 70 states have acquired or are developing military robotics technology.

The current drones are remote-controlled for use against low-tech communities in permissive airspace. With a delay of 1.5 to 4 seconds between moving the joystick and the motor response, air-to-air combat or evading anti-aircraft fire is problematic. Moreover, technologically sophisticated opponents would adopt counter-strategies that could render drones useless by jamming communications.

Fully autonomous drones that seek and engage targets without communicating with an operator are not restricted by human G-force or response-time limitations, allowing sharp turns and maneuvers at supersonic speeds. So taking the human out of the control loop has been flagged as important by all of the US military roadmaps since 2004. This would enable the US to lower the number of personnel required in the field, reduce costs and decrease operational delays. But such weapons would also fundamentally reduce our humanity.

The UK company BAE Systems will be testing its Taranis intercontinental autonomous combat aircraft in Australia this spring. The US has been testing the fully autonomous subsonic Phantom Ray and the X-47B, due to appear on US aircraft carriers in the Pacific around 2019. Meanwhile, the Chinese Shenyang Aircraft Company is working on the Anjian supersonic unmanned fighter aircraft, the first drone designed for aerial dogfights.

BAE will be testing its autonomous Taranis UAV this year

The US HTV-2 program to develop armed hypersonic drones has tested the Falcon at 13,000 mph. The aim is to reach anywhere on the planet with an unmanned combat aircraft within 60 minutes. The fully autonomous hypersonic drones of the future would be extremely powerful weapons, making decisions far too fast for plausible human intervention.
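A rough back-of-envelope check, using only figures already quoted here (13,000 mph, and the 1.5 to 4 second operator latency mentioned above), shows why:

    13,000 mph ≈ 5.8 km per second
    5.8 km/s × 1.5 s ≈ 9 km of travel
    5.8 km/s × 4 s ≈ 23 km of travel

In the time today's remote operators need merely to register and transmit a command, such an aircraft would already have flown somewhere between 9 and 23 kilometres.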

A big problem is that autonomous weapons would not be able to comply with International Humanitarian Law (IHL) and other safeguards necessary to protect civilians in armed conflict. There are no computer systems capable of distinguishing civilians from combatants or making intuitive judgments about the appropriateness of killing in the way required by the Principle of Distinction. Machines of the future may be capable of some types of discrimination, but it is highly unlikely that they will have the judgment, reasoning capabilities or situational awareness that humans employ in making proportionality assessments. And accountability for mishaps or misuse is a major concern, as so many different groups would be involved in the production and deployment of autonomous weapons.

The US Department of Defense directive on “autonomy in weapon systems” (November 2012), which covers weapons that “once activated, can select and engage targets without further intervention by a human operator”, seeks to assure us that such weapons will be developed to comply with all applicable laws. But this cannot be guaranteed, and the directive green-lights the development of machines with combat autonomy.

Boeing's Phantom Ray

The US policy directive emphasizes verification and testing to minimize the probability of failures that could lead to unintended engagements or loss of control. The possible failures listed include “human error, human-machine interaction failures, malfunctions, communications degradation, software coding errors, enemy cyber attacks or infiltration into the industrial supply chain, jamming, spoofing, decoys, other enemy countermeasures or actions, or unanticipated situations on the battlefield”.

How can researchers possibly minimize the risk of unanticipated situations? Testing, verification and validation are stressed without acknowledging the virtual impossibility of validating that mobile autonomous weapons will “function as anticipated in realistic operational environments against adaptive adversaries”. The directive fails to recognize that proliferation means we are likely to encounter equal technology from other powers. And as we know, if two or more machines with unknown strategic algorithms meet, the outcome is unpredictable and could create unforeseeable harm for civilians.
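A deliberately simplified sketch illustrates that last point. The short Python toy model below is hypothetical throughout: the update rule is invented for illustration and models no real system. Two agents each react nonlinearly to the other's last move, and two engagements that begin almost identically end completely differently, which is exactly why the combined behaviour of adaptive adversaries cannot be validated in advance.

    # Toy model (hypothetical, illustrative only -- not any real weapon system).
    # Each agent sets its next "posture" as a nonlinear over-reaction to the
    # other's last move, a logistic-style map known to be chaotic at this gain.

    def step(x, y, gain=3.9):
        # Cross-coupled reaction: each side responds to its opponent's posture.
        return gain * y * (1.0 - y), gain * x * (1.0 - x)

    def engagement(x0, y0, steps=40):
        # Iterate the interaction and return the final postures.
        x, y = x0, y0
        for _ in range(steps):
            x, y = step(x, y)
        return x, y

    # Two engagements whose starting conditions differ by one part in a million:
    print(engagement(0.500000, 0.300000))
    print(engagement(0.500001, 0.300000))
    # The final postures bear no resemblance to each other: an unmeasurably
    # small difference in the situation yields completely different behaviour.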

The bottom line is that weapon systems should not be allowed to make decisions to select human targets and engage them with lethal force. We need to act now to stop the kill function from being automated. We have already prohibited chemical weapons, biological weapons, blinding lasers, cluster munitions and antipersonnel landmines.

We now need a new international treaty to pre-emptively ban fully autonomous weapons.

I call on you to sign our call for a ban at http://icrac.net/ before too many countries develop the technology and we venture down a path from which there is no return.

Noel Sharkey is professor of artificial intelligence and robotics at the University of Sheffield


Readers' comments (25)

  • I absolutely agree with Professor Sharkey.

    As noted in a separate discussion quite recently, we are one step closer to “Sky Net”. Perhaps this is another step. If so, a ban on autonomous decision-making is insufficient. It seems to me that we are now at the point where we should be adopting Isaac Asimov’s three laws of robotics.

  • If we go down this route, there is no way other countries will not develop this technology. That is the main problem! A machine will only carry out its order to destroy, whether anyone is there or not, and regardless of whether they are the ones to kill or not!

  • Totally agree with a ban on autonomous weapon systems. We must ensure we keep a man in the loop.

  • The machines are already here; they cannot be uninvented. The ability to make sure that these machines are used responsibly rests with our respective governments, and we all know how weapons of mass destruction can sway their decisions. I don't want to appear defeatist about this article, and I respect the Professor's view, but these machines will stay and get more powerful and more dangerous. We as human beings never learn from the past and never lose the ability to wage war for the flimsiest of reasons.

  • While I completely agree with Noel Sharkey's intentions, I do not see this being feasible until we make the value of human life much higher than it is today. I also think that until we have universal education that removes the prejudicial hatred engendered by narrow-minded tribalism, we will not achieve the necessary valuation of human life. So a first step is to educate everyone. Not an easy task, but it may be the only option.

  • I completely support Prof Noel Sharkey with his proposal.
    Semi-automatic killer robots are already deployed in the form of drones.
    These semi-automatic killer robots have proved to be indiscriminate even with a human operator sitting thousands of miles away in safety.

  • That's right. You'd have us return to the days of massive artillery barrages, floating contact mines, carpet bombing.

    We were able to reduce innocent lives being lost BECAUSE of intelligence designed into weapons and delivery systems.

    It is naive to think banning intelligent and autonomous weapons would make life safer. In fact, only the bad guys would have the smart weapons. The genie's already out of the bottle.

  • Did I miss something here?
    If, as Professor Sharkey believes, it is we Engineers alone who have the capability to create these types of weapons, it is surely well within our skill set to add a simple 'return' option which would direct the weapon back against its launcher? This surely gives us the power to concentrate the minds of our leadership substantially. I have always believed that August 1945 was indeed a turning point in warfare. Up until that moment, military and political leaders could happily? send millions to their death, certain that they themselves and their families would be safe 'at home': that option has been denied to the leadership -elected or not- of all nations since. Unless I am mistaken, and notwithstanding minor infractions (only involving minor countries still using limited power), we have happily not had WW3.

    The idea of some large-scale computer game, played out on screen instead of for real, has a certain appeal. Of course a lot of those whose livelihoods depend on the conflict, not its outcome, would be out of work, but is that such a bad thing?
    Let their skills be redirected to deal with natural rather than man-made disasters!

  • The reality is that autonomous machines are going to happen – for good or ill. The question for mankind is how we choose to use them. There are obvious parallels with the nuclear research carried out in the first half of the 20th century – we could have chosen to develop it into a cheap, clean and reliable source of energy but instead devoted our time and efforts to an arms race.
    Necessity is the mother of all invention. The Manhattan project (arguably) ended an incredibly destructive conflict and, were we in the same position again, I’m sure we (Engineers, Scientists and Politicians) would do whatever was required – irrespective of any retrospective moral judgement. Surely that’s the nature of survival?

  • James's comments about the Manhattan project caused me to recall some interesting past history. Colleagues may recall my long and strong link to Du Pont. Not many people know this, but the primary contractor for the majority of the processes used for Manhattan was Du Pont.
    Yes, the same people who made fibres and explosives (one of the original names of the firm is Du Pont Powder Co). The firm first really made its name (and close links with the US East Coast Establishment) making powder and shot for the Northern (Union) armies in the Civil War. Interestingly, in 1940/41, when Roosevelt asked the CEO, W Carpenter, to take on the task, he was actually CEO of General Motors as well! I have seen some of the original documents in the Du Pont archive. The contract was for Du Pont to be paid -according to standard accounting practice- for their staff's work at an hourly rate, for any structures purchased and constructed to be reimbursed at cost, and... $1.00! to represent profit! (I was 2 years old then and not really involved!)

    My link continued strongly during my fiber and filament development times, and it was my privilege about ten years ago to be the lead technical/management consultant reviewing the assets -real and trademark goodwill- of Du Pont when its fiber interests were bought by Koch Inc of Kansas. The deal was about £4.5 billion.

    Readers may be interested to know that the original 'thinking' that started the whole 'atomic' effort resulted from a gathering -by invitation- of all the then appropriate scientists in about 1912 in the Hotel Metropol in Brussels: a gathering called by Solvay, the then conglomerate (primarily owned by the King of Belgium) that had 'raped' its colonies -particularly the Congo- for 100 years. Here, somewhat like Nobel, were major technical and commercial enterprises almost dictating their interests to elected? governments and Establishments. I say elected advisedly: I remind myself that not a single ordinary soldier on any side in WW1 had a vote!
