From the earliest spears to the development of the longbow and beyond, humans have designed their projectile weapons to maximise lethality whilst keeping as far away from the target as possible.

With this logic in mind, it seems inevitable that moves are afoot to create autonomous robotic weapons systems that can go about the terrifying and ugly business of warfare whilst minimising risks to human combatants.
This appeared to be the case in the US, where a proposed upgrade to the Advanced Targeting and Lethality Automated System (or Atlas, which assists gunners on ground combat vehicles with aiming) would have seen the system “acquire, identify, and engage targets” faster.
As noted by The Engineer, the US Army then backtracked on this initial announcement to emphasise the role of humans in the system and its adherence to directive 3000.09, a US Department of Defense set of guidelines ‘designed to minimise the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements’.

The military directive joins International Humanitarian Law and other safeguards brought in to protect civilians in armed conflict, a topic discussed by Prof Noel Sharkey in a 2013 Engineer post titled ‘Say no to killer robots’.
Concerns are such that 26 national governments support the Campaign to Stop Killer Robots, which seeks an international treaty banning fully autonomous AI in lethal weapon systems, for reasons that include an increased propensity for war if the fighting is undertaken by machines.
Humans exist in a perpetual state of conflict, but does that excuse the development of ‘killer robots’? Not for 45 per cent of respondents to last week’s poll who believe there must be ‘a human finger on every trigger’. A total of 13 per cent of respondents agree – like the Chinese government – that their use should be banned but R&D should continue. Just over a fifth (21 per cent) said there should be a blanket ban and the same number of respondents opted for none of the above.
What do you think? Join the debate in Comments below, but familiarise yourself with our guidelines for the content of comments before submitting.
Haven’t we been here before? I seem to recall a European power in the 1930s that tried to automate the mass killing of people. Have we learned nothing?
The US could do a lot of good in the world if it were to spend some of the more than $700 billion it gives each year to the military-industrial complex on causes that help the people of this planet, rather than on finding ‘innovative’ ways to murder them.
Naturally, China is spending nothing on this or any other weapons system, not a penny.
All those extra warships, railguns and so on were found in a Christmas cracker.
Germany (let’s not mince our words, since it seems WW2 was fought by ‘Nazi land’, a fictitious realm that had Nazis in it but no Germans) had a standing army of 2.1 million highly trained and well-equipped men at the start of WW2. Pretending that others can be trusted not to use their very large armed forces against you has proved to be very poor judgement.
This sounds very much like the League of Nations trying to ban bomber aircraft in the 1930s. You cannot un-invent something, so we must try to have effective controls, but even this doesn’t allow for ‘rogue’ nations.
But surely we are missing a trick here! Every army, navy and air force in the world tries to make its ‘human’ element completely ‘automatic’, i.e. attack and kill, without thought, whoever we place in opposition to you. So what’s the difference in programming a computer to do the same?
A person can, and does, refuse unjust commands; a machine will carry out those commands without hesitation. If robots are to be used in warfare, there must be someone who can take responsibility for their actions.
Unfortunately, not everyone being commanded has the insight, wider contextual awareness or ability to refuse. In such situations, obedience is often on pain of death. Once the genie is out of the bottle (it already is) there is no getting it back in again; doing so would depend on military enforcement, and what does that bring?
Autonomous weapons systems breed autonomous defence systems. Perhaps we can take human casualties out of the equation and settle conflicts with some type of robotic chess.
Any ban would have to overcome two hurdles: jurisdiction and enforcement. The five permanent members of the United Nations Security Council (China, France, Russia, the UK and the US), arguably the five countries most likely to want to deploy these weapons, can each unilaterally veto any resolution attempting to impose a ban …
Even assuming the UN could get a resolution passed, economic sanctions are relatively weak, and the only really effective method of enforcement at state level is the threat of military force, which is not to be entered into lightly given that the offending country now has an army of ‘killer robots’!
We are all tax robots; in the army we are programmed to kill on command, so what is new about the robot you are talking about? It kills for longer, is that the problem? It is not corrupt, is that an issue? Shouldn’t the killing itself be the discussion, or does that spoil the fun? And what robot is making these articles? It shows a bit of bad programming 😉
I think that we should use robots to fight our wars.
No human lives would be lost.
Money would be spent on engineering to research and manufacture the best robot possible.
Craig Charles would be made up. Win-win situation.
Global robot wars are what we need. 🙂
Well, if it pleases Craig Charles…
I should probably point out that Noel Sharkey is a current Robot Wars judge and a leading campaigner against killer robots.
Suppose you are at sea and there is an inbound cruise missile aimed at the conn. You want a system that can react within milliseconds to a recognised threat and reliably take it out, rather than waiting for a human operator to respond, get to battle stations, arm, and trigger the weapon.
Too slow, and you and everyone else aboard are now dead.
Human intervention will be unreliable. Unfortunately, the best way of understanding this technology and devising systems against it is to continue with research. Just because such weapons may be banned does not stop someone from inventing and developing them. And it is becoming cheaper: just buy a drone, programme its flight and get it to drop a small bomb, all for a few thousand pounds. And, as we are seeing in the Middle East, how do you defend against people who believe that death in battle is the quickest way to heaven or Valhalla and don’t obey the rules anyway?
So at the same time as China starts to take its rightful place as the world’s premier AI superpower, Westerners want to convince it to give up its advantage?
Good luck with that…
Rather than ban autonomous weapons why not just ban war? Anyone who thinks either will happen is living in a dream.
If only those cavemen had banned wooden clubs.
You have to develop it to keep pace, but if you push development too hard and frighten your enemies, they will spend more to keep up with you. So I personally think it should be done as quietly as possible and shouldn’t be bragged about, so that it’s there if needed and yet doesn’t create too much pressure on the opponent to have an answer. The huge danger is that autonomous technology is in lots of consumer products right now, which lets anyone, including terrorists and opponents, cobble together smart weapons. That is a serious problem.
All forms of war lead to escalation: these multimillion-dollar weapons will be met by cheap petrol cans, IEDs and RPGs; budgets will need to be increased; and so on it goes.
This debate is interesting and needed. At this stage I feel that any action taken should have human input, and a human should bear responsibility for it.
As the discussion has moved on to war in general, I thought I’d mention the undeservedly little-known Lewis Fry Richardson (Quaker, statistician, meteorologist and pacifist), who attempted, in his “Statistics of Deadly Quarrels”, to analyse the causes of war. It is well worth reading, at least in summary, although his conclusions are sombre.
q.v. https://www.stat.berkeley.edu/~aldous/157/Papers/hayes.pdf
I worry about the answer in the poll, “There must be a human finger on every trigger”.
There is on UAV missions, but if the person doesn’t know the context of the kill they are being asked to make, this completely negates the point of having that human there in the first place.
Let me put it another way.
If somebody tells the operator of the UAV to shoot, he will, simple as that. As far as he is concerned, that is a legitimate target, without question.
All of this technology is quite worrying; it reminds me of the classic Jurassic Park quote: “You were so preoccupied with whether we could do this, you didn’t stop to think whether we should.”
An alternative view. Robots don’t go wild and commit war crimes because they saw their mates blown up by an IED the day before. Also, you can define rules of engagement that order a robot to sacrifice itself if it is not sure. You can’t ask a soldier to do nothing if he’s not sure if the woman who ignored the call to stop walking towards him is pregnant or has a bomb under her dress.
Maybe a system that can be given explicit and precise ethical rules is morally more responsible than a wilful, self-interested human with a lethal weapon?
A wise man once stated: “Build this system, and our enemies will go bankrupt attempting to keep up, or else they will become our friends.” Which is more dangerous, friends or enemies?
In that scenario it is a defensive system against an unmanned missile. I don’t see anyone objecting much to that. It’s a very different scenario to allowing AI/robots to make their own decisions to use lethal force.
Old men have been telling young men just that in every society since records began! That was to keep themselves (the old men) in power and to retain access to the tax base and revenue.
The difference this time is that it’s not secular but religious, but the lie is just as potent.