
Developing autonomous fighting machines

War machines: Accelerating the development of autonomous defence systems raises both ethical and technological questions

If you’re the sort of person who takes Hollywood blockbusters seriously, then defence research is an inherently risky business. Advanced weaponry rarely works and it’s left to plucky mavericks to save the day. Most risky of all are autonomous robotic defence systems; according to the Terminator films, they’re bound to become self-aware, decide that humanity is a threat and unleash a nuclear armageddon to wipe pesky Homo sapiens off the face of the planet.


In reality, autonomous systems are firmly in place as an integral part of the armed forces’ arsenal. For instance, unmanned aerial vehicles (UAVs), often referred to as drones, are regularly used to perform reconnaissance and attack missions, with BAE Systems’ Mantis among the systems carrying out observation and the General Atomics Predator capable of deploying missiles and other weaponry.

What’s more, the use of autonomous systems in defence looks set to increase in the coming years. The UK’s armed forces have asked for studies of numerous scenarios involving unmanned vehicles both on the ground and in the air, with the aim of removing humans, wherever possible, from hazardous situations.

On the defensive: the UK’s armed forces have commissioned a number of studies involving unmanned vehicles

In the UAV sector, there are several projects under development. Watchkeeper, a reconnaissance drone, is produced by Thales and Elbit. Capable of all-weather operation, it can carry day and night cameras and a surveillance radar system and can fly for 16 hours within a radius of 150km. It is due to enter service later this year.

Another project is being developed as a joint venture with France. Scheduled for deployment at the end of the decade, Scavenger will be able to perform surveillance and carry out attacks over land and sea. Even further off is Taranis, an offensive ’stealth UAV’ being developed by a consortium of BAE, GE Aviation, Rolls-Royce and QinetiQ.

Autonomous systems are also part of the projected ground forces of the future. BAE’s Future Protected Vehicle programme has three unmanned concept vehicles: Pointer, a small, tracked robot vehicle that can undertake missions normally done by a single soldier; Raider, a reconfigurable skirmishing vehicle; and Wraith, a ’stealth tank’. Other autonomous vehicles, such as unmanned supply lorries and mine-clearing vehicles, are already under development.

While many in the defence industry are encouraged by this research, others are voicing concerns over the implications of autonomous defence systems. Their deployment raises ethical issues, highlighted in a report from the Ministry of Defence’s Development, Concepts and Doctrine Centre (DCDC).

DCDC has produced a ’Joint Doctrine Note’ (a report intended to guide the development of technology in line with the strategy and thinking that governs how it might be used) covering unmanned aircraft systems, devoting a chapter to the legal, moral and ethical issues raised by unmanned fighting vehicles.

These issues, said Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, are intimately connected with the way that such systems will be controlled and the level of autonomy with which the vehicles will be equipped. ’Everyone talks about there being a human in the loop; somebody controlling the aspects of the system that have ethical ramifications, such as target selection and deployment of weapons,’ Sharkey told The Engineer. ’That’s become a mantra now. The problem is, I don’t really believe it.’

“You can’t put the Geneva Convention into a computer; it isn’t specified for it”

Prof Noel Sharkey, University of Sheffield

Sharkey’s concerns stem from the direction in which research into autonomous defence systems is heading: away from single drones and towards a co-ordinated team, or swarm, of vehicles with a specified mission and location. ’There’s a shift from man in the loop to man on the loop, where you have a swarm of robots with one person under executive control. But one person can’t pay attention to more than one robot, so inevitably there will be more autonomy; the robots will be required to make more decisions.’

Control systems specialist Prof Nick Jennings of the University of Southampton, who works on projects involving autonomous systems communicating with each other, explained the ’man-in-/on-loop’ concept. ’The basic premise is you want the robots to worry about the basic mechanics of their co-ordination and whatever they’re doing on the battlefield or surveillance site on their own,’ he said. ’For example, in a reconnaissance role, a human would define the area the swarm would observe; humans are better at making decisions at a strategic level, but working out how best to cover that area in the most efficient way and where each element of the swarm is in relation to the other elements would be worked out by the UAVs themselves.’
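
A minimal sketch of the division of labour Jennings describes, with the human supplying only the strategic decision (the box to observe) and the vehicles partitioning coverage among themselves. Everything below (the class, the grid decomposition, the nearest-vehicle rule) is our illustration, not code from any real system.

```python
# Illustrative sketch of human-on-the-loop area coverage: a human defines
# the rectangle to observe; the UAVs partition it among themselves.
# All names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class UAV:
    name: str
    x: float
    y: float

def partition_area(uavs, x_min, x_max, y_min, y_max, cells_per_side=10):
    """Split the human-specified rectangle into grid cells and assign each
    cell to the nearest UAV (a crude Voronoi-style partition)."""
    assignment = {u.name: [] for u in uavs}
    dx = (x_max - x_min) / cells_per_side
    dy = (y_max - y_min) / cells_per_side
    for i in range(cells_per_side):
        for j in range(cells_per_side):
            cx = x_min + (i + 0.5) * dx  # cell centre
            cy = y_min + (j + 0.5) * dy
            nearest = min(uavs, key=lambda u: (u.x - cx) ** 2 + (u.y - cy) ** 2)
            assignment[nearest.name].append((cx, cy))
    return assignment

# The human's strategic input is just the bounding box; the rest is
# worked out by the swarm.
swarm = [UAV("u1", 0, 0), UAV("u2", 9, 0), UAV("u3", 5, 9)]
for name, cells in partition_area(swarm, 0, 10, 0, 10).items():
    print(name, "covers", len(cells), "cells")
```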

When it comes to offensive missions with swarms of armed UAVs, Jennings agrees the situation becomes more tricky. ’My view is that you still want overall control by a human,’ he said. ’We don’t want UAVs selecting targets and working out how best to carry out an attack, but some enaction of instructions where the targets are selected by humans is an extension of what happens with advanced guided missiles. I think it’s acceptable that the drones in an armed swarm would work out between them which element would enact an attack order. A human provides the order and identifies the target, and which of the gun barrels actually fires could be decided autonomously.’
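
The split Jennings considers acceptable, in which a human provides the order and the target and the swarm only decides which member fires, can be sketched in a few lines. The ’nearest armed vehicle’ rule and every name below are hypothetical, not drawn from any actual weapons system.

```python
# Hypothetical sketch: a human issues the attack order and names the target;
# the swarm's only autonomous decision is which member enacts it.
from dataclasses import dataclass

@dataclass
class ArmedUAV:
    name: str
    x: float
    y: float
    rounds: int

def select_shooter(swarm, tx, ty):
    """Pick the member best placed to enact a human-issued order. The rule
    here (nearest vehicle with ammunition left) is purely illustrative."""
    candidates = [u for u in swarm if u.rounds > 0]
    if not candidates:
        return None
    return min(candidates, key=lambda u: (u.x - tx) ** 2 + (u.y - ty) ** 2)

def enact_order(swarm, order):
    # The order, including the target, comes from a human; anything not
    # explicitly authorised is refused.
    if not order.get("human_authorised"):
        raise PermissionError("no human authorisation - order rejected")
    shooter = select_shooter(swarm, order["target_x"], order["target_y"])
    print(f"{shooter.name} assigned to engage" if shooter else "no capable vehicle")

enact_order([ArmedUAV("u1", 0, 0, 2), ArmedUAV("u2", 4, 4, 0)],
            {"human_authorised": True, "target_x": 3, "target_y": 3})
```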

Hisham Awad, who is leading BAE’s Future Protected Vehicle programme, said that the problems come down to the human ability to process information. ’Information doesn’t necessarily make someone wiser. How much information can you give someone before they overload?

“I think it’s acceptable that drones in an armed swarm would work out between them which element would attack”

Prof Nick Jennings, University of Southampton

The critical thing is what the interface looks like. It’s pretty easy to do that when you’re controlling a single UAV or autonomous ground vehicle (the operator sees exactly what the vehicle sees), but as soon as you move to multiple vehicles, you need to select what information you display and it needs to be the information that is critical to the decisions the operator has to make.’
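
Awad’s interface problem is, at heart, one of filtering and ranking: with many vehicles, the display must carry only what bears on the operator’s decisions. A toy sketch of that selection, with the report fields and thresholds invented for illustration.

```python
# Toy sketch of the multi-vehicle interface problem: rather than relaying
# raw feeds, rank and filter reports by relevance to operator decisions.
# Fields, categories and the cut-off are invented.

def triage(reports, max_shown=5):
    """Keep only reports that bear on an operator decision, most urgent first."""
    critical = [r for r in reports
                if r["kind"] in ("possible_target", "collision_risk", "comms_lost")]
    critical.sort(key=lambda r: r["urgency"], reverse=True)
    return critical[:max_shown]

feed = [
    {"vehicle": "u1", "kind": "telemetry", "urgency": 0.1},
    {"vehicle": "u2", "kind": "possible_target", "urgency": 0.9},
    {"vehicle": "u3", "kind": "collision_risk", "urgency": 0.7},
]
for r in triage(feed):
    print(r["vehicle"], r["kind"])
```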

The increasing speed and power of UAVs is another concern for Sharkey. ’When you have a pilot, the aircraft is limited by the g-forces they can handle, but with a UAV the only thing you have to worry about is the structural integrity. You can go faster, turn quicker, do manoeuvres that would make a pilot black out. You could have a swarm of UAVs moving faster than human decision making. The human could still be in the loop, but the role would be reduced.’

“How much information can you give someone before they overload?”

Hisham Awad, BAE Systems Future Protected Vehicle Programme

This, he said, is a particular problem in the kind of warfare in which the UK and the US are currently engaged, involving relatively small skirmishes with small groups of insurgents rather than the large pitched battles of the two World Wars. ’In a conventional battle situation, it would be perfectly legal to use autonomous weapons (you can identify where the enemy is), but with insurgent warfare just identifying targets is a big problem. The Geneva Convention has a principle of distinction, which means you’ve got to be able to distinguish between combatants and non-combatants. Now, we have face-recognition systems, so if we know what someone looks like and we can hold them up for long enough for the system to work, we could do something. But would they work in the heat of battle, especially with high-speed UAVs? Certainly not at the moment.’


Sharkey thinks that, in a practical sense, control will be via ’a kind of autonomy’. He said: ’It’ll move at very high speed and the operator will get snapshots with the occasional chance of veto.’
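
That ’occasional chance of veto’ maps onto a familiar supervisory-control pattern: the system announces an intended action and proceeds unless a human objects within a deadline. A hypothetical sketch of the pattern, not any fielded system.

```python
# Hypothetical "man on the loop" veto window: announce the intended action,
# wait briefly for an operator objection, then act or abort.
import queue
import threading

def propose_with_veto(action, vetoes, window_s=2.0):
    """Announce the action, wait up to window_s for a veto, then act or abort."""
    print(f"intending: {action} (veto window {window_s}s)")
    try:
        vetoes.get(timeout=window_s)  # any message counts as a veto
        print(f"vetoed: {action}")
        return False
    except queue.Empty:
        print(f"executing: {action}")
        return True

vetoes = queue.Queue()
# Simulate an operator vetoing after half a second.
threading.Timer(0.5, lambda: vetoes.put("abort")).start()
propose_with_veto("engage waypoint 7", vetoes)
```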

Within the defence industry, attention is turning to control systems for unmanned vehicles, but the focus is currently more on the operability of the system than on matters such as target identification. ’The control systems for the Future Protected Vehicle concepts are the most important systems on board,’ said Awad. ’We’re concentrating heavily on non-line-of-sight communication and, even more critically, the navigational algorithms. What the vehicle looks like is immaterial to this; we’re working on vehicle-agnostic systems.’

The philosophy Awad is using is a multi-level control system with differing levels of intelligence. ’All of the vehicles would have the basic instruction set and we’d add to that depending on the complexity of the task. But as soon as you have machines working alongside people, that task becomes very different and very, very difficult.’
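
One way to read that multi-level philosophy is as a layered architecture: every vehicle carries the base instruction set, and higher-level behaviours are stacked on as the task demands. A speculative sketch of the structure; the layer names are ours, not BAE’s.

```python
# Speculative sketch of a multi-level control stack: every vehicle has the
# base layer; further layers are added with task complexity.

class BaseControl:
    """Instruction set common to all vehicles."""
    def capabilities(self):
        return {"drive", "stop", "report_position"}

class NavigationLayer(BaseControl):
    """Adds route following and obstacle avoidance."""
    def capabilities(self):
        return super().capabilities() | {"follow_route", "avoid_obstacle"}

class CooperationLayer(NavigationLayer):
    """Adds swarm co-ordination for the most complex tasks."""
    def capabilities(self):
        return super().capabilities() | {"share_map", "negotiate_task"}

for vehicle in (BaseControl(), NavigationLayer(), CooperationLayer()):
    print(type(vehicle).__name__, sorted(vehicle.capabilities()))
```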

Unmanned vehicles tend to operate in a very controlled environment, Awad explained; for example, UAVs only fly in regions where they will not encounter any enemy aircraft. ’With ground systems, if there are any humans walking about, you need to know where they are. The big area for improvement is to make sure the vehicle navigation algorithms are very well tested and robust. At the moment, we’re working on collision avoidance and we’re only doing that with unarmed vehicles.’
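
The ground-vehicle requirement Awad states, knowing where people are and navigating robustly around them, can be caricatured as treating each detected person as an exclusion zone the planner must respect. Purely illustrative, with an invented safety radius.

```python
# Purely illustrative: treat each detected person as an exclusion zone and
# reject any waypoint that comes too close. The radius is invented.
import math

SAFE_RADIUS_M = 10.0

def waypoint_is_safe(wx, wy, people):
    """Accept a waypoint only if it keeps SAFE_RADIUS_M from every person."""
    return all(math.hypot(wx - px, wy - py) >= SAFE_RADIUS_M for px, py in people)

detected_people = [(5.0, 5.0), (30.0, 12.0)]
for wp in [(6.0, 6.0), (50.0, 50.0)]:
    print(wp, "safe" if waypoint_is_safe(wp[0], wp[1], detected_people) else "too close")
```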

When it comes to more complex missions, however, the concept of a robot running an artificial-intelligence system is, Sharkey thinks, unlikely. ’You need human reasoning behind it. You can’t put the Geneva Convention into a computer; it isn’t specified for a computer and it contains ambiguity and apparent contradictions that need human reasoning to make sense of. We have no idea how reasoning works or even where it comes from.’

But one of the biggest ethical questions is what the use of unmanned systems does to the concept of war. The DCDC report goes into detail about this: using unmanned systems on the battlefield removes the element of risk to personnel. ’This raises a number of areas for debate, not least being the school of thought that suggests that for war to be moral (as opposed to just legal) it must link the killing of enemies with an element of self-sacrifice, or at least risk to oneself,’ the report says.

This poses the risk of war becoming too easy. ’Here’s the big danger,’ said Sharkey. ’About 46 countries are developing robotic defence systems and there’s been no international discussion whatsoever. I can easily picture aerial robots dropping ground robots and using a few special forces to guide them, but that lowers the barriers for going to war. Nobody wants to see body bags, but if we don’t have the potential for body bags, we can go to war whenever and wherever we like.’

flying solo - birds of prey

Vulture II and SolarEagle are just two of the UAVs being developed for autonomous defence

Among the applications being developed for autonomous defence is DARPA’s Vulture II programme, for which it has awarded an $89m (£54m) contract to Boeing. This involves developing a UAV that can stay on station for a long period, observing a location and reporting back to base. Boeing is developing a solar-electric craft called SolarEagle, which will be able to stay in the stratosphere for five years.

SolarEagle is a 400ft (122m)-wingspan craft with solar cells in its wings that generate electricity during the day and fuel cells to store the power for use during the night. ’It’s a daunting task, but Boeing has a highly reliable solar-electric design that will meet the challenge in order to perform persistent communications, intelligence, surveillance and reconnaissance missions from altitudes of more than 60,000ft [18.3km],’ said programme manager Pat O’Neil. First flights are planned for 2014, with a one-month duration as a first target.
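
The design constraint behind that claim, that the daytime solar surplus must carry the craft through the night, is easy to show with back-of-envelope arithmetic. Every figure below is a made-up placeholder, not a SolarEagle specification.

```python
# Back-of-envelope day/night energy balance for a solar-electric craft.
# All numbers are hypothetical placeholders, not SolarEagle figures.

solar_power_kw = 30.0     # assumed daytime array output
load_kw = 8.0             # assumed continuous flight and payload load
day_h, night_h = 14.0, 10.0
storage_efficiency = 0.5  # assumed fuel-cell round-trip efficiency

surplus_kwh = (solar_power_kw - load_kw) * day_h
night_need_kwh = load_kw * night_h / storage_efficiency
print(f"stored by day: {surplus_kwh:.0f} kWh; needed for night: {night_need_kwh:.0f} kWh")
print("feasible" if surplus_kwh >= night_need_kwh else "not feasible")
```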

SolarEagle would carry a variant of a video-capture system called Gorgon Stare. This is a spherical array of nine cameras, providing an extremely wide-angle view of an area as large as a city. SolarEagle would act as a long-term ’spy in the sky’, monitoring people and vehicles, with unusual patterns triggering more detailed reconnaissance or offensive action.
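
In its simplest form, the ’unusual patterns trigger closer reconnaissance’ pipeline reduces to threshold tests over track statistics. A toy version, with every field and number invented.

```python
# Toy escalation rule for wide-area surveillance: flag tracks whose
# behaviour departs from the norm. Fields and limits are invented.

def flag_unusual(tracks, speed_limit_kmh=90.0, loiter_limit_min=30.0):
    """Return tracks worth a closer look: too fast, or loitering too long."""
    return [t for t in tracks
            if t["speed_kmh"] > speed_limit_kmh or t["loiter_min"] > loiter_limit_min]

observed = [
    {"id": "veh-1", "speed_kmh": 40.0, "loiter_min": 2.0},
    {"id": "veh-2", "speed_kmh": 130.0, "loiter_min": 0.0},  # unusually fast
    {"id": "ped-7", "speed_kmh": 4.0, "loiter_min": 55.0},   # loitering
]
for t in flag_unusual(observed):
    print("escalate:", t["id"])
```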

brain wave - unmanned territory

Systems inspired by the human mind could help machines to make battlefield decisions

An intriguing DARPA programme aims to create electronic systems inspired by the human brain to help unmanned systems make decisions on the battlefield. Dubbed SYNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics), the project is developing systems with integrated communication networks linking dense islands of processing devices, similar to the way that neurons and synapses are organised in the brain.

The project has already completed a first phase, in which researchers built nanometre-scale components that could adapt the connection strength between two ’electronic neurons’. The next phase is more challenging: to move this organisation into a larger-scale system and to develop organisation between groups of synapses and neurons.

’So far, SYNAPSE has successfully demonstrated all the core hardware, architecture, simulation and evaluation capabilities needed for a new generation of intelligent electronic machines,’ said programme manager Todd Hylton. ’Now that all the building blocks are available, our next task is to start building functioning systems out of them.’
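
In software terms, the first-phase result, components that adapt the connection strength between two ’electronic neurons’, is a plasticity rule. A minimal Hebbian-style sketch of the idea; this is an illustration, not SYNAPSE’s actual mechanism.

```python
# Minimal Hebbian-style sketch: the connection strengthens when the two
# "electronic neurons" are active together, and decays slowly otherwise.
# Illustration only; not SYNAPSE's actual mechanism.

def hebbian_update(weight, pre_active, post_active, rate=0.1, decay=0.01):
    """Strengthen the synapse on coincident activity, otherwise forget slightly."""
    if pre_active and post_active:
        weight += rate * (1.0 - weight)  # saturate towards 1
    else:
        weight -= decay * weight         # slow forgetting
    return weight

w = 0.2
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 0), (1, 1)]:
    w = hebbian_update(w, pre, post)
    print(f"pre={pre} post={post} -> weight={w:.3f}")
```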


Readers' comments (10)

  • It seems as though we are close to developing a Terminator-type device.
    The reality is probably very different, as much of the speculation on autonomous systems is almost certainly marketing spin.
    However, we should remember the statements in the last paragraph and think long and hard about them.
    Because, strangely, to quote Steven Moffat: "Demons run when a good man goes to war"


  • This article does a disservice to the reader by confusing “remotely controlled” and “autonomous” military vehicles, especially when it says that such systems are already in use. Remotely controlled vehicles are indeed widely used today, autonomous vehicles are not. Autonomy is precisely the property of NOT being controlled by a human operator. Putting lethal weapons under the control of autonomous software, without human supervision, is an extremely dangerous proposition.


  • On the contrary, many of the unmanned air vehicles in use today are indeed autonomous for most of their operation, with only take-off and landing controlled remotely.

  • Yes, UAVs can fly on autopilot. The worrisome bit is autonomous control of lethal weapons.


  • When the first war crime by an autonomous war machine is made public, who will be brought to trial for it? The commander in charge of the machine or the software engineer who wrote the decision algorithm?
    I say "made public" because Predator has already committed war crimes: bombing a "suspect Taliban house" and then later re-bombing those who were searching the rubble for survivors! The US military hailed the success of their technology while glossing over the moral implications of killing people who were far from clearly identified as enemy combatants.


  • Currently I am unaware of any level of autonomy greater than that of an autopilot to eliminate the need for monitoring the platform in transit to or from the patrol area.
    Global Hawk and Predator both require a ground station for operators either in Theater or in the Home Nation. I am unaware of bombings without a man being in the loop.


  • See also "War Evolves With Drones, Some Tiny as Bugs": http://www.nytimes.com/2011/06/20/world/20drones.html


  • anyone who has seen BSG (or even War Games) already knows how this will end...


  • Why can we not think of eliminating all war machines? No one wins in a war; only the innocent suffer. Who decides to wage war? The leaders of two or more countries, based on their perception and ego, decide to wage war. Robots will destroy robots, and taxpayers' money meant for development is lost. Let international law enforcers listen to the arguments of the leaders and arbitrate, with the decisions being binding. Abolish all soldiers and war machines across the globe. Let only the UN and ILO arbitrate and decide when there are conflicting thoughts in the minds of leaders. Let the UN be the policeman for all nations. Use the funds generated by abolishing war machines for development. I dream of a world without war machines.


  • It's worth reading "Second Variety" by Philip K Dick: 60 years later we are on the way to realising his chilling vision.


  • There is nothing special about robotics. I have worked as a field engineer for the past thirty years. When automated machines work, all is peace and love. But just one glitch and a simple automated machine can become a killer.
    I realize that this backward step in our human evolution will manifest itself, and the results will be savage and painful.
    If we would only exercise a small proportion of the energy that goes into these killers, the world could be a better place. Just stop falling into the fear trap that politicians want for our world.
    We are better than that!

