Monday, 22 December 2014

Autonomous machines prompt debate

Legislators and opinion-formers need to start thinking about how autonomous machines like driverless trucks, surgical robots and smart homes that keep an eye on their occupants could affect society, according to the Royal Academy of Engineering.

In a new report, the Academy points out that the technology to develop such systems is either already available or closer to reality than many people think — and the legal system needs to catch up fast.

‘We’re very used to automatic systems, such as the braking assistance technology now standard in most cars’, said Prof Will Stewart of Southampton University, one of the contributors to the report. ‘But traditionally, engineers have designed these things so that they’re used with a human operator. As we move towards autonomous systems, we’re taking the human further and further away from the machine.’

The report’s authors looked at two particular types of system — autonomous transport, which they believe is perhaps ten years away and likely to appear first on heavy lorries; and smart homes, particularly for the elderly, who could benefit from health monitoring systems and even devices that provide ‘companionship’, such as robotic pets.

‘We expect to see a new generation of systems that will become tools that are in some respects almost like people, but will also pose some of the same ethical and management issues as people do,’ Stewart said. ‘We expect great benefits — but also some new attitudes to our creations.’

Autonomous trucks are a good example; as Lambert Dopping-Hepenstal, science and technology director of BAE Systems’ Military Air Solutions and a member of the Academy’s engineering ethics working group, pointed out, autonomous vehicles already operate in mines and warehouses. Such trucks would use lasers and radar to monitor their surroundings and neighbouring cars, and would have the Highway Code programmed into them.

‘They’d be much more predictable than trucks driven by humans; they wouldn’t pull out suddenly, they would always pull in if there was a problem; they’d give way where they were supposed to,’ Dopping-Hepenstal said. ‘But also, there are bound to be problems. If there’s an accident involving one of these things, who’s responsible? The system’s engineer? The manufacturer?’
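The predictable behaviour Dopping-Hepenstal describes amounts to a fixed, auditable rule set rather than human judgment. As a purely illustrative sketch — the sensor fields, thresholds and priority ordering below are assumptions, not taken from the report — such rules might look like this:

```python
from dataclasses import dataclass

# Hypothetical driving-rule sketch for an autonomous truck.
# Field names and the 50 m threshold are illustrative assumptions.

@dataclass
class Perception:
    obstacle_ahead_m: float             # distance to nearest obstacle (lidar/radar)
    fault_detected: bool                # any on-board system fault
    vehicle_approaching_on_right: bool  # e.g. at a UK roundabout

def decide(p: Perception) -> str:
    """Apply Highway-Code-style rules in a fixed priority order."""
    if p.fault_detected:
        return "pull_in"      # always pull in if there is a problem
    if p.vehicle_approaching_on_right:
        return "give_way"     # give way where required
    if p.obstacle_ahead_m < 50.0:
        return "slow_down"    # never pull out suddenly; brake early
    return "proceed"

print(decide(Perception(120.0, False, True)))  # give_way
```

Because every decision traces to an explicit rule, behaviour is reproducible — which is exactly what makes the liability question sharp: when the rules produce a bad outcome, the fault lies somewhere in their design, not with an operator.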

One problem, explained Chris Elliott, a consultant engineer and barrister who also contributed to the report, is that the legal framework isn’t set up to deal with this sort of situation.

‘The law is built around cause and effect, but it’s bad at assessing systems, where each individual part is harmless but the whole might be harmful,’ he said. ‘It’s still in the age of automation, where the role of a human operator is well-defined.’

The legal and ethical systems have to catch up, he added, and the engineers developing these systems need guidance on how their machines might be licensed and approved.

Elliott is particularly concerned about a possible ‘yuck factor’ that might hinder public acceptance of autonomous systems. Heavy trucks and robot surgeons inherently carry some risk, and the prospect of one of these systems making a mistake and killing someone has to be discussed before they are developed.

Smart homes also present an ethical problem: systems which monitor an elderly person, watching for activity when they normally wake up; checking whether they take regular medication; and even monitoring vital signs, would doubtlessly reduce the risk of their condition deteriorating, said Dopping-Hepenstal.

‘But there are also questions of privacy, and whether that’s too much observation. We need to know what people are comfortable with. That’s a big issue now, and it’s going to get even bigger.’
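The checks described above — watching for morning activity, confirming medication, monitoring vital signs — reduce, in essence, to simple threshold rules. A minimal sketch, with field names and thresholds that are illustrative assumptions rather than anything from the report:

```python
# Hypothetical sketch of elderly-care monitoring checks.
# All thresholds (2 hours, 40-120 bpm) are illustrative assumptions.

def alerts(hours_past_usual_wakeup: float, activity_seen: bool,
           meds_taken: bool, heart_rate: int) -> list[str]:
    """Return human-readable alerts from simple threshold rules."""
    out = []
    if not activity_seen and hours_past_usual_wakeup > 2.0:
        out.append("no activity since usual waking time")
    if not meds_taken:
        out.append("regular medication not taken")
    if not 40 <= heart_rate <= 120:
        out.append("vital signs outside normal range")
    return out

print(alerts(3.0, False, True, 72))  # ['no activity since usual waking time']
```

Even a sketch this small makes the privacy trade-off concrete: every rule implies continuous collection of intimate data — sleep patterns, medication habits, heart rate — which is precisely the observation Dopping-Hepenstal questions.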

Stuart Nathan


Readers' comments (1)

  • The question about smart homes goes further than the comment in the article. We are developing smart home systems for people with dementia that use a voice prompt to remind people what to do, or to warn them that an activity might be inappropriate, based upon received sensor data. We've worked to a very strictly defined ethical and consensual framework, but how are products like these to be implemented? The potential for abuse is great (keeping granny quiet and out of trouble), but we have shown that the benefits can be great too; examples include less wandering, reduced incontinence and improved sleep. We want many people to be able to benefit from this technology, but should it be clinically supervised, or can we trust families and carers to use it responsibly for the duration of its installation in a person's home?

    We need to think about questions like these in our context - autonomous systems working with vulnerable people, but also in contexts such as automatic car guidance. Link the car guidance agent to my satnav and the car can be programmed to take me anywhere, perhaps even if I don't want to go there.

    These issues should not mean that we don't develop this technology, but rather that we work out how we are going to integrate autonomous technologies into the lives we need while still maintaining and protecting /our/ autonomy and our dignity.

    Tim Adlam
    Bath Institute of Medical Engineering

