Last week, we asked readers what approach should be taken when integrating widespread autonomy into the UK road network.

The poll was set in response to a report that suggested autonomous vehicles be introduced to roads, even if they are only marginally safer than humans.
According to a report from the RAND Corporation, thousands of lives could be saved over approximately 15 years if autonomous technology were widely introduced when it is just 10 per cent better than human drivers. Over 30 years, hundreds of thousands of lives could be saved compared with waiting until the technology is at least 75 per cent better.
Report co-author Nidhi Kalra said: “If we wait until these vehicles are nearly perfect, our research suggests the cost will be many thousands of needless vehicle crash deaths caused by human mistakes. It’s the very definition of perfect being the enemy of good.”
The early introduction of vehicle autonomy would not, however, eliminate road deaths, and society may not accept the technology causing casualties, even if overall harm were reduced.
In light of this, we asked what approach should be taken when integrating autonomy into the UK road network.

The poll garnered 473 votes. Just over a fifth (21 per cent) of respondents took the RAND Corporation view that autonomous vehicles should be introduced when they are slightly better than humans, while 12 per cent said they should never be allowed on the roads.
Of the remaining 67 per cent, 34 per cent took the view that autonomous vehicles should be introduced only when they are close to flawless, and 33 per cent agreed that they should be introduced when significantly better than humans.
In the debate that followed, suggestions were made about autonomous vehicles being made available for bad drivers, or for those motorists whose driving is impaired.
Mr Tillier said: ‘Autonomy needs to be used as soon as possible for the drivers who cannot operate the turn signals, who undertake at roundabouts, who exit roundabouts straight from the inside lane, who do not respect safety distances, and who do not respect speed limits. The other drivers, who drive properly and show care, can remain in current cars.’
This view was echoed by RDutton, who said: ‘Roll-out should be accelerated for drivers who may be less safe than average, such as the elderly, alcoholics and drug users.’
Technology and insurance conundrums were also raised, with Steve asking: ‘In the scenario of a child running in front of a fast-moving car, where the ‘software’ has to decide either to save the child or to save the passenger, who is liable for the consequences of that decision?’
Addressing a number of factors around vehicle autonomy, Edward said: ‘Autonomous vehicles will arrive on our roads at some point but it is a complex issue which has some fairly serious consequences if done wrongly.
‘Firstly, autonomous cars need to be proven to operate safely. Currently there is a lot of testing underway but safe operation has not yet been verified. Indeed, there have been some fairly serious failures in tests so there is clearly some way to go yet.
‘Secondly, once introduced there will be a long period when there is a mix of autonomous and conventional vehicles on the road. A big factor which needs to be understood (it currently isn’t) is how drivers will react when sharing the roads with autonomous vehicles.
‘Thirdly, there will need to be rulings regarding who is responsible when the inevitable accident involving an autonomous vehicle takes place. Is it the owner, the manufacturer or the software company who bears the responsibility for the autonomous vehicle? Car insurance is mandatory for very good reasons and will be required for autonomous vehicles but where the liability sits will be an interesting discussion.’
What do you think? Let us know in Comments below.
Only when close to flawless. We wouldn’t introduce any other new technology unless it was safe: something marginally less unsafe is still unsafe.
Would be great on the M6 and M1 as we sit, bored-stiff, nose to tail….
My real preference is for all other cars to have these systems fitted, as I seem to be the only sane person on the UK roads.
Within the average performance of people is a mix of many safer-than-average drivers and some worse-than-average drivers, often with specific reasons for that. Autonomy needs to do better than a healthy, well-trained, sensible driver.
And it will need decent proof of success because you can be sure that the media will be all over the crashes, especially if serious.
I would be strongly in favour of autonomous vehicles if it could be proven that their use would result in a significant reduction in accidents (fatal or not).
On the other hand, having recently seen an article showing that even a one-pixel change in an image can fool AI into mistaking a dog for a stealth bomber (http://www.bbc.com/news/technology-41845878), I would need very strong assurance that the AI used for autonomous vehicles is indeed not so easily fooled.
I am confident, however, that developments in AI will resolve the vast majority of such issues in the time between now and when truly autonomous (level 5) vehicles hit the streets.
Perhaps the greatest danger is the transition to level 4 when ‘drivers’ might fail to take control in situations when the robotics ‘opt out’.
Autonomy needs to be used as soon as possible for the drivers who cannot operate the turn signals, who undertake at roundabouts, who exit roundabouts straight from the inside lane, who do not respect safety distances, and who do not respect speed limits.
The other drivers, who drive properly and show care, can remain in current cars.
Whilst there may be a utilitarian argument in favour of fully automated cars, car makers cannot change the people who buy their products.
The development will be for nothing if nobody wants to buy one of these things. One suspects that automation will more likely take the form of enhanced driver support. Until a hack-free computer is invented, there will be no entirely safe driverless car in any case.
This year, next year, sometime, never.
I have to presume that the most important parameters in decisions as to who must, should, might or could benefit from this technology will be set by the insurance ‘industry’ and the related ‘ambulance-chasing’ lawyers who participate in this scam. Conflict or outcome?
There it is again! Presumably their ‘deeply mined data’ on accident profiles can define those most likely to be affected, and their pockets are not bottomless, nor are they benevolent societies. “Follow the money” was a piece of advice I was given in the 70s, i.e. who will gain, lose, or stay the same?
This just shows how people block innovation by holding the new thing to much higher standards than they hold what already exists. A micron better than a human is still better and still safer; if it were that simple, we should jump at the chance to reduce car accidents as soon as possible.
Roll-out should be accelerated for drivers who may be less safe than average, such as the elderly, alcoholics and drug users.
Autonomous vehicles will arrive on our roads at some point but it is a complex issue which has some fairly serious consequences if done wrongly.
Firstly, autonomous cars need to be proven to operate safely. Currently there is a lot of testing underway but safe operation has not yet been verified. Indeed, there have been some fairly serious failures in tests so there is clearly some way to go yet.
Secondly, once introduced there will be a long period when there is a mix of autonomous and conventional vehicles on the road. A big factor which needs to be understood (it currently isn’t) is how drivers will react when sharing the roads with autonomous vehicles.
Thirdly, there will need to be rulings regarding who is responsible when the inevitable accident involving an autonomous vehicle takes place. Is it the owner, the manufacturer or the software company who bears the responsibility for the autonomous vehicle? Car insurance is mandatory for very good reasons and will be required for autonomous vehicles but where the liability sits will be an interesting discussion.
Good points.
In the scenario of a child running in front of a fast-moving car, where the ‘software’ has to decide either to save the child or to save the passenger, who is liable for the consequences of that decision?
Perhaps introducing mandatory biennial driving tests for all drivers (HGVs, buses, taxis, motorbikes), including the use of simulators and an actual driving examination to advanced-driving standards, would make the case for autonomous cars redundant. More vigorous police enforcement of traffic laws might also help; I assume at present they are using stealth techniques.
Thinking forward past the introduction and acceptance phase, at what point do I lose the right to drive my vehicle myself?
Very interesting question. Should there be consequences for drivers who cause accidents that could have been avoided by engaging their cars’ autonomous systems?
If we apply the same logic to an autonomous aeroplane, how many people would be prepared to fly knowing the plane was a bit safer than with a human pilot? My money is on humans, even with all their faults.
In that case, I’d recommend not flying, as many commercial flights these days use automated takeoff and landing, as well as the autopilot in between…
Could the autopilot systems currently in use not be classed as autonomous?
They should be able to correct bad driving and take over in any bad situation; this is not a human thing, it is automation.
The problem as I see it, Mr Green, is how do you determine when a ‘bad situation’ has occurred?
The classic case is the autonomous vehicle that failed to see an 18-wheeler coming straight at it, because it wasn’t programmed to look far enough in that direction (sideways). Personally, had I been in that vehicle, I would have liked the ability to override the automation, because I’m sure I would have had a better grasp of the situation.
Sometimes, diverting your car from its normal path (to avoid a worse situation) is the ‘best’ solution, but it requires both a value judgement and a moral judgement that I cannot envisage an automatic system ever having!