The fatal collision that caused the death of a pedestrian in Tempe, Arizona on March 18, 2018 has led some in the US to call for a moratorium on autonomous vehicle testing.

Elaine Herzberg was struck by a driverless vehicle being tested by taxi-hailing app company Uber. Herzberg was crossing a road away from a designated, illuminated pedestrian crossing at around 10pm when she was hit by a Volvo XC90 SUV that had a human observer behind the wheel. The vehicle was travelling at 38mph in a 35mph zone and did not attempt to brake.
The US Consumer Watchdog organisation called for a national moratorium on autonomous vehicle testing, saying that “there should be a national moratorium on all robot car testing on public roads until the complete details of this tragedy are made public and are analysed by outside experts.”
Following the accident, Uber suspended autonomous vehicle trials in all North American cities while an investigation takes place. The company had been testing in Pittsburgh since 2016 and was also carrying out trials in San Francisco, Toronto and the Phoenix area, which includes Tempe. Arizona Governor Doug Ducey has since suspended Uber’s autonomous vehicle testing program.

The tragedy prompted The Engineer to ask how industry should respond, with nearly half (47 per cent) of the 903 respondents agreeing that testing should continue, followed by 29 per cent who think there should be a moratorium on autonomous vehicle testing. Of the remaining vote, 14 per cent agree that there should be a suspension of the technology used, and 10 per cent chose ‘none of the above’.
To date, 86 comments have been published in response to the poll, with opinion largely polarised between those who are pro- and anti-autonomous vehicles, and those questioning the Volvo's speed when the collision occurred.
Michael Morley wrote in to say: “Driverless cars will happen one day, but there needs to be a considerable improvement in AI technologies before these cars exceed the safety record of the very best human drivers. I see a future in which automated driving is commonplace – where the current situation of 1.3 million deaths a year in road accidents becomes an outlandish nightmare.”
“I think it only fair that we wait until all the facts are known,” added Duncan Fallow. “Whilst the fatality is tragic would this incident have garnered the press coverage if the vehicle was being driven solely by a human? CNBC estimated that there were around 40,100 traffic related deaths in the USA in 2017 – nearly 110 a day. Autonomous vehicles offer the opportunity to reduce these casualties significantly by taking the ‘human element’ out of the equation.”
Richard Grey asked: "What is the point of driverless cars?" whilst Ekij queried why the vehicle was travelling at 38mph in a 35mph zone. "Humans creep over the speed limit sometimes because we're distracted or miss a speed-limit sign, but an autonomous vehicle has no such excuses."
What do you think? Let us know via Comments below.
I voted “None of the above” because we should wait until the facts are known, ideally after an impartial accident investigation.
…isn't that what a MORATORIUM is? i.e. Option 1.
No. Other designs have not shown a problem so there is a much lower chance that they have such a fault. It seems very strange that the system would exceed the speed limit and I would not expect any other system to allow that. If it just didn’t detect the pedestrian then it might indicate that the designer has not selected suitable sensors.
Other designs have not failed and I hope they have been tested under adverse conditions and passed those tests.
I agree
I think we should suspend autonomous public driving with this particular design of vehicle until we are convinced that the vehicle "did its best" under the circumstances. Surely darkness cannot excuse the design in what sounds like a simple scenario. It probably wasn't that simple, but we should not allow that design back on the road until we prove it.
Why was this vehicle doing 38 in a 35 zone?
Humans creep over the speed limit sometimes because we’re distracted or miss a speed-limit sign but an autonomous vehicle has no such excuses.
We only have the mayor's word that it was – maybe she is shooting from the hip?
See: https://www.bloomberg.com/news/articles/2018-03-20/video-shows-woman-stepped-suddenly-in-front-of-self-driving-uber – this article states that the vehicle was coming into a 40mph zone.
Any death is regrettable, but we need to put it into perspective: how many similar serious injuries/deaths occurred with driver-controlled vehicles over the last month?
But what is the ratio of deaths to driver-controlled cars compared with the ratio of deaths to autonomously controlled cars?
Look: when people start to use statistics instead of common sense, it is an invitation for disaster. The late (and great) Richard Feynman had strong words for the members of the Rogers Commission that investigated the Space Shuttle Challenger disaster about the use of statistics.
He also coined a famous phrase: "For a successful technology, reality must take precedence over public relations, for nature cannot be fooled."
What is the point of these driverless vehicles? We can all learn to drive and pass a test. This technology is being developed because we can, not because it's useful. A system once set up would be the perfect target for terrorist organisations; it may take them years to hack in, but you would only need to do this once to cause massive problems and/or deaths.
Driver assistance is a great way forward, but the driverless car is good money wasted. Spend it making the roads nice to drive on and safe.
At 84 I can see my licence being withdrawn when I am no longer safe. Autonomous cars are a way for people like me to keep our independence. Please let them be available before I have to give up driving because of health and safety issues.
The point, Andy, is that this is the first step into the realm of removing vehicles from personal ownership.
With fully autonomous vehicles there will be no need to own your own; you can just call one for when you need it. As you pointed out, the system will be a permanent target, but it will also be fully, and independently, controllable via a "central command".
This is the second “driverless car” fatality.
The first one was after 14 million miles, which is less safe than human drivers.
There are a lot of driverless cars out there. How do they compare to human drivers now?
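As a rough way to frame that comparison, here is a back-of-envelope sketch in Python. All figures are illustrative, not official statistics: the human baseline is the commonly quoted order-of-magnitude US rate of roughly 1.2 deaths per 100 million vehicle-miles, and the "14 million miles" cited above is taken at face value.

    # Illustrative figures only - not official statistics.
    HUMAN_DEATHS_PER_100M_MILES = 1.2   # approximate recent US rate (assumed)

    av_fatalities = 1
    av_miles = 14e6                     # the "14 million miles" cited above

    av_rate = av_fatalities / av_miles * 100e6   # deaths per 100M miles
    print(f"Autonomous: ~{av_rate:.1f} deaths per 100M miles")
    print(f"Human:      ~{HUMAN_DEATHS_PER_100M_MILES} deaths per 100M miles")
    # One fatality over so few miles gives a rate (~7 per 100M miles) far above
    # the human baseline, but a single event is far too little data to settle
    # the comparison either way.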
Unfortunately, with motorised traffic (autonomous or not) deaths are always going to occur. Sad as it is, people have to realise that this technology can never be 100% safe. If they cannot accept that then the only option is to discontinue autonomous vehicle technology. Maybe, it’s time for a public referendum on these vehicles – so far it seems that the motoring lobby have been leading.
However, I do note two basic failures in this case that shouldn't have happened: 1) the car was travelling at 38mph in a 35mph zone; and 2) the car did not attempt to brake. To my mind these two points represent an avoidable technology failure which indicates that maybe this particular implementation shouldn't have been allowed on public roads!
More testing is vital
It will be interesting to see who ends up in court for manslaughter (or whatever it is called in America): the "driver", Uber, Volvo or just the microchip?
The woman was also in the wrong; I believe crossing a road at a non-designated crossing point in the US is "jaywalking", which is considered a misdemeanour. It is regrettable that the outcome was her death, but was she not looking? I may seem harsh; I don't mean to be, just expanding on Iain's musings.
How do the legalities of a car accident work in this scenario? Do you think Uber will get a call from an ambulance chaser?
The notion of a driverless vehicle should be abandoned altogether. There are far too many variables and situations that keep on evolving, for any technology to handle safely enough. Automobile autonomy should only be considered in two situations: On the motorway, when the driver usually does not have to do much anyway (especially at night), and in heavy traffic when it gets very boring for the driver.
"…tragedies… are common on the roads…" is not a valid argument. When I started in civil engineering in 1974 we killed (several?) hundred people a year and were only exceeded by the fishing and mining industries for our appalling safety record. Now a death is headline news and non-fatal injuries are well down too. The improvement still has further to go, but this is how it is done.
The only real justification for driverless tech is to eliminate the current litany of tragedy. So find out why it failed, modify, and then restart trials away from vulnerable bystanders like pedestrians. It won't, and can't, be done quickly, but patience brings its own reward.
With the best will in the world these systems are not going to be faultless in the first instance – if ever. The question is: are they better than the present system?
Suspend, investigate, learn, rectify and resume testing would be my advice. The resumed testing regime should be dictated by the flaws found in the software or perhaps hardware.
Autonomous vehicles are probably the safest way forward in the light of the current toll on the roads.
Well said! Surely that should have been in the vote options.
I honestly never thought of phrasing it that way.
Should we not wait until the outcome of the investigation into this incident is published? There are many questions to be answered, such as:
Did the human operator have time and opportunity to intervene or override the automated systems?
Did any possible intervention by the human operator exacerbate the situation?
What caused the car and pedestrian to be on a collision course (did the pedestrian step into the road without looking and too close to the vehicle for it to possibly stop in time)?
Without these and many more answers we cannot make a truly informed decision on the future of testing.
Unfortunately we do not “see” Software Engineering as “engineering” and thus impose less stringent controls. As with Apollo 11, having a human able to take control was essential to a safe return. What I do not understand, and why I did not respond to any of the 4 questions in the survey, is what motivates driverless vehicles on public roads? We constantly complain about a lack of work, especially for millennials, while at the same time seem eager to fully automate life! If there is no choice, is it still “life”, or simply mere “existence”?
My opinion: nobody can stop progress, it can only be slowed but not stopped!
https://interestingengineering.com/driverless-buses-could-start-ferrying-passengers-to-their-planes-at-gatwick-airport?utm_medium=ppc&utm_source=onesignal&utm_campaign=onesignalpush
1) Testing should still be restricted to non-populated areas; the technology isn't good enough for anything more than walking pace yet, and maybe it never will be.
2) We don’t need this technology anyway.
I think that testing should be suspended until a good enough understanding of the circumstances of the fatality is fully available. I have had the misfortune of someone running out in front of me and being knocked down; fortunately the person only had a broken foot – even though their head smashed my windscreen, there were no head injuries. At the time, I did manage to hit the brakes and slow, but was too close to be able to stop before colliding with the person. Evasive action was out of the question due to the close proximity of vehicles on either side of me. The person admitted their error, but I still had to pay a considerable fine.
It is of concern that the vehicle’s sensors did not ‘notice’ the pedestrian in the fatal accident, based on the information in the article.
Driverless cars will happen one day but there needs to be a considerable improvement in AI technologies before these cars exceed the safety record of the very best human drivers. I see a future in which automated driving is commonplace – where the current situation of 1.3 million deaths a year in road accidents becomes an outlandish nightmare.
This is a tragic and totally avoidable death of a pedestrian. There are questions to be answered, and system modifications to be made, before (if ever) 'driverless' cars are allowed on public roads again.
1) Why was the car exceeding the speed limit, and why did the ‘observer’ allow this? (OK, 38mph in a 35mph zone would not normally result in a speeding conviction in the UK, but what is the situation in the US?).
2) More worrying is the fact that the car did not react to the pedestrian, and did not attempt to avoid the collision. This is an obvious failure in the autonomous system, that must be investigated fully, remedied and proven off the public highway, before ‘live’ testing can continue.
3) A court case will obviously be needed in the US to apportion blame for this tragic death. It will be very interesting to see who is going to 'pay heavily' for the accident – Uber, the 'observer', or Volvo? And will insurance companies learn a lesson?
Because we can should not translate into we should. Echoing an earlier poster, I ask what is the actual goal of this development?
Are we running short of taxi drivers? Are we almost out of lorry drivers or bus drivers? Maybe the people who drive today, having spent real time and money learning the skills to do so, are saying they won't drive any longer.
My guess is none of these things are happening, so why are technology companies so driven (no pun intended) to develop driverless vehicles?
Some of the replies above question the need for driverless cars. While I know there may be economic issues, the primary aim is to reduce the loss of life in vehicle incidents, as most incidents are due to the limitations of the people driving the vehicles.
Trials like this require regulation, and in this instance there was an "observer" in the vehicle – is that not the same as having a driver?
Would it be acceptable to continue to use an industrial machine if it had been involved in a fatal accident? I very much doubt it. If any machine causes a fatal accident it should be taken out of operation until it can be verified that it was not the primary cause of the accident. The main reason for developing autonomous/driverless cars is that they should be safer than a human driver. This incident shows that the technology is still far from ready to be allowed on our roads. The human ‘observer’ of this vehicle should have been aware of pedestrians and other possible dangers and should have been ready to take action, especially as he knew he was in charge of a test vehicle. He has no excuse.
I sincerely hope that that both the driver and UBER are prosecuted to the fullest extent possible for this tragedy.
Any vehicle being tested on the roads should be clearly marked and have warning lights to make other road users and pedestrians more aware of their presence.
We are at the beginning of a technological revolution. In the early industrial revolution it was not unheard of for a steam boiler to explode, with fatalities and considerable damage to its surroundings. With deeper knowledge, engineers worked out how to design, manufacture and operate pressurised equipment in a safe manner. If continued testing and development had not been allowed, the last century would have been quite different. People assume they are better drivers than they really are; we should consider safer alternatives even if giving up personal control feels like a loss.
While the question of why the car was exceeding the posted limit (by less than 10 per cent) is a valid one, the fact that the car did not attempt to brake makes it moot. That additional 3mph probably is not a factor.
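A quick worked check of that 3mph point. The values are assumed for illustration (kinetic energy scaling with v squared; hard braking at roughly 7 m/s² on dry tarmac):

    v1, v2 = 35.0, 38.0                  # mph
    energy_ratio = (v2 / v1) ** 2        # kinetic energy scales with v^2
    print(f"Extra kinetic energy at 38mph: {100 * (energy_ratio - 1):.0f}%")  # ~18%

    a = 7.0                              # m/s^2, assumed hard braking
    for v_mph in (v1, v2):
        v = v_mph * 0.44704              # mph -> m/s
        d = v ** 2 / (2 * a)             # braking distance, ignoring reaction time
        print(f"{v_mph:.0f} mph: braking distance ~ {d:.1f} m")  # ~17.5m vs ~20.6m

    # If the car never brakes at all, impact is at full speed either way -
    # which is why the missing braking matters far more than the extra 3mph.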
In the 3 autonomous vehicle accidents that come to mind, all 3 involve a non-automated object violating right-of-way rules. The reason to keep testing, improving, and eventually accepting autonomous vehicles is to eliminate those violations.
If the car did not attempt to brake because the woman stepped right out in front of it, then that is not a technology failure, and a human driver would have had the same issue. We need to wait and see what the facts are.
Development of autonomous car systems cannot eliminate violations by other (non-autonomous) items or persons – at most it can only hope to improve the response to such violations.
This is the latest in a long line of accidents involving autonomous vehicles, and the second fatality. There are also many reported examples of speeding and jumping red lights.
The current state of the technology is not developed enough to be allowed on public roads. Any other technology that has killed two people in such a short space of time would be recalled and revised.
Current testing in cities is putting the public at risk and should be stopped instantly.
I hope our government rethink their push to get these vehicles on our roads, but this is doubtful.
How many deaths have been caused by humans in the same period? What are the percentages / statistics? What are the FACTS?
In what period, 10pm on Monday night Arizona time? That’s a time, not a period.
I think it only fair that we wait until all the facts are known.
Whilst the fatality is tragic would this incident have garnered the press coverage if the vehicle was being driven solely by a human? CNBC estimated that there were around 40,100 traffic related deaths in the USA in 2017 – nearly 110 a day.
Autonomous vehicles offer the opportunity to reduce these casualties significantly by taking the 'human element' out of the equation; imagine roads with no road rage, no driver inattention due to texting, no driver distraction by passengers, etc.
I would be very interested to know why the vehicle was apparently exceeding the posted speed limit; however, I suspect this may be due to the 'dumbing down' of GPS accuracy for security reasons.
It has been revealed that the sensors that Uber chose to use are not very good at detecting human beings. As an engineer, I can say that until standards are put in place, deployment cannot happen. The standards process SHOULD have been going on at government level in parallel, but did this happen during Obama's administration at all? I doubt it.
I voted continue the testing.
But I qualify this – with Uber employees as the pedestrians, in an environment closed to others.
Uber is unlicensed in London; there is a reason for this – apparently they are not entirely trusted!
It seems cold and harsh when there is human life involved, but surely the most important – indeed the only – question is: are they safer or more dangerous than human drivers? If human drivers cause 100 fatalities in a given area or within a certain time, and robots cause 95, then the robots are safer – even if all or some of the 95 deaths are caused in a way that could only be caused by a robot.
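One caution on reading small differences like 95 versus 100: a rough sketch, treating each count as a Poisson random variable, shows the gap is well inside statistical noise.

    import math

    human_deaths, robot_deaths = 100, 95
    diff = human_deaths - robot_deaths
    std_err = math.sqrt(human_deaths + robot_deaths)  # ~14 for Poisson counts
    z = diff / std_err
    print(f"difference = {diff}, standard error ~ {std_err:.1f}, z ~ {z:.2f}")
    # z ~ 0.36 is nowhere near the ~2 needed for conventional significance,
    # so one such period of data proves nothing either way.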
Once again, until standards are created at high levels of government, nothing should be put on the road. Clearly irresponsible people within the Obama administration have failed to establish standards, and now a death has occurred.
Continue testing but in a controlled environment which has all of the identifiable hazards/ scenarios of the real world.
Stop all public road testing until the investigations are concluded.
Let the tech and auto companies test as much as they like in their own plants, campuses and test grounds, but not where the public are unknowing participants in their battle of the autobots.
What is the point of driverless cars? If people want to travel, but not drive, we already have systems that allow that – railway trains, buses and coaches, and to take you to the door of your destination, the taxi.
Another point is that if driverless cars do take over at some point in the future, people will lose the ability to drive because they won’t need to learn. Where, and how, will all these driverless cars be parked while the owner is at work or whatever he/she might be doing at the destination? If the answer is that individuals won’t own the driverless cars, but just hail them as required, then there’s not much difference between that and hailing a taxi.
Some of these calls for AI to be removed permanently from the roads are frankly ludicrous; imagine what the world today would be like if the same had been done to fossil-fuel powered vehicles back when the first fatality occurred. I have so far been in 12 crashes where other motorists have "not seen me" and slammed into the back of me or turned out in front of me, and it has always been the other driver's fault. I'm hoping to still be alive when I can go somewhere in an AI car, safe in the knowledge that I'm unlikely to have some other motorist crash into my vehicle. I used to ride motorcycles years ago and resumed riding again just a few years ago, and there is nothing nearly as stressful as travelling around the streets having to brace yourself for the imminent impact with the car that has just pulled straight out into your path – and then, within minutes of your breathing returning to normal having somehow avoided the collision, the exact same thing happens elsewhere!
The first motor car on the road in the UK was in June 1895. The first pedestrian killed by a car was in August 1896 (the car was travelling at 4mph). I wonder if there was an outcry to stop the development of the horseless carriage as a result. In 2013 the World Health Organisation called on governments to do more for the safety of pedestrians, as more than 270,000 pedestrians lose their lives on the world's roads each year (accounting for 22% of the total 1.24 million road traffic deaths). If driverless cars reduce the number of accidents, it is worth pursuing, isn't it?
I am of the opinion that humans make more mistakes than machines: machines do not fall asleep, they do not tire, and with the right programming they will eventually replace human drivers and in doing so drastically reduce the death toll on our roads. I personally love driving and do not like 'automatic' cars; however, there will come a time when my driving skills will be less than adequate, and I then become a danger to myself and everyone else, pedestrians and drivers alike. And I do not have to prove my competency until I am 75 years old?
If the observer was sitting in the driver's seat, he is ultimately going to get the blame… the same as using cruise control on a normal vehicle.
He was indeed in the driver’s seat.
As echoed above, two issues regarding the software or detection systems are obvious.
Regarding the speed issue, I can't believe that the software is so inaccurate that it cannot control the vehicle to a pre-set limit, which raises the matter of whether the limit advised to the vehicle was correct. My current satnav changes the colour of the speed-limit icon to yellow at limit+1mph, according to the available mapping. Was the mapping showing the limit to be 40mph and not the 35mph indicated by the police? In my locality there are areas where the maps are incorrect compared to the road signs.
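A minimal sketch of one defensive policy for the mapping problem raised above: when map data and sign recognition disagree, obey the lower of the two and flag the mismatch. The function and its inputs are hypothetical, purely for illustration, not a claim about how any vendor's stack works:

    from typing import Optional

    def target_speed_mph(map_limit: Optional[int],
                         sign_limit: Optional[int]) -> int:
        FALLBACK = 25  # conservative default if neither source is available
        candidates = [l for l in (map_limit, sign_limit) if l is not None]
        if not candidates:
            return FALLBACK
        if map_limit is not None and sign_limit is not None and map_limit != sign_limit:
            print(f"warning: map says {map_limit}mph, sign says {sign_limit}mph")
        return min(candidates)  # take the lower, safer value

    print(target_speed_mph(40, 35))  # stale map, fresh sign reading -> 35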
The detection systems did not (?) detect the pedestrian (even if jaywalking) and apparently the vehicle did not attempt to brake. This is a definite concern, because a human would have attempted to brake, and perhaps she might have survived.
I am still at a loss to see who is going to benefit from 'driverless' cars, as a capable human is going to have to be in charge for many years, in much the same way as for a provisional driver. Commuters are still going to be stuck in the same traffic.
Please see my comment above in response to Andy. There are many people who are unable to drive themselves but could use a driverless car. The cost of taxis could come down if the need to give drivers a decent standard of living was removed.
As an engineer, I can say that the sensors on board the Uber autonomous vehicle were not good enough to detect humans. While it may be true that others use infrared, lasers and other sensors for this purpose, Uber chose not to. As I said in other posts, standards should have been made for autonomous vehicles, with almost military-grade standards compliance established, such as the FDA requires (any new device that makes contact with the human body requires such standards compliance). Clearly there are no standards right now, or else the death would not have occurred.
Ron: you speak as a true engineer. Your responses have been the most intelligent. Therefore, if sensor limitation is proven as the main contributing factor, this will have repercussions. The issue of standards is mandatory, as it has been proven again and again that private businesses (like Uber, or any other for that matter) are greedy enough to cut corners. Standards, while seldom 100% correct, are an indispensable requirement. Amclaussen, 39+ years in engineering.
It is not so much what is known to be programmed into these vehicles failing to operate due to "soft controls", but the fact that they are increasingly able to "make decisions". One of these could be not to brake when braking was necessary, thus causing a crash. Nobody knew that smart TVs were able to watch you in your home, so this type of programming is not only possible but could be probable. For example, if a car were on a collision course with a baby and a dog, it could choose to hit the dog. If it were a case where the car could choose to crash and kill the driver rather than a person at the bus stop, it could do it – noting that the driver is sitting there doing nothing wrong!
Very worrying times.
If you can't drive it, park it.
I have found at least two cases of a manufacturer employing firmware manipulated for objectives that are not in the interest of the consumer, who had paid a lot of money and was being deceived (and betrayed) by that firmware.

The first was the intentional contrast-reducing firmware placed by Panasonic in their "Viera" plasma TVs. The owners paid large sums of money to get the very best image quality from their plasma TVs, but discerning TV magazine reviewers discovered that the Panasonic plasmas were losing their magnificent initial contrast ratios soon after being used for the first time, in a matter of weeks. It happened that Panasonic "designed" the firmware to intentionally REDUCE the highly touted contrast, perhaps in order to meet other specs, like the rated panel life. When asked by discerning owners to make available a firmware fix for that manipulation, Panasonic refused, and that ended in a large group of consumers pushing a class action against Panasonic; with so many people jumping in, the money paid by Panasonic was too little compared to the price of a new TV set. But the point is that the manufacturer played foul with unsuspecting consumers!

The second was when I found that the lithium battery cells of several laptop computers that were displaying urgent messages asking the owner to plug in the charger immediately, at closer and closer intervals (in order to convince the laptop user that he needed to buy a new battery pack), were in reality capable of recharging to above 90% of their "as new" capacity. (I have used those cells to power several electrical hand tools, for several years.) Apparently, the laptop manufacturers embed firmware in the battery-pack chips that counts the number of charge-discharge cycles, or the date, or some other parameter, in order to FOOL the consumer, so that he or she is pressed to go running to buy another battery pack – which happens to be very expensive, and is almost completely non-interchangeable, so it must be bought from the original manufacturer – when in reality the battery pack still has years of useful service ahead, but the damn firmware keeps displaying the message and interrupting the work of the user.

So yes, software is maliciously used by manufacturers themselves to cheat and extract more money from the very consumers who favour them with their purchases. Soon (if not right now), car manufacturers will provoke "nuisance failures" to send drivers running in for their "needed service", and who will stop them from doing that? VW has already cheated on emissions calibration, so it is entirely possible to execute almost any cheat. Thus, as software is used for even the simplest tasks in modern cars, bugs and tricks will abound. A VW Jetta Generation V (called "Bora" in some markets) has already replaced the reliable two-filament light bulbs in the rear lights with a computer cycling the current to single-filament lamps at 100Hz, so that the same bulb displays two intensities, with continuous DC signalling that the driver is braking. This is software (ab)use for even the simplest tasks, and while the stupid programmers and automotive designers feel they have achieved "an accomplishment", in reality they are using a computer and software to execute, with REDUCED reliability, a critical task that was PERFECTLY done without any electronic "wizardry", planting possible failures that are much harder to detect and fix.
In the quest for exaggeratedly computerised cars, a lot of undesirable results will creep in. If present-day car manufacturers cannot produce really reliable cars, and have to issue numerous recalls, why should we believe they will be able (and willing) to produce the 99.999 per cent reliability needed for an autonomous vehicle?
This is the latest in a long history of pedestrians being in the wrong place and being killed or injured as a result. Had the car mounted the sidewalk and struck the pedestrian then no question – driver/software error. Here the car was where it should be – on the road. The pedestrian was not on the sidewalk, nor on a recognised crossing; in fact she was jaywalking, which is a criminal offence in much of America. So the fault is 90% with the pedestrian.
Had the pedestrian been struck by a moving train, who then would we blame?
Interesting viewpoint in regard to pedestrian behaviour. There is a law in human nature that is as valid as Murphy's laws, and it applies every day: the Law of Unintended Consequences.

Here in Mexico City, the city mayor, aided by a few self-appointed "experts", issued a new set of traffic regulations months ago. It was pompously called "The Mobility Pyramid". That improvised, dogmatically dictated decree places the pedestrian at the top of the pyramid, followed by motorcyclists, then emergency vehicles, then delivery trucks, and lastly, at the bottom, that "enemy of the environment" that for the leftist government is the common private car driver.

This regulation has had interesting consequences. Pedestrians are walking across roads, streets and everywhere WITHOUT EVER LOOKING both ways BEFORE crossing, as if the government decree generated a protective bubble of some magic kind of force field! The local government has placed them above every law of physics: a car with a mass of more than a ton, which requires SOME metres to stop at ANY reasonable speed, is expected to stop instantly, so the driver has to GUESS WHEN a pedestrian – who can stop in less than a metre (or half a metre) – is going to jump in front of the vehicle without ANY consideration for the laws of PHYSICS (because the city mayor and his "experts" said so…). Since the date that stupid "Mobility Pyramid" came into force, we have seen an alarming increase in accidents where pedestrians are hit by cars and buses, with or without the "help" of a distracting cellphone.

Motorcyclists are hit daily too. As soon as the local government painted additional spaces on the road in front of the cars at every stoplight, this action had an even more interesting consequence: now the motorcyclists feel an irrepressible urge to get in front of the cars ASAP, so they forget the regulations and go weaving between the cars and buses to reach that "special" space in front of all the four-wheeled vehicles. In their tempestuous urge to get there, they "want" car drivers to GUESS when and exactly where they are going to swerve and change lanes; and because they travel between lanes, if a car driver happens to change lanes, the motorcyclist may arrive at the side of the car or bus EXACTLY as it starts to move over – a motorcycle is much narrower than a car and is therefore easily hidden in the blind spot of the side mirrors. Strangely enough, the motorcyclist gets completely mad, convinced that he or she has the absolute right to go faster and freer to reach that "special place" in front of all the cars.

THAT is the Law of Unintended Consequences at work: a stupid lawmaker seated at a desk writes new laws and regulations WITHOUT any regard to human response or "adaptation". Present-day pedestrians seem to ignore what we were taught many years ago: always look in ALL directions before stepping off the sidewalk.
Eventually ‘driverless’ cars will be far better and safer than those cars driven by humans.
Once the required software and hardware are properly sorted out, computer controlled systems will respond very much faster than humans can.
If every revolutionary engineering breakthrough had been abandoned because of some consequential tragedy, we wouldn’t have any trains, cars, aeroplanes, etc.
We should just accept that there will always be occasional failures with any engineering system, especially with a completely new system.
People keep saying, “what will be the use of driverless cars?” – well, the obvious answer is that they will eventually replace all the incompetent drivers we see on our roads every day.
Even just considering the ageing population who, sooner or later, become too old to drive competently – these older people will just be able to get into their cars and ask Alexa to take them wherever they want to go.
Dependence on AI needs to be under very close scrutiny. It is not possible to programme in advance for every conceivable scenario, and at least human beings have an ability to do something at the time and usually get away with it (even if it isn't the right or best thing with hindsight). AI is killing (no pun intended) human ingenuity and deskilling too many tasks for no real gain other than reducing the costs of human resource management.
Driverless things and AI are being pursued fundamentally because they can. This is not always a good thing. As always technology is supposed to be a tool to help humans do things they otherwise couldn’t.
Any time fatalities are involved in any aspect of engineering, there must be a complete and detailed audit of the failure of engineering that led up to the event. There is no denying it. Typically in these situations, there could be and should be a suspension of the technology application, until such time as a full analysis is completed, and the issues corrected in total.
As, from the comments shown, there seems to be some confusion about the speed of the car and/or where the pedestrian was at the time of the incident, I think we should reserve judgement until the full and correct facts are known. In the meantime I agree that this particular AI trial should be on hold.
Only a matter of time before something like this happened. How many computer systems are involved, and is there majority voting between the (independent) systems?
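For readers unfamiliar with the term, here is a minimal sketch of the classic 2-out-of-3 majority voter used in safety-critical systems. The sensor names are illustrative, not a claim about Uber's architecture:

    def majority_vote(camera: bool, radar: bool, lidar: bool) -> bool:
        """Report an obstacle if at least two of the three channels agree."""
        return sum((camera, radar, lidar)) >= 2

    # A single faulty or blinded sensor is outvoted by the other two:
    print(majority_vote(camera=False, radar=True, lidar=True))  # True -> brake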
I vote none. I agree with Chris Elliot: wait for the facts surrounding the accident. I would go further and ask, if the car had had a driver, would it have been reasonable for the driver to avoid the accident? If the answer is yes, then the technology needs more development before being let loose.
I have worked on pseudo-autonomy, and when I left the field decision-making was a little 'thin', i.e. do I risk hitting a pedestrian or another car, OR do I crash into a shop and risk injury to the autonomous car's passengers?
P Hodkinson
As an electronic and software engineer I am surprised that these autonomous vehicles are not using industry standards; e.g. for medical devices, FDA approval is required as these devices connect to the human body. The testing that is done for FDA approval is very rigorous. Standards creation should be done in parallel with the technology. However, it appears that has not happened. And why is that? Since this was created during Obama's administration, they are clearly responsible for not creating these standards. Also, Uber apparently chose to use sensors that are not very good at detecting humans. To be more robust, redundancy is also put in place. Has any of this been done?
As for accidents thus far:
1. An Uber vehicle turned over about a year ago in the Arizona deployment.
2. Uber is now distinguished with killing a woman.
3. A Tesla autonomous vehicle could not distinguish sun glare off the side of a big-rig truck from nothing being there.
Clearly irresponsible people are working on this. Another scary fact is that Obama's entire staff NEVER worked a legitimate job before coming on staff. Did they do ANYTHING with respect to making laws for autonomous vehicles?
Rahm Emanuel – Obama's first chief of staff – was a lawyer. Is that not a legitimate job?
The problem with lawyers (and most politicians) is that they are completely ignorant of the most basic scientific and technical disciplines, which makes them lousy lawmakers. Nowadays, even young inexperienced "engineers" are badly undertrained and generally lack engineering judgement, so they commit all kinds of mistakes. They rely too much on what the dumb computer tells them to do, and seem to lack the most basic common sense. That explains the multiple failures of modern products even as materials and subassemblies have become very advanced.
Would some engineer please tell the politicians – far too many of whom are from that other so-called profession – and HMG, before any more tit-for-tat knee-jerk reactions occur?
The driverless car will go through an ordeal similar to that of the aeroplane, and will be perfected with every accident that happens. I think the tests should continue, just as planes were not banned because of the danger to the traveller and to the population that could be hit by a falling aircraft.
There is little comparison between the testing of aircraft and that of autonomous cars. Aircraft are rarely flight tested over highly populated areas; usually over open sea or open countryside; thus there is very little danger to people on the ground (only the minimal crew). Autonomous cars need, by definition, to be tested on populated roads with high traffic/pedestrian activity to ensure they are safe. Currently they are not!
In most accidents involving a pedestrian who unfortunately and very sadly dies, there are usually no cameras, and only the driver's and any witnesses' accounts. In this case we apparently have multiple cameras and sensors, an observer, and possibly even witnesses. So should we not wait until all the facts are available and presented before deciding that driverless cars can only be allowed to do 4mph with a person walking in front carrying a lantern?

As an opinion: I drive a Honda Accord with an accelerator position sensor. If I jam my foot on the accelerator to get out of trouble, all of a sudden nothing happens; I have to wait for the hardware/software/scan/interrupt to catch up. I would suggest that my adrenalin-flooded brain and a mechanical carburettor would be very, very much faster than that. Maybe AI needs the ability to panic and react – it needs a fight-or-flight reflex.

We have technology in its infancy driving cars! Why then don't we let immature 10-year-olds drive cars? The 2G, 3G, 4G driverless cars will be better, but if we let commercial ventures be in charge of the rules they will always be forced by economics to get the latest hardware/software on the road. I believe we need a suite of tests and independent test facilities for driverless cars, as we have for the cars themselves.
You find the slow response of your electrically actuated accelerator pedal undesirable; I find such pedals undesirable, unneeded and idiotic! The "drive-by-wire" accelerator pedal has already caused quite a few accidents in Toyotas, and while the company paid huge sums to "settle" the consequences of the unintended-acceleration accidents (with several deaths), the TRUE combination of causes we will probably never know in detail. There were at least two most probable failure mechanisms: bugs in the software, and hardware failure, caused either by lead-free solder growing whiskers and/or by solder joints failing.
In a way, both mechanisms were caused by fanatical trends pushed by environmentally orientated but technically ignorant people and/or politicians. European politicians became obsessed by the presence of lead in the environment (RoHS), and while the actual quantity of lead released by the solder in electronic waste is quite small, ignorant politicians, incapable of making even the simplest calculations, assumed that lead in electronic solders was significant enough (in reality it isn't) to be prohibited (eco-fanatics love prohibitions). Thanks to their obsession, lead-free solders are now producing many failures that paradoxically contribute to MORE electronic waste, as well as accidents. On the other side, the same eco-fanatical obsession has driven inexperienced automotive designers into using more and more software-operated devices in a ridiculous quest for minuscule emissions reductions. One of these measures is the electronic accelerator pedal, whereas a much simpler Bowden-cable-operated throttle plate is both less expensive and much better at producing a quick response, and avoids any possibility of software- or electronics-induced failure. Commercial pressure from the makers of such electronic accelerators, and the stupid insistence on (ab)using electronics and software in automobiles, has REDUCED reliability and repairability. At least there is a little device (albeit electronic) that can speed up the slow response of your accelerator pedal; just search for it at sport-driving accessory vendors. It will remove a sizeable part of the lag you observe, and is not so expensive.
These cars technically do not have any AI; they work on sensor information, not intelligence. If someone happens to step into the path of a car, they will have to take the consequences of their actions. These vehicles are being pushed by various governments basically to stop people speeding or losing control through being drunk or ill. They think this will solve the problems of bad driving behaviour on the roads today and in the future. Until a car can react instinctively like a person with years of good driving experience, you are always going to get accidents with driverless vehicles!
There is no way an autonomous vehicle will ever safely interact in a real-world environment with other human-driven vehicles, or with pedestrians and/or domestic or wild animals. There is a major danger here: we charge the driver of the vehicle with responsibility for the actions of their vehicle, but when the owner/driver/operator does not have to drive the vehicle, they will lose the ability to safely intervene when a life-threatening situation arises. In short, people will forget how to drive and become incapable of taking control of the robot. This robot, like a savage dog, has taken a human life and has broken the most important of Asimov's laws of robotics. It must be destroyed and the technology banned forever. Autonomous vehicles are more dangerous than guns. We need to abandon any idea that they will ever have a place in society. They are an abomination. A crime against humanity.
There are some great comments here, but several very worrying ones as well. Just because someone might be jaywalking does in no way legitimise them being knocked over and killed by AI. There will always be cases where people have to cross a road in unusual circumstances – maybe a police officer running to someone's aid, maybe someone fleeing a dangerous situation. Plus, jaywalking is commonplace in many countries. Also, consider the decision for AI to hit an animal instead of a human: what happens when a group of children are dressed up as animals on the way to a fancy-dress party? What does the AI hit then? It will happen. I work in machine vision; we are years away from this tech being safe – there are just too many variables in the real world. The decisions are not made by the engineers but by money-hungry marketing and management pushing to be the first pioneers.
Statistics are all well and good, but if you're the person hit and killed then it's 1 in 1, or 100%. A step back and a safely managed implementation of this technology is required.
"I work in machine vision; we are years away from this tech being safe – there are just too many variables in the real world. The decisions are not made by the engineers but by money-hungry marketing and management pushing to be the first pioneers…"
Martin: you are the first engineer to say clearly and without any doubt: WE ARE YEARS AWAY FROM MACHINE VISION BEING SAFE.
That is the absolute truth. Machine vision has a lot of limitations that go far beyond the limitations of the sensors themselves, and the "intelligence" applied to the sensor readings is still worse. As you say, this stupid and irresponsible push for so-called "autonomous vehicles" is ill-founded, idiotic intervention from "business gurus" who do not even begin to understand the technology; that is the real driver behind it, and it is going to produce more and more accidents. This concept needs to be re-evaluated and reassessed from the beginning. Why not hold a competition in which intelligent engineers devise the many ways to fool machine vision, in order to demonstrate the true limitations of the concept?
It appears (hopefully to be confirmed by the full investigation) that the error was nothing to do with the vehicle and was simply one of human error. I am reminded of the DC-10 crash on take-off from Paris in the early 70s. A lowly(?) baggage handler failed to lock the cargo door (human error); the resulting collapse of the cabin floor damaged the control mechanisms (then analogue, not digital/radio managed) and the pilot was helpless to stop it crashing. Over 300 people died (including some known to MJB, members of a rugby club in Macclesfield). Sadly, what was probably the safest and most advanced aircraft designed up to that point (including the potential for RR, who supplied the RB211 engines) was damned and no further sales ever occurred.
More sadly, the full story was not the most important aspect reported by the media!
To be fair, Mike, I believe there was a flaw in the plane's design. I think that cargo door opened outward, in order to save cargo space, meaning it blew off easily. (In fact, the operator thought he'd locked the door, but it was just jammed, and the poor design allowed this.) Subsequent designs have doors opening inward, so they can't be blown out when left unlocked.
We don’t know yet if this car accident could have been averted with better design. However, I assume that, if true, future designs will be far improved and safer, through development. Maybe damning a failed design (like the DC10) to zero future sales is a good thing.
Indeed, THIS car may even already be the safest ever to be on the roads (like the DC10 apparently was), but everything can benefit from development improvements.
Absolutely! There were SEVERAL flaws in the design of the Douglas DC-10, from the cargo door mechanism, to the design of the floor (which collapsed entirely, tearing down the wiring and hydraulic tubing), to other things. There is even an entire book about how those flaws got into the certified plane, and it took some heavy effort to sort them out, not before several crashes…
One of the golden rules in Business Process Automation is not to automate the wrong process. In industry, when a new bit of technology comes along it provides opportunities to improve or automate an existing process. It appears the same is happening with driver-less cars but if you stand back from the whole thing and start afresh with our current capabilities, it is doubtful if you would come up with the current solution. We’re trying to put the highest level of available technology on a system originally developed for a horse and cart, so without radical change in the whole system, sub-optimalities are inevitable. Taking the “human element” out of the equation just isn’t possible on our network of roads since you would also need to remove all cars with drivers and all pedestrians, animals and any other unpredictable factors around them. The real world is so complex; are a few lines of code going to make the best determination of how to react? If “apply brakes” is the only output when something enters the path of a vehicle, then it’s easy. As any driver knows, with a steering wheel at your disposal, the best algorithm to determine avoiding action is immensely more complex than that. I don’t envy those responsible for software design…. or all the people sitting in the driveway waiting for their cars to download the latest software update before taking them to work!
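To make that contrast concrete, here is a minimal sketch in Python: the one-line "apply brakes" rule next to even a slightly richer avoidance rule. All names and inputs are hypothetical, and a real planner weighs far more than this:

    def naive_controller(obstacle_in_path: bool) -> str:
        # The trivial rule: brake if anything is in the path.
        return "BRAKE" if obstacle_in_path else "CRUISE"

    def avoidance_controller(obstacle_in_path: bool,
                             left_lane_clear: bool,
                             distance_m: float,
                             stopping_distance_m: float) -> str:
        # Adding just one extra option (steering) already forces the logic
        # to weigh stopping distance and surrounding traffic.
        if not obstacle_in_path:
            return "CRUISE"
        if distance_m > stopping_distance_m:
            return "BRAKE"            # enough room to stop in-lane
        if left_lane_clear:
            return "SWERVE_LEFT"      # braking alone will not avoid impact
        return "BRAKE"                # least-bad remaining option

    print(avoidance_controller(True, True, 15.0, 25.0))  # SWERVE_LEFT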
We need a debate about how much safer self-driving cars need to be than cars driven by people. The answer will probably lie between 1% and 10%, i.e. self-driving cars would kill between 1% and 10% as many people as normal cars. 10% would convince a lot of people but would still kill one person per day in the UK; 1% is less than one a week, which should convince most people.
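Checking that arithmetic, on the assumption that the baseline being used is roughly 3,500 UK road deaths a year (the mid-2000s figure; the figure today is nearer 1,800, which would halve these numbers):

    uk_deaths_per_year = 3500   # assumed baseline, see note above
    for fraction in (0.10, 0.01):
        deaths = uk_deaths_per_year * fraction
        print(f"{fraction:.0%} as deadly: {deaths:.0f}/year, ~{deaths / 365:.2f}/day")
    # 10% -> 350/year, roughly one a day; 1% -> 35/year, under one a week.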
This case raises an interesting point for the future. If fully autonomous vehicles are eventually allowed on our roads, based on the premise that they are safer than human drivers, then who is responsible for the inevitable fatalities that follow?
Do we just accept the situation that you could, through no fault of your own, be killed by a 'robot' and no one is to blame because statistically they are safer than humans? I cannot see that this situation would ever be politically acceptable, and the public would probably react against the technology. I believe a lot more thought is needed before this technology is fielded.
And another point for consideration, could anyone honestly see autonomous vehicles operating safely in countries like India, for example ?
Having just seen the footage of the accident released by the police, I do not think the fact that the car was autonomous is the issue. The woman was strolling across the road with a bike, wearing dark clothes. The 'driver', on seeing the woman in the headlights, had no time to react; I think this would have been the case for almost any driver. However, the autonomous systems should have picked up an obstacle crossing in front before it was visible to the driver, and hence there is a system flaw.
"The woman was strolling across the road with a bike, wearing dark clothes. The 'driver', on seeing the woman in the headlights, had no time to react; I think this would have been the case for almost any driver."
Chris's comment reminded me of an incident in Edinburgh in May 1961. I was driving my car (unusual for a student then) in the dark, away from a Scottish University Tory gathering (yes, I know, we all make mistakes when we are young; two of Thatcher's future ministers were also there), and perhaps NOT taking as much care as I should. I was part way across a zebra crossing when out of the corner of my eye I saw an elderly couple literally stepping back in horror as they realised that I had not seen them. How different my life might have been had I hit them? Perhaps the gods were looking out for me… and for this couple.
What it did was make me ultra-careful throughout the rest of my driving career (which continues) at any place where I realise – and build into my thinking – that pedestrians and moving cars are in close proximity. About 40 years ago I broke my ankle (as one does) jumping off a children's roundabout, having been admonished by 'she who must be obeyed' for not taking the children out for weeks. (I had been in the USA helping to improve the engineering and productivity of the process of carbonising the material for the Space Shuttle booster nozzles.)
I was on crutches for several weeks. If I learnt nothing else it was that those with the misfortune to have some disability are very vulnerable. On crutches, one realises that one cannot move any faster than one is doing: and that even though on a specified crossing point, IF the vehicle you can see coming towards you does not STOP, you will be run over! It concentrated my mind amazingly as the driver in that situation later. I have always since given anyone on a crossing inordinate amounts of time to do so: (sorry if you have been the driver behind me, but…)
Is it possible to 'patch in' to the software of the autonomous systems now being tested episodes such as those sadly described in my personal experience? Might there be merit in doing so?
Am I the only one wondering why a woman dressed in dark clothing chose to push her bike across a dark street without keeping a look out for vehicles? If a car with a driver had hit her, it wouldn’t even have made the local news.
I suspect the visual detection system was at fault. Being involved in the light-guard industry over many years, I can remember when the first laser scanning units were trying to gain safety approval for guarding access points to robots. A well-known and respected German company spent a number of years perfecting their equipment to detect black corduroy trousers walking into the danger zone. It took a lot of expertise and money to perfect the optical systems and then meet the requirements of the independent safety authority. Even after all this work there was still concern for some years. And this was only guarding in well-defined areas, with the sensor stationary or moving at 4mph.
I've seen no mention anywhere of an independent authority certifying the detection systems, which is mandatory for all guarding equipment in factories. Why should we allow what is a software company to lash together commercial detector systems – which I suspect were not designed for safety applications – and then let it loose on the general population?
The system probably would not pass the requirements for an AGV used in factories at 4mph in close proximity to operatives, covered by the Health and Safety at Work Act.
Gordon: I could not have said it as well as you did. Evidently, this stupid rush to get autonomous vehicles onto the roads as soon as possible, while those roads are full of imperfect beings, is completely wrong! Let's see if we can obtain ANY decent details about the sensors used by the company, and HOW MUCH solid engineering went into their implementation in this rushed "solution".
A human can (usually) see pedestrians about to enter the roadway, far back enough to make adjustments and cover the brake pedal just in case. Did the AI in fact "see" the pedestrian, or have code present to force an adjustment in speed or trajectory, or to prepare to brake? None of the above? Oh, that is too bad.
Let’s look at the facts here. From the videos provided :
1) The safety driver was clearly not paying much attention to the road ahead, instead she was frequently glancing away and downwards.
2) The car did not detect the woman crossing the road in front of the car.
The woman did not just step into the road. The car was driving in the right-hand lane and the woman was crossing left to right; she had therefore already crossed the left-hand side of the road.
That it is dark is unimportant in the context of this particular car.
a) the car should have detected the woman (it did not);
b) the car should have braked and attempted to avoid the woman (a rough numbers check follows below).
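A back-of-envelope check of point (b): how much detection range braking at that speed actually needs. All values are assumed for illustration (38mph, ~0.5s of system latency, hard braking at 7 m/s²):

    v = 38 * 0.44704        # mph -> m/s, ~17 m/s
    latency = 0.5           # s, assumed perception + actuation delay
    a = 7.0                 # m/s^2, assumed hard braking on dry tarmac
    stopping = v * latency + v ** 2 / (2 * a)
    print(f"Minimum detection range for a full stop: ~{stopping:.0f} m")  # ~29 m
    # Lidar and radar range comfortably exceeds ~30m even in darkness, which
    # is why a total failure to react suggests a system fault rather than a
    # fundamental sensing limit.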
To John Sally Swigglebottom's point – I *thought* these autonomous cars were supposed to use a variety of "eyes", including versions of RADAR. As John says, the pedestrian had been crossing from the left lane into the right lane – why on earth did the RADAR not see the pedestrian? Were they *only* relying on visible-light vision?
Also, why did the pedestrian not notice the oncoming headlights? But that is another story.
RADAR signals, even when optimised for non-metallic objects, depend on radio-signal reflectivity, which brings another "reflection" (no pun intended): the pedestrian was pushing a bicycle, which has a larger radar reflection because of its relatively long metal tubes. Therefore, either the RADAR sensor used was not the proper one, or it was badly implemented, or the software decided it was not a human crossing in front. In fewer words, the system DID NOT work.
Re: Gordon Oscroft’s comment
"A well-known and respected German company spent a number of years perfecting their equipment to detect black corduroy trousers walking into the danger zone."
I have also had a number of detection issues in the past using infrared scanning for pedestrians, some of whom were wearing highly reflective safety clothing; the highly reflective clothing failed to be detected, which rather defeated the purpose of wearing it.
I fully support driverless vehicles, as they can offer many advantages in terms of car sharing, congestion management and, most importantly, safety. BUT how was an autonomous car breaking the speed limit, and how did it manage to hit a pedestrian? I would support a moratorium on testing until all the facts from the incident are made public and the necessary "prevent recurrence" steps are put in place so this does not happen again.
One second she was not visible; the next second she was. The accident came from her not seeing the car as she entered the traffic stream at her own risk. The problem to be solved by the car is how fast, and at what angles, it should have seen the danger. Whether it is a train with a driver in position or a fully alert car driver, there is a minimum distance within which the vehicle can stop, and no matter what the circumstances this will always be so. There will always be a balance between users on the road; no system will ever give zero accidents – unless we stop where we are?