Having been briefly shown a list of eight target photos, each marked as either friend or foe, test subjects had to make rapid decisions on whether to carry out simulated drone-strike assassinations on individual targets. An AI then offered a second opinion on the validity of each target. Unbeknownst to the subjects, the AI's advice was completely random.
Despite being informed of the fallibility of the AI systems in the study, two-thirds of subjects allowed their decisions to be influenced by the AI. The work, conducted by scientists at the University of California – Merced, is published in Scientific Reports.
“As a society, with AI accelerating so quickly, we need to be concerned about the potential for overtrust,” said principal investigator Professor Colin Holbrook, a member of UC Merced’s Department of Cognitive and Information Sciences.
According to Holbrook, the study's design was a means of testing the broader question of placing too much trust in AI under uncertain circumstances. He said the findings are not just about military decisions and could apply to contexts such as police being influenced by AI to use lethal force, or a paramedic being swayed by AI when deciding who to treat first in a medical emergency. He claimed the findings could even extend to major life decisions such as buying a home.
“Our project was about high-risk decisions made under uncertainty when the AI is unreliable,” said Holbrook. “We should have a healthy scepticism about AI, especially in life-or-death decisions.
“We see AI doing extraordinary things and we think that because it’s amazing in this domain, it will be amazing in another. We can’t assume that. These are still devices with limited abilities.”