Comment: We should not abandon thought experiments to AI

If we put blind faith in AI, do we risk losing the skills and art of thought experiment, which have helped humans uncover some of science’s greatest findings? AtkinsRéalis Technical Fellow for Mathematical Modelling, Professor Nira Chamberlain, discusses the need for scientific rigour.

Data-driven AI is here to stay: it is powerful, has immense potential and will continue to grow. It offers many benefits, but part of our responsibility when considering AI-enabled solutions is to think holistically about the wider implications, including the ethics.

At last week’s AI Safety Summit, we continued to hear concerns about AI’s impact upon jobs. However, one lesser-explored concern is the risk of de-skilling – that specific talents and abilities will be lost as AI-driven systems proliferate. Most of us now use our mobile phones to navigate, but what would happen if we lost signal, or our phone died? Would we be able to reach our destination using a paper map? Could we work out a route using landmarks, or identify which way was north?

Losing – and gaining – valuable skills

It's not just AI, of course: technological advances have been affecting skills retention for many years. My father was an amateur car mechanic, and he could fix our car in 10 minutes if it broke down. In more recent years, though, he would take the car to the garage – the engineering had become too digital for him. While these technologically advanced cars are undoubtedly safer and greener, and mechanics have benefitted from upskilling in these technologies, there is a danger that we will throw the baby out with the bathwater and that the more practical, hands-on skills my father had will be lost.

There are many skills that still have an important part to play in our future, even with the growth of AI. If we don’t maintain these abilities, we may not understand how or why what we are doing works. Technical drawing is another case in point. In the past, skilled designers used T-squares, protractors, and compasses to develop engineering designs on paper.

Now, we have computer-aided design (CAD) software. But if we rely on the computer for everything and don’t understand how it calculates measurements or load figures, for example, there is a significant risk that the outputs and designs will be deficient. This could have expensive and dangerous knock-on effects if errors are not spotted before construction begins.

Understanding and questioning

AI has limitations: to use it properly, you must understand the underlying theory of the problem you want it to solve. In a previous role, I was using an AI algorithm to classify a retail organisation’s customers but found the AI wasn’t effective. My colleague was convinced it was correct, though, so I offered to run a statistical test on the results as they came out. My analysis didn’t find any statistical significance in the results coming from the AI algorithm (i.e. it wasn’t successfully separating the groups of customers), so my colleague asked who invented the statistical test. I replied, “Gauss” – one of the most famous mathematicians, whose tests have been used by scientists and engineers around the world for the last 200 years. My colleague said, “Well, Gauss is wrong”. He was putting blind trust in the computer-generated results and ignoring everything else.
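The article does not name the test or the data involved, but as a rough illustration of the idea, the sketch below applies a Gaussian-based two-sample test (Welch’s t-test, via SciPy) to hypothetical spend figures for two customer groups produced by a classifier. If the p-value is large, the algorithm has not meaningfully separated the groups – the situation described above. The variable names, numbers and 0.05 threshold are all illustrative assumptions, not details from the article.

```python
# Illustrative sketch only: the article does not specify the test or the data.
# Idea: check whether an AI-produced customer segmentation is statistically
# meaningful by comparing a metric (here, hypothetical spend) across groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical spend figures for two customer groups output by the classifier.
# The underlying means are almost identical, mimicking a segmentation that
# does not really separate the customers.
group_a = rng.normal(loc=52.0, scale=15.0, size=200)
group_b = rng.normal(loc=53.0, scale=15.0, size=200)

# Welch's t-test (a Gaussian-based test): is the difference in mean spend
# between the two groups statistically significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

if p_value < 0.05:
    print(f"Groups differ significantly (p = {p_value:.3f})")
else:
    print(f"No significant separation between the groups (p = {p_value:.3f})")
```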

That blind faith – seeing AI as a modern-day prophet, without questioning its prophecies – is dangerous. It’s rather like trusting your decisions to new-age numerology. Described at a surface level, AI and numerology are not dissimilar. Like AI, numerology uses calculations to go into many layers of depth, with numbers and combinations of numbers assigned various meanings; it offers in-depth analysis of patterns; and it discusses future trends. As scientists or engineers, however, we look to prove the hypotheses we develop, and to question existing ways of doing things.

Thought experimentation

One way we generate hypotheses to prove is through thought experiments. In a thought experiment, you consider an issue and come up with a possible logical explanation for its cause. Then you can take steps to prove or disprove your theory. For example, in 1783, vicar and scholar John Michell proposed the existence of black holes, using a thought experiment that argued that the light from a large enough star, with strong enough gravity, would return towards it, making it undetectable to astronomers[1]. Twelve years later, mathematician Pierre-Simon Laplace independently proposed the existence of black holes, theorising that there existed a body that not even light could escape from, and providing the mathematical proof[2]. Both just had paper and pen – no computers, satellites, spaceships, or telescopes. Similarly, astronomer and mathematician Urbain Le Verrier predicted the existence and position of Neptune from unexplained irregularities in the orbit of Uranus, using a thought experiment and mathematical analysis – astronomers found it within one degree of his predicted position.
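The article does not spell out the calculation, but in modern notation the Michell–Laplace “dark star” argument can be restated as a short escape-velocity bound (a sketch added here for illustration, not part of the original piece):

```latex
% A modern restatement of the dark-star argument: light, treated as a
% projectile, cannot escape a body of mass $M$ and radius $R$ once the
% escape velocity reaches the speed of light.
\[
  v_{\mathrm{esc}} = \sqrt{\frac{2GM}{R}} \;\geq\; c
  \quad\Longrightarrow\quad
  R \;\leq\; \frac{2GM}{c^{2}},
\]
% which is the same radius that general relativity later identified as the
% Schwarzschild radius.
```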

While the computer algorithms of AI will find the hidden pattern, thought experiments will find the hidden logic.

If we abandon thought experiments and rely solely upon the findings of AI, we run the risk of never discovering the hidden logic. Finding hidden patterns does give us insights, of course – in medical research, for example, AI can detect patterns that indicate certain illnesses and help practitioners diagnose them. But discovering the logic helps us understand the ‘why’, the root causes behind the findings. So, while AI has an important role to play in our future, we must have the right driver – one who brings the scientific rigour and questioning attitude that enriches everything we apply it to.

Professor Nira Chamberlain, AtkinsRéalis Technical Fellow for Mathematical Modelling

 

[1] ‘This Month in Physics History’, APS News, aps.org

[2] Journal of Astronomical History and Heritage, vol. 12, p. 90 (2009); ADS bibcode 2009JAHH...12...90M, harvard.edu