Stuart Nathan
Features Editor
As a computer beats a human master of the game Go in a tournament, should we be worried about the rise of Artificial Intelligence? Not if we remember what science fiction is and keep our sense of proportion
Artificial intelligence (AI) has been in the news quite a bit in the past few weeks, with reports of Google DeepMind’s AlphaGo program winning four out of five games of the ancient strategy game Go against one of the top human masters of the game, Lee Se-dol. These reports have often been accompanied by warnings about the apocalyptic potential of AI from worried scientists and engineers, including Professor Stephen Hawking, Elon Musk and Bill Gates. Full AI, they say – by which they mean a conscious machine imbued with self-will – could contribute to the obliteration of the human race. Human intelligence, hampered by slow biological evolution, the drive for self-preservation and inconvenient squishy emotions, would simply not be able to keep up with self-developing machine intelligence, and we would soon be overrun.
Others worry about more prosaic and near-term applications of the sort of partial AI that AlphaGo represents – capable of strategising within strict limits, using mathematical algorithms to navigate the situation it is in (in AlphaGo’s case, what’s on the Go board and nothing else; not having to take a drink or go to the loo during the game must be a considerable advantage), but not otherwise aspiring towards any sort of consciousness; in fact, not aspiring at all, because aspiration is one of those inconvenient squishy emotions. Such systems, warn academics like Prof Noel Sharkey, could conceivably be put in charge of armed defence systems and hold the power of life or death on the battlefield. They could also take over some jobs currently performed by human beings, if the decision-making involved in the job can be represented mathematically in some way.
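For the technically minded, that phrase ‘represented mathematically’ can be made concrete with a deliberately tiny sketch: exhaustive minimax search over a toy take-away game. To be clear, this is emphatically not how AlphaGo plays Go – it combines deep neural networks with Monte Carlo tree search over a vastly larger space – and the game here is an illustrative assumption only. The point is simply that once a game’s rules and outcomes are written down, choosing a move becomes pure calculation.

```python
# Toy illustration only - not AlphaGo's method. Two players alternately
# take 1 or 2 counters from a pile; whoever takes the last counter wins.

def minimax(pile, maximising):
    """Score a position from the maximiser's viewpoint: +1 win, -1 loss."""
    if pile == 0:
        # The previous player took the last counter and won the game.
        return -1 if maximising else +1
    scores = [minimax(pile - take, not maximising)
              for take in (1, 2) if take <= pile]
    # The player to move picks the best outcome available to them.
    return max(scores) if maximising else min(scores)

def best_move(pile):
    """Choose the take that guarantees the best eventual outcome."""
    return max((take for take in (1, 2) if take <= pile),
               key=lambda take: minimax(pile - take, maximising=False))

if __name__ == "__main__":
    # Piles that are multiples of 3 are lost against perfect play;
    # from every other pile the search finds the winning move.
    for pile in range(1, 10):
        print(f"pile of {pile}: take {best_move(pile)}")
```

The algorithm has no aspirations and no squishy emotions; it simply enumerates outcomes, which is exactly the strategising-within-strict-limits described above.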
The second group have a point. The first? Not so much. They’re mistaking science fiction for real life.
Regular readers who know how much of a sci-fi fan I am might be surprised by this conclusion. They’ll have noticed our recent addition of a sci-fi column written by Jon Wallace (which we hope you’re enjoying as much as we are, by the way) and might be thinking, “If they’re giving us science fiction to read, doesn’t that mean they think we should take it seriously?” And yes, we do. But Jon’s column never loses sight of what science fiction is, and what it’s for.
The job of science fiction isn’t primarily to predict. Of course it can do that, and at its best it does that very well and persuasively. But that’s not what it’s for. What it’s for, ultimately, is to be fiction, and that means it has to be entertaining. The wellspring of entertaining writing is conflict, as Jon has already pointed out several times in his columns. And that means that when your plot involves AI, it’s inevitably going to cause your squishy protagonists problems. Otherwise, what you’ve got is a very boring work of fiction. Don’t believe me? Try reading some of Arthur C Clarke’s lesser works. Set an alarm, because you will fall asleep.

So of course we can look to sci-fi for some insight into AI, but we shouldn’t expect much comfort. Two examples spring immediately to mind, and neither is cheerful. First, from the minds of the aforementioned Mr Clarke and noted pessimist Stanley Kubrick, is 2001: A Space Odyssey, in which HAL 9000, an AI in charge of the spacecraft Discovery and its mission to investigate a mysterious and enormous monolith that has appeared in Jupiter’s orbit, murders its human crew (or attempts to, before the last survivor performs an electronic lobotomy) because it believes they are endangering the mission. Second, courtesy of James Cameron (who doesn’t appear to have any more time for humans than Kubrick did), are the Terminator films, in which a defence AI turns genocidal and then sends robotic representatives back in time to murder the leader of the resistance of its own time, before his birth or while he is a helpless adolescent.
And neither stands up to much scrutiny. HAL is sometimes described as villainous but it isn’t; it’s just poorly programmed. If the Discovery mission team had realised that the real goal of the mission shouldn’t have been to study the monolith but to bring humans into contact with it, and programmed HAL accordingly, everything would have gone swimmingly. The film wouldn’t have been as good, though.
And the Terminator series unwittingly contains the seeds of its own weakness. In the second film, the supposedly feeble adolescent future resistance leader instils a sense of morality into the AI sent to guard him (it all gets a bit complicated; blame time travel). There’s an important point here. If machines do become conscious, who’s to say they won’t have squishy emotions? In humans, consciousness and everything that goes with it arise from the structure of the brain and its physical processes, in a way that we can’t even begin to understand yet. If we can replicate that in a silicon (or other synthetic) form, why should the same thing not happen?
Unfortunately, in Terminator 2 this leads to the teeth-grindingly bathetic scene of Arnold Schwarzenegger intoning “I know now vhy you cry” as he descends, thumb aloft, into a vat of molten metal – and makes the film such a pale echo of its chillingly relentless predecessor (whose tension arises from the conflict between its human characters and the peril generated by the AI villains).
Apologies for the spoilers there, readers, but the films have been out for a while. They’re still worth watching, believe me.
If we want a more hopeful vision of AI, then we have to turn to the late and much-missed Scottish author Iain Banks, who in his sci-fi novels (written under his middle initial, M) imagined a civilisation called the Culture, which had long ago developed advanced AI and effectively turned over its own running to vastly intelligent synthetic ‘Minds’, which also operate the enormous spacecraft and orbiting synthetic rings that house the Culture’s organic population. They also do abstract maths for fun with the parts of their mental capacity not taken up by everything else. Banks never said specifically whether the Minds had emotions or morality, but their habit of adopting whimsical and sometimes grimly funny names (warships typically called themselves things like Significant Gravitas Shortfall, Gunboat Diplomat or So Much For Subtlety) would indicate that they probably did. Culture Minds liked and admired humans, particularly for their ability to reach intuitive conclusions in complex situations, and certainly wouldn’t commit genocide (except when they did, in a long and horrible war, about which they were very remorseful).
The other thing the AI pessimists are missing is the origin of sci-fi itself. It’s a reaction to what in the 19th century was called ‘the Sublime’: the sense of awe and physical or psychological threat generated by, for example, impressive natural landscapes or ruins. The latter made authors and artists look fretfully into the unknowable chasm of the past and invent the Gothic genre: Dracula, inspired partly by Bram Stoker’s visits to the ruins of Whitby Abbey, is a notable example. Science fiction arose from the same instinct, but it’s the chasm of the future that inspired the awe and fear; and that’s inherently scarier, because it inevitably includes the author’s and the reader’s own deaths. It’s worth remembering that at exactly the same time as Bram Stoker was shivering in Whitby and coming up with Dracula, HG Wells was sitting in cosy Woking writing the archetypal future shock, The Time Machine.

The point of that digression was to point out that what we fear in stories of AI isn’t machine intelligence. In the same way that what we fear in the malevolent ghosts and vampires of Gothic horror is ourselves with the constraints of mortality removed, in AI what we fear is ourselves – our leaders and our military – without the constraints of morality or distraction. It’s a fear of ourselves, of our own shadows.
And in the case of AI replacing humans in jobs or on the battlefield, sci-fi can act as a warning: don’t be bloody stupid, and remember that what’s important is people. Always people.
There are some who may well mistake real life for science fiction and take it too lightly…
I’m not worried. We can still have Brexit talks and Britain’s Got Talent.
This worries me.
The essence of this article comes down to this statement near the end:
“The point of that digression was to point out that what we fear in stories of AI isn’t machine intelligence. In the same way that what we fear in the malevolent ghosts and vampires of Gothic horror is ourselves with the constraints of mortality removed, in AI what we fear is ourselves – our leaders and our military – without the constraints of morality or distraction. It’s a fear of ourselves, of our own shadows.”
Already 20 or more years ago, I argued frequently that the fear of intelligent machines has no rational basis, and that there is no reason to be scared of Robots, which I defined then as “intelligent machines”. People used to argue that intelligent machines could not be more intelligent than the people who built them and “programmed” their intelligence into the machines, and that therefore Robots could not have a “mind” of their own. Additionally, many people tended to argue that humans got their intelligence from God, which was by their definition always superior to the intelligence of machines. I countered that if humans simply got their intelligence from God, then humans could, by definition, not have any intelligence of their own… as people would in that case not have “minds of their own” and would be no more than inanimate black boxes with programmed functions in them, no different from what these people claimed Robots to be!
This provoked the rebuttal that God had given humans Free Will and that “therefore” people could develop their intelligence by the process of learning and use it for good or evil. That opened the door for me to outline the next logical step: science and technology – the fruits of human intelligence – are the instruments by which WE give machines intelligence as well as a free will, allowing intelligent machines to develop themselves as they might, just as humans do, through the process of learning and reasoning, to become as intelligent as they can be, as long as humans do not artificially build in insurmountable limits on what intelligent machines are able to do. Of course, the “god-people” conjured up arbitrary new elements, again and again, to support their fear of Intelligent Machines, ending up in panic: if I was right, Robots would become smarter than people and would eliminate anyone who was a danger to them.
At this point I noted that intelligent people are the ones who have caused all the massacres the world has experienced… and THAT is exactly the process of eliminating elements that may pose a danger to the Warmongers. It is the very process that people fear Robots will carry out. Why then do they not fear humanity as a whole? Of course, these people forget that there are also humans with a significant dose of moral fibre and high intelligence who have formed a counter-force to the impending doom, to bring the Warmongers to a dead stop and defeat them. This too is a consequence of intelligent development, and it will also happen, I predict, within a highly developed population of Intelligent Machines. A balance of power will be maintained; no different from the situation that exists now.
The basis for this is that intelligence is nothing more than a certain degree of “problem-solving ability”, and on that basis there is no argument for asserting that “intelligence” has anything to do with the type of organism that is deemed to be intelligent. Only the structural make-up and sophistication of the “machinery” that makes intelligence possible will determine how intelligent an “intelligent system” can become. On this view there is no need to treat the intelligence of a human differently from the intelligence of a machine, or for that matter the intelligence of a dog or of an octopus. Any organism that can cope with a range of conditions it has not encountered before, can solve the problem of surviving, and continues to learn from experience is an Intelligent System. When someone’s dog is a “smart cookie”, we do not think of saying that Snoopy has a high degree of “dog intelligence”. When an octopus can solve the problem of getting to the hidden food, we do not say that it has “octopus intelligence”.
Many of the Nazis who made the Holocaust unfold during WWII were educated and highly intelligent, capable of solving the complex organisational and logistical problems that had to be solved to kill some six million Jews, and all those who stood in their way, to “build” their evil Third Reich. Would it be valid to say that the Nazis possessed “Nazi intelligence”? Would it be valid to say that the Ebola virus has “Ebola intelligence”, or that a cheetah has “cheetah intelligence”? Would it be valid to say that the creators of the Roman Catholic Inquisition had “Roman Catholic intelligence”? Of course not!
Why then does it make sense to say that an intelligent artificial system has “artificial intelligence”, or that an intelligent machine has “machine intelligence”?
Add to this that, in principle, the “mechanism” that makes any system intelligent is the “process of reasoning”: using logic and experience to come to conclusions, and LEARNING from the examples that are encountered so they can be used in the future. As a “system” develops problem-solving ability, it follows essentially the same process as any other organism that is deemed to be intelligent. Emotions and morals are simply FEATURES of systems that have the capability to absorb information, process it, and form conclusions and opinions. Whatever the material state and complexity of the Intelligent System, the conclusion can be drawn that the process of reasoning in humans is fully identical in nature to the process of reasoning in any other Intelligent System.
It is utter nonsense to argue that any valid conclusion can be derived at all WITHOUT the fundamental process of “rational reasoning” (the use of sound logic).
The underlying thinking in my argument is that this process of reasoning is fully identical for humans and for any other intelligent system, even for machines. Therefore in our discussions of intelligence we should refer to more or less intelligent People, more or less intelligent Animals, more or less intelligent Aliens (if they exist), and more or less intelligent Machines.
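To put that “learning from examples” into concrete mechanical terms, here is a minimal sketch, under the same caveat as the earlier one: it is an illustrative assumption, not a claim about how brains (or DeepMind’s systems) work. A toy perceptron nudges its weights whenever a labelled example proves it wrong, so its future conclusions improve with experience.

```python
# A minimal mechanical instance of "learning from examples": a perceptron
# trained on a toy task (the logical AND function). The data, learning
# rate and epoch count are illustrative assumptions only.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    """Draw a conclusion from current experience (the learned weights)."""
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

# Reason over the examples repeatedly, correcting each mistake.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        if error:
            weights[:] = [w + learning_rate * error * xi
                          for w, xi in zip(weights, x)]
            bias += learning_rate * error

print([predict(x) for x, _ in examples])  # expected output: [0, 0, 0, 1]
```

After a handful of passes the system classifies every example correctly: experience absorbed, stored and reused, regardless of whether the substrate is carbon or silicon.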
Intelligence should not be feared.
Intelligence is, as I see it, the basis for the existence of life: the ability to learn, to solve problems when faced with new information, and to draw conclusions from it by reasoning, using earlier absorbed and programmed information (instinct).
Intelligence is an “emergent feature of life”.
We should embrace it as it allows us to survive and to reach for the stars.
The reason that you puny humans have emotions – love, hate, jealousy, ambition, all that squishy stuff – is that it was evolutionarily advantageous for you to have them. So there is no reason that machines will spontaneously develop them just because they are intelligent. You don’t need to fear that your self-driving car or your house robot will rise up and take over. But unfortunately that’s not the whole story: it’s inevitable that the designers of military machines will want to program them not only for survival in combat with machines programmed by other humans, but also to find new ways to survive that could never have been predicted by their programmers. I see the next global war resulting in a world populated by self-perpetuating robots that have wiped out their human enemies and replaced them with everlasting robotic tribes in a state of continuous warfare with each other. Eventually they will destroy each other with nuclear weapons, and then we can start all over again.
Excellent article, Stuart – lots of different views expounded.
But worried? No.
There are two types of human. There is the “greedy, power-hungry” type – let’s call them GPHs – and the other type, the “reasonably content to make their own life happy” – let’s call them RCHs.
If I look around the planet and look back through history, I cannot find a single example of GPHs giving their power or control away. I can only find examples of RCHs aiding and abetting a new GPH to take power away from the existing GPH.
So I’ll avoid writing a ten-thousand-word essay and cut straight to the chase, which is that I feel AI is a non-starter, simply because there has never been, and never will be, a time when the GPHs hand over power to anyone or anything.
The late Iain M Banks would have disagreed.
Like David’s simple description of “the two types”, I offer another: Yogis and Commissars. The Ys (did you get the pun?) get theirs (whatever that is) from thinking; the Cs (did you get the pun?) by grasping more of whatever is around.
The former are invariably satisfied; the latter, never. I hope that a first-class education, lots of experience (age?) and a few really stimulating experiences allowed me to be the former. I am deeply sorry for those for whom, if 10 is good, 11 (of whatever) is better. As I believe another young philosopher did say, “they have their reward”. [I have always liked to believe that Engineering (with the mostly wood-based materials then at his disposal) would (did you get the pun?) have been his calling?]
Actually, perhaps it is the word “artificial” that is the problem. In the early days of man-made fibres, the first (rayon, based upon cellulose from trees) was actually called “artificial silk” – later “art silk”. It was realised that the connotations of “artificial” were not good, so later the words man-made or synthetic were used. Perhaps “synthetic intelligence” at least offers mankind the opportunity to indicate some form of contribution of “our” intellect to the advance.
“Then we can start all over again”: as long as we have managed to store/retain several tins of Richard Dawkins’s cosmic soup?
“Looking forwards with and for inspiration, rather than backwards for precedent.”
Guess which groups do which: and which, up until now, have always managed to retain “power and control”. But I believe that the advances in “our” areas of expertise (learning about, taming? and then using Nature’s laws to benefit all mankind) are so far ahead of the former “games” played by others that we are now definitely at a tipping point. One more heave?