The shock of the not-so-new

Stuart Nathan
Features Editor

As a computer beats a human master of the game Go in a five-game match, should we be worried about the rise of Artificial Intelligence? Not if we remember what science fiction is, and keep our sense of proportion

Artificial intelligence (AI) has been in the news quite a bit in the past few weeks, with reports of Google’s DeepMind AI winning four out of five games of the ancient strategy game Go against one of its top human masters, Lee Se-dol. These reports have often been accompanied by warnings about the apocalyptic potential of AI from worried scientists and engineers, including Professor Stephen Hawking, Elon Musk and Bill Gates. Full AI, they say – by which they mean a conscious machine imbued with self-will – could contribute to the obliteration of the human race. Human intelligence, hampered by slow biological evolution, the drive for self-preservation and inconvenient squishy emotions, would simply not be able to keep up with self-developing machine intelligence, and we would soon be overrun.

Others worry about more prosaic and near-term applications of the sort of partial AI that DeepMind represents: systems capable of strategising within strict limits, using mathematical algorithms to navigate the situation they are in (in DeepMind’s case, what’s on the Go board and nothing else; not having to take a drink or go to the loo during the game must be a considerable advantage), but not otherwise aspiring towards any sort of consciousness – in fact, not aspiring at all, because aspiration is one of those inconvenient squishy emotions. Such systems, warn academics such as Prof Noel Sharkey, could conceivably be put in charge of armed defence systems and given the power of life or death on the battlefield. They could also take over some jobs currently performed by human beings, wherever the decision-making involved can be represented mathematically.

The second group have a point. The first? Not so much. They’re mistaking science fiction for real life.

Regular readers who know how much of a sci-fi fan I am might be surprised by this conclusion. They’ll have noticed our recent addition of a sci-fi column written by Jon Wallace (which we hope you’re enjoying as much as we are, by the way) and might be thinking, “If they’re giving us science fiction to read, doesn’t that mean they think we should take it seriously?” And yes, we do. But Jon’s column never loses sight of what science fiction is, and what it’s for.

The job of science fiction isn’t primarily to predict. Of course it can predict, and at its best it does so very well and persuasively. But that’s not what it’s for. What it’s for, ultimately, is to be fiction, and that means it has to be entertaining. The wellspring of entertaining writing is conflict, as Jon has already pointed out several times in his columns. And that means that when your plot involves AI, the AI is inevitably going to cause your squishy protagonists problems. Otherwise, what you’ve got is a very boring work of fiction. Don’t believe me? Try reading some of Arthur C Clarke’s lesser works. Set an alarm, because you will fall asleep.

It just wants to love, and be loved in return

So of course we can look to sci-fi for some insight into AI, but we shouldn’t expect much comfort. Two examples spring immediately to mind, and neither is cheerful. First, from the minds of the aforementioned Mr Clarke and noted pessimist Stanley Kubrick, is 2001: A Space Odyssey, in which HAL 9000, an AI in charge of the spacecraft Discovery and its mission to investigate a mysterious and enormous monolith which has appeared in Jupiter’s orbit, murders its human crew (or attempts to, before the last survivor performs an electronic lobotomy) because it believes they are endangering the mission. Second, courtesy of James Cameron (who doesn’t appear to have any more time for humans than Kubrick did), are the Terminator films, in which a defence AI turns genocidal and then sends robotic assassins back in time to murder the leader of the human resistance of its own era, either before his birth or while he is a helpless adolescent.

And neither stands up to much scrutiny. HAL is sometimes described as villainous, but it isn’t; it’s just poorly programmed. If the Discovery mission team had realised that the mission’s real goal should have been not to study the monolith but to bring humans into contact with it, and had programmed HAL accordingly, everything would have gone swimmingly. The film wouldn’t have been as good, though.

And the Terminator series unwittingly contains the seeds of its own weakness. In the second film, the supposedly feeble adolescent future resistance leader instils a sense of morality in the AI sent to guard him (it all gets a bit complicated; blame time travel). There’s an important point here. If machines do become conscious, who’s to say they won’t have squishy emotions? In humans, consciousness and everything that goes with it arise from the structure of the brain and its physical processes, in a way that we can’t even begin to understand yet. If we can replicate that in silicon (or some other synthetic substrate), why should the same thing not happen?

Unfortunately, in Terminator 2 this leads to the teeth-grindingly bathetic scene of Arnold Schwarzenegger intoning “I know now vhy you cry” as he descends, thumb aloft, into a vat of molten metal – the moment that makes the film such a pale echo of its chillingly relentless predecessor (whose tension arises from the conflict between its human characters and the peril generated by its AI villains).

Apologies for the spoilers there, readers, but the films have been out for a while. They’re still worth watching, believe me.

If we want a more hopeful vision of AI, we have to turn to the late and much-missed Scottish author Iain Banks, who in his sci-fi novels (written using his middle initial, M) imagined a civilisation called the Culture, which long ago developed advanced AI and effectively turned over the running of its affairs to vastly intelligent synthetic ‘Minds’. These also operate the enormous spacecraft and orbiting synthetic rings that house the Culture’s organic population, and do abstract maths for fun with whatever mental capacity isn’t taken up by everything else. Banks never said specifically whether the Minds had emotions or morality, but their habit of adopting whimsical and sometimes grimly funny names (warships typically called themselves things like Experiencing A Significant Gravitas Shortfall, Gunboat Diplomat or So Much For Subtlety) suggests that they probably did. Culture Minds liked and admired humans, particularly for their ability to reach intuitive conclusions in complex situations, and would certainly never commit genocide (except when they did, in a long and horrible war, about which they were very remorseful).

The other thing that the AI pessimists are missing is the origin of sci-fi itself. It’s a reaction to what in the 19th century was called ‘the Sublime’: the sense of awe and of physical or psychological threat generated by, for example, impressive natural landscapes or ruins. The latter made authors and artists look fretfully into the unknowable chasm of the past, and out of that fretting came the Gothic genre: Dracula, inspired partly by Bram Stoker’s visits to the ruins of Whitby Abbey, is a notable example. Science fiction arose from the same instinct, but there it was the chasm of the future that inspired the awe and fear; and that’s inherently scarier, because it inevitably includes the author’s and the reader’s own deaths. It’s worth remembering that at much the same time as Bram Stoker was shivering in Whitby and dreaming up Dracula, HG Wells was sitting in cosy Woking writing the archetypal future shock, The Time Machine.

Gary Oldman as Count Dracula. If this doesn’t scare you, neither should AI

The point of that digression is this: what we fear in stories of AI isn’t machine intelligence. Just as what we fear in the malevolent ghosts and vampires of Gothic horror is ourselves with the constraints of mortality removed, what we fear in AI is ourselves – our leaders and our military – without the constraints of morality or distraction. It’s a fear of ourselves, of our own shadows.

And in the case of AI replacing humans in jobs or on the battlefield, sci-fi can act as a warning: don’t be bloody stupid, and remember that what’s important is people. Always people.