Earlier this week I came across an infographic published by tech website Futurism. It’s based on the predictions of Ray Kurzweil, an American computer scientist and author who has written extensively about artificial intelligence. In his book The Singularity Is Near, Kurzweil outlines a future of ever-expanding computing power and automation, where robots and AI become increasingly embedded in our lives.
Kurzweil is now Director of Engineering at Google, where investment in AI and robotics has been relentless. The singularity that the title of his book refers to is the point in the future where AI surpasses humanity as the most capable form of intelligence on the planet. What happens then is a point of much speculation, and has been the premise for a wealth of dystopian science fiction. Will AI be a benevolent force, helping humanity achieve things never before imagined? Or will it exploit its dominance over the lesser intelligence and crush us like ants?
According to Kurzweil, the singularity will happen about halfway through this century. But before the inevitable enslavement/nuclear destruction by our machine overlords, we have perhaps a more immediate issue to contend with, and one that could help shape our future relationship with AI. Robots are becoming noticeably more pervasive in society, and automation is one of the key trends of our times. And while true artificial intelligence may still be some way off, autonomous machines are already here and influencing our interactions with technology.
Robotics and autonomous systems have been identified by the government as one of the Eight Great Technologies – areas of rapid innovation where the UK has an opportunity to lead. But according to robotics expert Prof Noel Sharkey, the amount being invested by the government pales in comparison to what tech giants like Google are doing. Sharkey was speaking at the launch of the Foundation for Responsible Robotics (FRR), a new multidisciplinary group of technology academics. Its aim is to engage policy makers and the public in a conversation about robotics and humanity’s role in its development.
Increasing automation has a range of complex social and ethical implications. Robots have applications across virtually every aspect of our lives, from healthcare and education, to farming, construction, policing, transport and service. One of the areas Sharkey and his colleagues from the FRR spoke extensively about is care. Japan, with its ageing population, dearth of migrant workers, and proclivity for technology, is investing billions in robotics for elderly care. These care robots cover a variety of functions, from companionship robots that replicate pets, to a robot ‘bear’ that assists in hoisting and lifting.
While these innovations undoubtedly carry some benefits, they also raise some ethical questions. If granddad is being looked after by a robot, does that mean we can just stay at home watching Netflix and not visit? If grandma has just got out of bed in her nightie, should a camera-equipped carebot be able to enter her room unannounced?
There is also a serious question of dignity. Being hoisted out of bed by a friendly looking robot bear might be fun on day one in your new care home, but one imagines the novelty factor could be fairly short-lived. The FRR is keen to point out that its role is not to stifle innovation, but to ensure that the ethical issues associated with these advances are given proper consideration, and that humans take responsibility for the robots that are being developed.
Jobs are of course another major area of concern. According to the International Federation of Robotics, there will be 31 million service robots in operation by 2018. The threat to jobs is significant, and the FRR can point to some examples to illustrate the pace of change. Truckers in the mines of Australia are beginning to be replaced by autonomous vehicles. While the owners of the mines would no doubt argue that it is a dangerous job and one that should be automated, the savings being made on labour are also enormous. How soon before bus, train and taxi companies follow suit?
Sharkey also gave the example of a chain of sushi restaurants in Japan (where else?) where automation is so advanced that only one manager is required to look after 11 locations. No waiters, just a small team of chefs preparing dishes according to a customer-based algorithm, with conveyor belts delivering the food. How long before the chefs themselves are replaced? Likewise the millions employed in fast food chains around the world.
It’s a brave new world, and many will see the loss of jobs as an inevitable consequence of progress. One can no more hold back the march of automation than the Luddites could the industrial revolution. But what we can do, and what the FRR is urging us to do, is to consider the ethical issues involved and incorporate a moral framework into the technology’s evolution. By doing that, we will perhaps have some control over the path along which AI develops, sowing the seeds for a future where man and machine co-exist in harmony.
At the launch, Prof Johanna Seibt from the FRR touched on some fascinating research. She said that studies have found that humans are prepared to lie to other humans in order to protect robots – empathy and anthropomorphism combining to powerful effect. According to Seibt, soldiers have even been known to risk their lives to save their robotic buddies. When the singularity arrives, and humans are no longer the alpha dogs on this planet, one wonders if the machines will empathise with us in quite the same way.