Intelligence test

Safe interaction with humans means robots will need to have a ‘humanised’ social intelligence, says Prof Chris Melhuish

We are now seeing an ineluctable growth in the integration of embedded intelligence within mechatronic systems. Indeed, this mélange of hardware and software makes the definition of what constitutes a robotic system somewhat fuzzy around the edges, and an interesting philosophical question. The era of the 'Metal Mickey' image of a robot is well and truly over and, with the caveat that some areas are clearly more mature than others, we are now seeing intelligence in a huge range of actuating devices.



Examples include washing machines, cars, smart tele-operated surgical devices, intelligent missile systems, space exploration machines, submersible rovers, rehabilitation systems and assistive care robots. Moreover, the degree of autonomy and supervision afforded to these devices is also variable, and one can classify robotic devices into two broad categories: those which engage with humans in their own physical workspace and those which do not.



Furthermore, terms such as 'internet of things' and 'cybersystems' are now being bandied around, underlining the need for smart devices to 'join up their thinking'.



A considerable amount of first-class robotics research has been concentrated on intelligent automation and control, and it is relatively recently that human interaction with robots in the form of co-operative or supportive tasks, carried out in a shared physical workspace, has been a focus of activity for researchers.



Although these are early days and the science and engineering are immature, examples of robots in this area include those which help the infirm in their own homes, support individuals undergoing rehabilitation therapy and assist us in assembling or repairing objects.



Two closely coupled issues need to be addressed: first, robots working in a co-located space will need to be powerful to be useful, but they must also be safe; second, robots will need to be 'mind-readers'.



Powerful robots can be made less dangerous by making them lighter, thereby reducing momentum. They will need to avoid collisions, be intelligently compliant and perhaps be built from self-sacrificing sub-assemblies (the robot's arm snaps before yours does!): all non-trivial requirements. However, this is not enough. They will also need 'social intelligence' to make interaction with them 'natural' and, importantly, safe. When you interact with another human who is helping you (or vice versa), you make use of communication modes which include gesture, eye-gaze direction, non-language utterances, facial expression and body/head pose; robots, like you, will need to do the same, both as receiver and transmitter.
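
As a purely illustrative sketch of how such physical and social signals might be combined, the fragment below scales a robot's commanded speed using a contact-force limit, a separation distance and a simple 'is the human attending?' flag. The thresholds, field names and gaze signal are assumptions made for illustration only, not a description of any particular system.

```python
# Illustrative sketch only: scaling a co-located robot's speed from both
# physical measurements and 'social' cues. All names and thresholds here
# are assumptions for illustration, not a real robot API.

from dataclasses import dataclass

@dataclass
class SafetyInputs:
    contact_force_n: float      # measured force at the end effector (N)
    separation_m: float         # distance to the nearest human (m)
    human_is_attending: bool    # e.g. inferred from eye-gaze/head-pose tracking

def speed_scale(inputs: SafetyInputs,
                max_force_n: float = 20.0,
                min_separation_m: float = 0.3) -> float:
    """Return a factor in [0, 1] by which to scale commanded joint speeds."""
    # Hard stop on excessive contact force (a crude compliance limit).
    if inputs.contact_force_n > max_force_n:
        return 0.0
    # Keep a hard safety margin when the human is very close...
    if inputs.separation_m < min_separation_m:
        return 0.0
    # ...and move more cautiously when the human is not looking at the robot,
    # since no social 'handshake' can be assumed.
    scale = min(1.0, inputs.separation_m / 1.0)   # slow down within 1 m
    if not inputs.human_is_attending:
        scale *= 0.5
    return scale

# Example: human nearby and looking away, so the robot creeps rather than
# moving at full speed.
print(speed_scale(SafetyInputs(contact_force_n=2.0, separation_m=0.6,
                               human_is_attending=False)))  # 0.3
```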



One of the aims of the Bristol Robotics Laboratory is to create assistive robotic devices that do not require expert skills to control; the interface must be easy and natural. However, the paradox is that the easier and more natural we want the interaction to feel, the more intelligent and sophisticated the machine must be. A significant challenge is, therefore, to create devices with social intelligence capable of interacting with the world and the people in it.



Of course, explicit language can be used, but it seems a great deal of 'back-channel' communication is employed too. One could make a case that these modalities provide the 'handshakes' needed for task synchrony and to confirm the partial but sufficient mental models we require of each other in an interaction, offering a pragmatic solution to the 'I know that you know that I know' expectation model. Such a model is vital for safe interaction. Imagine two people who don't share a common language trying to move a sofa up a stairwell.
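
A toy sketch of that handshake idea, under the assumption that each party signals readiness and then acknowledges the other's signal before the joint action begins, might look like the following. The states and messages are invented for illustration and are not drawn from any specific protocol.

```python
# Toy sketch: before a joint action (say, lifting a sofa together), each
# party signals readiness and acknowledges the other's signal, approximating
# 'I know that you know that I know'. States and signals are invented.

from enum import Enum, auto

class Phase(Enum):
    IDLE = auto()
    SIGNALLED = auto()      # I have signalled that I am ready
    CONFIRMED = auto()      # I have seen the partner's signal
    ACTING = auto()         # both acknowledgements observed: safe to act

class HandshakePartner:
    def __init__(self, name: str):
        self.name = name
        self.phase = Phase.IDLE

    def signal_ready(self) -> str:
        self.phase = Phase.SIGNALLED
        return f"{self.name}: ready"          # e.g. a nod, eye contact, 'OK?'

    def acknowledge(self) -> str:
        return f"{self.name}: ack"

    def observe(self, partner_message: str) -> None:
        # Seeing the partner's readiness (then their acknowledgement) advances
        # our model of their mental state; only then do we commit to acting.
        if self.phase is Phase.SIGNALLED and "ready" in partner_message:
            self.phase = Phase.CONFIRMED
        elif self.phase is Phase.CONFIRMED and "ack" in partner_message:
            self.phase = Phase.ACTING

# Usage: neither party lifts until both have signalled and acknowledged.
human, robot = HandshakePartner("human"), HandshakePartner("robot")
m1, m2 = human.signal_ready(), robot.signal_ready()
human.observe(m2); robot.observe(m1)          # each sees the other's signal
human.observe(robot.acknowledge()); robot.observe(human.acknowledge())
assert human.phase is Phase.ACTING and robot.phase is Phase.ACTING
```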



Robots are the sum of their many complex parts, executing their functions in harmonious synchrony, and as they become more sophisticated, greater demands are placed on processing, control and intra-subsystem communication. Underlying architectures that can support such bandwidth and, importantly, scalability will be a challenge for future engineers.
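
One common pattern for such architectures is a publish/subscribe message bus, in which subsystems (vision, speech, arm control and so on) exchange messages over named topics rather than calling each other directly. The minimal sketch below uses invented topic names and callbacks and is intended only to illustrate why that decoupling scales as subsystems are added.

```python
# Minimal publish/subscribe sketch: subsystems communicate over named topics,
# so publishers need not know which, or how many, subsystems are listening.
# Topic names and callbacks are invented for illustration.

from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        # Deliver the message to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(message)

# Usage: the arm controller reacts to gaze estimates without the gaze tracker
# knowing that the arm controller exists.
bus = MessageBus()
bus.subscribe("human/gaze", lambda msg: print("arm controller sees gaze:", msg))
bus.publish("human/gaze", {"attending": True, "target": "workpiece"})
```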



Research into safe human-robot interaction is therefore a multi-disciplinary activity, requiring skills and knowledge from a large number of disciplines beyond 'traditional' engineering (one naturally thinks of mechanical and electronic engineering), such as cognitive science, developmental anthropology, languages, materials science and neuroscience. Modern robotics requires an all-inclusive approach. It is perhaps not stretching the point too much to consider that the construction of the artificial informs us about what it means to be human.




Chris Melhuish is Professor of Robotics and Autonomous Systems at the University of Bristol and Professor of Intelligent Autonomous Systems at the University of the West of England. He is the director and a founding member of the Bristol Robotics Laboratory.