A robot in Cornell University’s Personal Robotics Lab has learned to predict human actions so that it can step in and offer help.
Understanding when and where to pour a drink, or knowing when to offer assistance opening a door, can be difficult for a robot because of the many variables it encounters while assessing the situation, but a team from Cornell believes it has created a solution.
Using a Microsoft Kinect 3D camera and a database of 3D videos, the Cornell robot identifies the activities it sees, considers what uses are possible with the objects in the scene and determines how those uses fit with the activities.
It then generates a set of possible continuations into the future – such as eating, drinking, cleaning, putting away – and chooses what it believes is the most probable. As the action continues, the robot constantly updates and refines its predictions.
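The loop described above — generate candidate continuations, rank them by probability, then refine the ranking as the action unfolds — can be sketched as a simple Bayesian update. This is a minimal, hypothetical illustration, not the Cornell system itself: the activity names and per-frame likelihood numbers are invented for the example.

```python
# Hypothetical sketch of the anticipation loop: keep a belief over
# candidate future activities and update it as new observations arrive.
CANDIDATES = ["eating", "drinking", "cleaning", "putting_away"]

def normalise(belief):
    """Rescale the belief so the probabilities sum to one."""
    total = sum(belief.values())
    return {activity: p / total for activity, p in belief.items()}

def update_belief(belief, likelihoods):
    """Bayesian update: weight each candidate by how well the newest
    observation fits it, then renormalise."""
    return normalise({a: belief[a] * likelihoods.get(a, 1e-6) for a in belief})

def most_probable(belief):
    """The continuation the robot would currently act on."""
    return max(belief, key=belief.get)

# Start with a uniform belief over the possible continuations.
belief = {a: 1.0 / len(CANDIDATES) for a in CANDIDATES}

# Simulated per-frame likelihoods (illustrative values): e.g. a hand
# moving toward a cup fits "drinking" better than the alternatives.
observations = [
    {"drinking": 0.6, "eating": 0.3, "cleaning": 0.05, "putting_away": 0.05},
    {"drinking": 0.7, "eating": 0.2, "cleaning": 0.05, "putting_away": 0.05},
]

# As the action continues, the belief is constantly updated and refined.
for obs in observations:
    belief = update_belief(belief, obs)

print(most_probable(belief))  # prints "drinking"
```

The design point is that the robot never commits to a single prediction: every new frame reweights all candidates, so an early wrong guess can be overturned as more of the action is observed.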
‘We extract the general principles of how people behave,’ said Ashutosh Saxena, Cornell professor of computer science and co-author of a new study tied to the research. ‘Drinking coffee is a big activity, but there are several parts to it.’
The robot builds a ‘vocabulary’ of such small parts that it can put together in various ways to recognise a variety of big activities, he explained.
In tests, the robot made correct predictions 82 per cent of the time when looking one second into the future, 71 per cent at three seconds and 57 per cent at 10 seconds.
‘Even though humans are predictable, they are only predictable part of the time,’ Saxena said in a statement. ‘The future would be to figure out how the robot plans its action. Right now we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond.’
Saxena will join Cornell graduate student Hema S. Koppula to present their research at the International Conference on Machine Learning, June 18-21 in Atlanta, and the Robotics: Science and Systems conference, June 24-28 in Berlin, Germany.