Video system enables robot to anticipate human behaviour
A robot in Cornell University’s Personal Robotics Lab has learned to predict human action in order to step in and offer to help.

Understanding when and where to pour a drink, or knowing when to offer assistance opening a door, can be difficult for a robot because of the many variables it encounters while assessing the situation, but a team from Cornell believes it has created a solution.
Using a Microsoft Kinect 3D camera and a database of 3D videos, the Cornell robot identifies the activities it sees, considers what uses are possible with the objects in the scene and determines how those uses fit with the activities.
It then generates a set of possible continuations into the future – such as eating, drinking, cleaning or putting away – and chooses what it believes is the most probable. As the action continues, the robot constantly updates and refines its predictions.
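The update loop described above can be sketched as a simple Bayesian filter over candidate future activities. This is an illustrative reconstruction only, not the Cornell system's actual code: all names (`CANDIDATES`, the per-frame likelihood values) are hypothetical, and the real system scores sampled 3D trajectories from Kinect video rather than hand-supplied numbers.

```python
# Hypothetical sketch of the anticipation loop: maintain beliefs over
# possible continuations of an activity and refine them as new frames
# arrive. Likelihood values here stand in for scores the real system
# would derive from observed human pose and object affordances.

CANDIDATES = ["eating", "drinking", "cleaning", "putting_away"]

def normalize(probs):
    """Rescale belief values so they sum to 1."""
    total = sum(probs.values())
    return {act: p / total for act, p in probs.items()}

def update_beliefs(beliefs, likelihoods):
    """Bayesian update: posterior is proportional to prior times likelihood."""
    posterior = {act: beliefs[act] * likelihoods.get(act, 1e-6)
                 for act in beliefs}
    return normalize(posterior)

# Start with a uniform prior over the candidate continuations.
beliefs = {act: 1.0 / len(CANDIDATES) for act in CANDIDATES}

# Each observed frame yields likelihoods for the candidates,
# e.g. a hand moving toward a cup favours "drinking".
observations = [
    {"drinking": 0.6, "eating": 0.3, "cleaning": 0.05, "putting_away": 0.05},
    {"drinking": 0.7, "eating": 0.2, "cleaning": 0.05, "putting_away": 0.05},
]

for likelihoods in observations:
    beliefs = update_beliefs(beliefs, likelihoods)
    best = max(beliefs, key=beliefs.get)
    # The robot would act on `best`, e.g. steadying a cup before a sip.
```

In this toy run the belief in "drinking" grows with each frame, mirroring how the robot's prediction sharpens as it watches an action unfold.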
‘We extract the general principles of how people behave,’ said Ashutosh Saxena, Cornell professor of computer science and co-author of a new study tied to the research. ‘Drinking coffee is a big activity, but there are several parts to it.’