AI gives robot a push to identify and remember objects

Artificial intelligence is being employed by UT Dallas computer scientists to help robots better identify and remember objects.

UT Dallas researchers in the Intelligent Robotics and Vision Lab developed a new approach to train Ramp, a Fetch Robotics mobile manipulator robot, to recognise objects through repeated interactions - University of Texas at Dallas

The new system has the robot push objects multiple times until a sequence of images has been collected; the system then segments every object across the sequence until the robot recognises them. According to UT Dallas, previous approaches relied on a single push or grasp by the robot to ‘learn’ an object.
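The team's code is not reproduced in this article, but the loop described above (push, capture an image, then segment across the whole sequence) can be sketched roughly as follows. All function names here are illustrative stubs, not the UT Dallas implementation:

```python
# Minimal sketch of the push-and-capture loop described above.
# push_object, capture_rgbd and segment_sequence are placeholder
# stubs; the real system's interfaces are not public in this article.
import numpy as np

NUM_PUSHES = 15  # the researchers report 15 to 20 pushes per object

def push_object() -> None:
    """Stub: command the arm to nudge the object slightly."""

def capture_rgbd(h: int = 480, w: int = 640) -> dict:
    """Stub: return one RGB-D frame (colour image plus depth map)."""
    return {"rgb": np.zeros((h, w, 3), np.uint8),
            "depth": np.zeros((h, w), np.float32)}

def segment_sequence(frames: list) -> np.ndarray:
    """Stub: assign every pixel an object ID. A real system could use
    motion between frames: pixels that move together across pushes are
    likely to belong to the same object."""
    h, w = frames[-1]["depth"].shape
    return np.zeros((h, w), np.int32)

frames = []
for _ in range(NUM_PUSHES):
    push_object()                  # interact with the scene
    frames.append(capture_rgbd())  # record how the scene changed

masks = segment_sequence(frames)   # one segmentation from many views
print(f"collected {len(frames)} frames, mask shape {masks.shape}")
```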

The team presented its research paper at the Robotics: Science and Systems conference July 10-14 in Daegu, South Korea.

“If you ask a robot to pick up the mug or bring you a bottle of water, the robot needs to recognise those objects,” said Dr Yu Xiang, senior author of the paper and assistant professor of computer science in the Erik Jonsson School of Engineering and Computer Science.

The UTD researchers’ technology is designed to help robots detect a wide variety of objects and to generalise across similar versions of common items, such as water bottles that come in different shapes and sizes.

Toy packages of common foods, including spaghetti, ketchup and carrots, are being used to train the lab robot, named Ramp.

Ramp is a Fetch Robotics mobile manipulator robot that stands on a round mobile platform and is fitted with a long mechanical arm with seven joints and a two-fingered end effector to grasp objects.


Xiang said robots learn to recognise items much as children learn to interact with toys.

“After pushing the object, the robot learns to recognise it,” Xiang said in a statement. “With that data, we train the AI model so the next time the robot sees the object, it does not need to push it again. By the second time it sees the object, it will just pick it up.”
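That two-phase workflow (push to learn, then recognise later without pushing) can be sketched as follows. The feature extraction and nearest-neighbour matching below are assumptions chosen for illustration, not the paper's actual model:

```python
# Hypothetical sketch of "push once, recognise later". The toy colour
# feature and nearest-neighbour lookup stand in for whatever learned
# representation the real AI model uses.
import numpy as np

memory: dict[str, np.ndarray] = {}  # object name -> stored feature

def extract_feature(rgb: np.ndarray) -> np.ndarray:
    """Stub: embed an object crop as a vector (a real system would
    likely use a trained neural network, not a mean colour)."""
    return rgb.astype(np.float32).mean(axis=(0, 1))

def learn_by_pushing(name: str, frames: list) -> None:
    """First encounter: segment via pushing, then store the feature."""
    memory[name] = np.mean([extract_feature(f) for f in frames], axis=0)

def recognise(rgb: np.ndarray) -> str:
    """Later encounter: match against memory, no push required."""
    feat = extract_feature(rgb)
    return min(memory, key=lambda k: np.linalg.norm(memory[k] - feat))

# First sighting: frames would come from the 15-20 pushes
learn_by_pushing("mug", [np.random.randint(0, 255, (64, 64, 3),
                                           dtype=np.uint8)])
# Second sighting: the robot just looks it up and picks it up
print(recognise(np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)))
```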

What is new about the researchers’ method is that the robot pushes each item 15 to 20 times, whereas previous interactive perception methods used only a single push. Xiang said the multiple pushes let the robot take more photos with its RGB-D camera, which includes a depth sensor, and so learn each item in greater detail, reducing the potential for mistakes.
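A toy calculation shows why the extra views help: if each frame's prediction errs independently, a majority vote over many frames errs far less often. The voting scheme below illustrates only this statistical benefit and is not the method in the paper:

```python
# Toy demonstration: accuracy of a majority vote over n noisy
# per-frame predictions, each wrong 20% of the time on its own.
from collections import Counter
import random

def predict(true_label: str, error_rate: float = 0.2) -> str:
    """Stub per-frame classifier that errs `error_rate` of the time."""
    return true_label if random.random() > error_rate else "wrong"

def vote(n_frames: int) -> str:
    """Majority vote over n independent per-frame predictions."""
    preds = [predict("mug") for _ in range(n_frames)]
    return Counter(preds).most_common(1)[0][0]

random.seed(0)
for n in (1, 5, 15):
    trials = [vote(n) for _ in range(1000)]
    acc = trials.count("mug") / len(trials)
    print(f"{n:2d} frames: accuracy {acc:.3f}")
# A single frame is right ~80% of the time; fifteen frames voting
# together are right in nearly every trial.
```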

The task of recognising, differentiating and remembering objects – segmentation – is one of the primary functions needed for robots to complete tasks.

“To the best of our knowledge, this is the first system that leverages long-term robot interaction for object segmentation,” said Xiang.

The next step for the researchers is to improve other functions, including planning and control, which could enable tasks such as sorting recycled materials.