Robots to mimic human dexterity for better grasp of objects

UK researchers will teach robots how to handle delicate and irregular objects, expanding their use within sectors such as the food and drink and consumer electronics industries.

Humans are adept at using their vision and sense of touch to adapt their grip on objects

The use of robots in global manufacturing has grown dramatically over the last decade. However, despite this rise, robots are still typically limited to sectors such as the car industry, where they are used to carry out simple, repetitive tasks with solid components in carefully controlled conditions.

Now a team of UK researchers, funded by EPSRC, is hoping to teach robots to handle even delicate and irregular objects, expanding their use within sectors such as the food and drink and consumer electronics industries.

The researchers, led by Dr Lorenzo Jamone at Queen Mary University of London, are developing a system based on virtual reality technologies and smart wearable devices to allow robots to learn manipulation techniques simply by mimicking human demonstrators.

“Robots in industry today mainly deal with simple objects, so for example objects that are rigid, or those that are all identical, which allows them to use an analytical model,” said Jamone. “Alternatively, robots are used for simple tasks such as picking and placing an object from the same position to another known position each time,” he said.

Humans, in contrast, are adept at using their vision and sense of touch to adapt their grip when confronted with objects of a different shape or texture, or to perform different operations.

For robots to learn this skill from humans, the researchers plan to use teleoperation technologies, allowing users to move a robotic hand simply by moving their own. The human demonstrators will wear gloves fitted with sensors, allowing the robot hand to detect and replicate their movements.
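The article does not detail how glove readings are retargeted onto the robot, but a minimal sketch of the idea, assuming one flex sensor per finger, might map each raw reading linearly onto a robot joint angle. The sensor range, joint limits and function names below are invented for illustration:

    # Illustrative sketch only: maps hypothetical glove flex-sensor readings
    # (raw 0-1023 ADC values) onto robot finger joint angles in radians.
    # The calibration range and joint limits below are invented for the example.

    RAW_MIN, RAW_MAX = 50, 980          # assumed usable range of each flex sensor
    JOINT_MIN, JOINT_MAX = 0.0, 1.57    # assumed joint travel: open hand to curled

    def raw_to_angle(raw):
        """Linearly map one raw sensor reading to a joint angle, clamped to limits."""
        t = (raw - RAW_MIN) / (RAW_MAX - RAW_MIN)
        t = max(0.0, min(1.0, t))       # clamp so sensor noise cannot exceed limits
        return JOINT_MIN + t * (JOINT_MAX - JOINT_MIN)

    def glove_to_robot(raw_readings):
        """Convert per-finger sensor readings into joint-angle commands."""
        return [raw_to_angle(r) for r in raw_readings]

    print(glove_to_robot([50, 500, 980, 700, 300]))   # e.g. a half-curled hand

A real system would add per-user calibration and filtering, but the retargeting principle is the same.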

At the same time, the human will receive haptic feedback on what the robot is touching at any given moment.
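The return channel could follow the same pattern in reverse. A minimal sketch, assuming fingertip force sensors on the robot and vibration motors in the glove; both classes here are stand-ins invented for the example, not the project’s actual hardware interfaces:

    # Illustrative sketch: relay robot fingertip pressure back to the glove as
    # vibration. Sensor and actuator classes are invented stand-ins.

    MAX_FORCE = 10.0   # assumed full-scale fingertip contact force, in newtons

    class FakeFingertip:
        """Stand-in tactile sensor that reports a fixed contact force."""
        def __init__(self, force):
            self.force = force
        def read_force(self):
            return self.force

    class FakeVibrationMotor:
        """Stand-in glove actuator; just prints the commanded intensity."""
        def set_intensity(self, level):
            print(f"vibrate at {level:.2f}")

    def haptic_step(fingertips, motors):
        """One feedback cycle: scale each contact force to a 0-1 intensity."""
        for sensor, motor in zip(fingertips, motors):
            level = max(0.0, min(1.0, sensor.read_force() / MAX_FORCE))
            motor.set_intensity(level)

    haptic_step([FakeFingertip(2.5), FakeFingertip(9.0)],
                [FakeVibrationMotor(), FakeVibrationMotor()])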

Meanwhile, virtual reality goggles will allow the human to “see” through the robot’s 3D vision system.

Human demonstrators will reduce the amount of time it takes for robots to learn manipulation skills

“The system will try to convey all of the sensory information to which the robot has access directly to the human, and give them the ability to move the robot,” said Jamone. “In this way they can transfer their intelligence – in terms of what motions should be used to achieve certain manipulations – to the robot.”

The robot will also be equipped with artificial intelligence algorithms to allow it to learn from the demonstrations, and from its own sensory information, he said.
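The article doesn’t say which algorithms the team will use, but a common baseline for learning from demonstration is behaviour cloning: logging (observation, action) pairs during teleoperation and fitting a policy to them. A minimal nearest-neighbour version of that idea, with invented data:

    # Illustrative behaviour-cloning sketch: the "policy" replays the logged
    # action whose observation is closest to the current one. The observation
    # and action vectors are invented for the example.

    def nearest_demo_action(observation, demos):
        """Return the demonstrated action with the closest observation."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, action = min(demos, key=lambda pair: dist(pair[0], observation))
        return action

    # (observation, action) pairs logged during teleoperation
    demos = [
        ([0.1, 0.9], [0.2, 0.2, 0.2]),   # light contact -> gentle finger close
        ([0.8, 0.3], [1.0, 1.0, 1.0]),   # firm contact  -> full grip
    ]
    print(nearest_demo_action([0.15, 0.85], demos))   # -> [0.2, 0.2, 0.2]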

The idea is that using humans as demonstrators will reduce the time it takes for robots to learn these manipulation skills by themselves, by providing “hints” that cut down the number of possible motions they need to try out.
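Read in learning terms, a “hint” narrows the search space: rather than exploring the full range of possible motions, the robot tries variations close to what the human showed it. A toy sketch of that, with invented numbers:

    # Toy sketch of demonstration-guided exploration: candidate grip forces are
    # sampled in a narrow band around the demonstrated value instead of across
    # the whole range. All numbers are invented for the example.
    import random

    FORCE_RANGE = (0.0, 20.0)   # assumed span of grip forces the hand can apply (N)

    def candidates(demo_force, n=5, spread=1.0):
        """Sample trial forces clustered around the human demonstration."""
        lo, hi = FORCE_RANGE
        return [min(hi, max(lo, random.gauss(demo_force, spread))) for _ in range(n)]

    print(candidates(demo_force=6.0))   # trials cluster near the demonstrated 6 N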

The project, which includes robotic hand developer Shadow Robot Company, AI specialist DeepMind Technologies, and online grocer Ocado, should ultimately allow robots to develop intrinsic strategies for handling items such as food, said Jamone.

“We might teach the robot how to pick a banana and an apple, for example, and then it will be able to generalise that knowledge to decide how to pick up pears or avocados, because it will have learnt some consistent aspects of the fruits’ structure,” he said.
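One toy way to picture that kind of generalisation is to represent each fruit by a few features, such as size and firmness, and reuse the grip of the most similar known fruit; all feature values and forces below are invented for illustration:

    # Toy generalisation sketch: the grip force for an unseen fruit is borrowed
    # from the known fruit with the most similar features. Feature values
    # (length in cm, firmness 0-1) and forces are invented for the example.

    KNOWN = {
        "banana": ((18.0, 0.4), 3.0),   # (features, grip force in newtons)
        "apple":  ((8.0, 0.9), 8.0),
    }

    def grip_force(features):
        """Reuse the force of the known fruit whose features are closest."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        name = min(KNOWN, key=lambda k: dist(KNOWN[k][0], features))
        return KNOWN[name][1], name

    print(grip_force((9.5, 0.8)))   # a pear-sized fruit -> apple-like grip, 8.0 N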

Although the project is focused on manipulating food items, the approach could ultimately help to develop adaptable robots for other applications, such as nuclear decommissioning, remote infrastructure inspection, and space or deep sea exploration.

“These are all situations where a robot might need to manipulate objects, but you don’t know what kind of objects it will face, or their properties, so you need a way for the robot to learn from tactile and visual feedback, and perhaps remotely from a human,” said Jamone.
