Robot shows ‘close to human-level dexterity’

A bimanual robot developed in Bristol displays tactile sensitivity approaching human-level dexterity, using AI to inform its actions.

Dual arm robot holding crisp - Yijiong Lin

Designed by scientists at the University of Bristol and based at the Bristol Robotics Laboratory, the new Bi-Touch system allows robots to carry out manual tasks by sensing what to do from a digital helper.

The findings, published in IEEE Robotics and Automation Letters, show how an AI agent interprets its environment through tactile and proprioceptive feedback and then controls the robot's behaviour, enabling precise sensing, gentle interaction and effective object manipulation to accomplish robotic tasks.

This development could be applied to tasks such as fruit picking and could eventually be used to recreate a sense of touch in artificial limbs.


In a statement, lead author Yijiong Lin from the Faculty of Engineering said: “With our Bi-Touch system, we can easily train AI agents in a virtual world within a couple of hours to achieve bimanual tasks that are tailored towards the touch. And more importantly, we can directly apply these agents from the virtual world to the real world without further training.

“The tactile bimanual agent can solve tasks even under unexpected perturbations and manipulate delicate objects in a gentle way.”

Bimanual manipulation with tactile feedback will be key to human-level robot dexterity, but the topic is less explored than single-arm settings, partly because of the limited availability of suitable hardware and the complexity of designing effective controllers for tasks with relatively large state-action spaces.

Recent advances in AI and robotic tactile sensing enabled the team to develop a tactile dual-arm robotic system. To do this, the researchers built a simulation that contained two robot arms equipped with tactile sensors. They then designed reward functions and a goal-update mechanism that could encourage the robot agents to learn to achieve the bimanual tasks and developed a real-world tactile dual-arm robot system to which they could directly apply the agent.
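As a rough illustration of how a shaped reward and a goal-update mechanism can encourage gentle, goal-directed behaviour, here is a minimal sketch in Python. The function names, thresholds and structure are assumptions made for illustration only and are not taken from the Bi-Touch code.

```python
import numpy as np

# Hypothetical reward and goal-update for a bimanual lift task.
# All names and thresholds are illustrative assumptions.

def reward(object_pos, goal_pos, left_force, right_force,
           force_limit=5.0, success_radius=0.01):
    """Encourage reaching the goal while keeping tactile contact gentle."""
    dist = np.linalg.norm(object_pos - goal_pos)
    r = -dist                                   # dense term: get closer to the goal

    for f in (left_force, right_force):         # penalise squeezing too hard
        if f > force_limit:
            r -= 0.1 * (f - force_limit)

    success = dist < success_radius             # sparse bonus on reaching the goal
    if success:
        r += 1.0
    return r, success


def update_goal(goal_pos, final_goal, step=0.02):
    """Move an intermediate goal toward the final target once reached,
    so the agent learns the task in stages."""
    direction = final_goal - goal_pos
    dist = np.linalg.norm(direction)
    if dist < 1e-6:
        return final_goal
    return goal_pos + direction / dist * min(step, dist)


# Example usage with made-up values
r, ok = reward(np.array([0.0, 0.0, 0.10]), np.array([0.0, 0.0, 0.15]), 3.2, 6.1)
print(r, ok)
```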

The robot learns bimanual skills through Deep Reinforcement Learning, one of the most advanced techniques in the field of robot learning. It is designed to teach robots to do things by letting them learn from trial and error.

For robotic manipulation, the robot learns to make decisions by attempting various behaviours to achieve designated tasks, such as lifting objects without dropping or breaking them. When it succeeds, it gets a reward, and when it fails, it learns what not to do. The AI agent is visually 'blind', relying only on proprioceptive feedback (a body's ability to sense movement, action and location) and tactile feedback.
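To make that trial-and-error loop concrete, the sketch below shows how a visually blind agent might be trained from tactile and proprioceptive observations alone. The environment, observation shapes and random stand-in policy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy stand-in environment: the policy sees only touch and proprioception, no camera.
class TactileBimanualEnv:
    def reset(self):
        return {"tactile": np.zeros((2, 32, 32)),   # two tactile sensor images
                "proprio": np.zeros(14)}            # joint positions of both arms
    def step(self, action):
        obs = self.reset()                          # placeholder dynamics
        reward, done = 0.0, False
        return obs, reward, done, {}

def flatten(obs):
    return np.concatenate([obs["tactile"].ravel(), obs["proprio"]])

# Trial-and-error outline: act, observe the reward, update the policy.
env = TactileBimanualEnv()
for episode in range(5):
    obs = env.reset()
    for t in range(100):
        state = flatten(obs)
        action = np.random.uniform(-1, 1, size=12)  # random policy as a stand-in
        next_obs, reward, done, _ = env.step(action)
        # A real Deep-RL agent (e.g. SAC or PPO) would store (state, action,
        # reward, next state) in a replay buffer and update its networks here.
        obs = next_obs
        if done:
            break
```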

Co-author Professor Nathan Lepora said: “Our Bi-Touch system showcases a promising approach with affordable software and hardware for learning bimanual behaviours with touch in simulation, which can be directly applied to the real world. Our developed tactile dual-arm robot simulation allows further research on more different tasks as the code will be open-source, which is ideal for developing other downstream tasks.”