The gripper consists of three fingers, each with three pneumatically actuated chambers that give the finger multiple degrees of freedom when air pressure is applied. In addition, each finger is covered with a smart sensing skin made of silicone rubber embedded with conducting carbon nanotubes; sheets of the rubber are slipped over the flexible fingers to cover them like skin.
As the fingers flex, the conductivity of the nanotubes changes, allowing the skin to detect and record when the fingers are moving and coming into contact with an object. The data the sensors generate is transmitted to a control board, which collates the information to create a 3D model of the object being gripped. According to the UC San Diego team, the process is similar to a CT scan, in which 2D image slices are combined to form a 3D picture.
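To illustrate the slice-stacking idea, here is a minimal sketch of how per-chamber conductivity readings might be turned into a rough 3D point cloud. The finger geometry, the linear conductivity-to-bend calibration and the function names are all illustrative assumptions, not the UC San Diego team's actual pipeline.

```python
"""Minimal sketch of CT-style reconstruction from finger-skin sensors.

Assumptions (not from the article): 3 fingers x 3 chambers, each chamber
reporting a normalised conductivity change; segment length, bend limits
and finger spacing are invented for illustration.
"""
import math

N_FINGERS = 3
SEGMENT_LEN = 0.02  # metres per chamber segment (assumed)

def bend_angles_from_conductivity(readings):
    """Map normalised conductivity changes (0..1) to bend angles in
    radians, assuming a linear calibration for illustration."""
    max_bend = math.radians(60)  # assumed per-segment bend limit
    return [r * max_bend for r in readings]

def fingertip_contact_point(finger_index, angles):
    """Forward kinematics for one finger: chain the segment bends in the
    finger's plane, then rotate into the gripper frame."""
    x = z = theta = 0.0
    for a in angles:
        theta += a
        x += SEGMENT_LEN * math.cos(theta)
        z += SEGMENT_LEN * math.sin(theta)
    # Fingers assumed evenly spaced 120 degrees apart around the object.
    phi = 2 * math.pi * finger_index / N_FINGERS
    return (x * math.cos(phi), x * math.sin(phi), z)

def reconstruct(grips):
    """Stack per-grip 'slices' of contact points into one point cloud,
    loosely analogous to assembling CT image slices into a 3D picture."""
    cloud = []
    for per_finger_readings in grips:
        for i, readings in enumerate(per_finger_readings):
            angles = bend_angles_from_conductivity(readings)
            cloud.append(fingertip_contact_point(i, angles))
    return cloud

if __name__ == "__main__":
    # Two simulated grips of the same object at different closures.
    demo_grips = [
        [[0.2, 0.3, 0.4], [0.25, 0.3, 0.35], [0.2, 0.35, 0.4]],
        [[0.3, 0.4, 0.5], [0.35, 0.4, 0.45], [0.3, 0.45, 0.5]],
    ]
    for point in reconstruct(demo_grips):
        print("contact point: (%.3f, %.3f, %.3f)" % point)
```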
Researchers tested the gripper on an industrial Fetch Robotics robot and demonstrated that it could pick up, manipulate and model a wide range of objects, from light bulbs to screwdrivers. Because the device uses its embedded sensors, rather than cameras, to build 3D models of the objects it grips, it can operate in low-light and low-visibility conditions.
"We designed the device to mimic what happens when you reach into your pocket and feel for your keys," said Michael T Tolley, a roboticist at UC San Diego’s Jacobs School of Engineering who led the research team.
The next stage of development will see machine learning and artificial intelligence incorporated into the data processing, allowing the robotic gripper to identify the objects it is manipulating instead of simply modelling them. The researchers are also investigating 3D printing the gripper's fingers to make them more durable.
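As a hedged sketch of what that identification step might look like, the toy classifier below labels a grip from a fixed-length vector of skin-sensor readings using a simple nearest-centroid rule. The feature layout, example values and object labels are invented for illustration and are not the team's published method.

```python
"""Toy nearest-centroid classifier over skin-sensor feature vectors.
Everything here (features, labels, classification rule) is assumed."""
import math

def centroid(vectors):
    """Per-dimension mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labelled_examples):
    """Build one centroid per object class from (features, label) pairs."""
    by_label = {}
    for features, label in labelled_examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vs) for label, vs in by_label.items()}

def identify(model, features):
    """Return the label whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(model[label], features))

if __name__ == "__main__":
    # Hypothetical 9-value grips: one reading per chamber (3 fingers x 3).
    examples = [
        ([0.2, 0.3, 0.4, 0.25, 0.3, 0.35, 0.2, 0.35, 0.4], "light bulb"),
        ([0.6, 0.7, 0.8, 0.65, 0.7, 0.75, 0.6, 0.75, 0.8], "screwdriver"),
    ]
    model = train(examples)
    grip = [0.22, 0.31, 0.41, 0.26, 0.3, 0.36, 0.21, 0.36, 0.41]
    print(identify(model, grip))  # prints "light bulb"
```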