Shadows help robot AI gauge human touch

Researchers in the US have equipped a robot with an AI-driven vision system that enables it to recognise different types of touch via shadows.

The team from Cornell University installed a USB camera inside a soft, deformable robot; the camera captures the shadows that hand gestures cast on the robot’s skin, and machine-learning software classifies them. Known as ShadowSense, the technology evolved from a project to create inflatable robots that could guide people to safety during emergency evacuations – for example through a smoke-filled building, where the robot could detect the touch of a hand and lead the person to an exit.

Rather than installing a large number of contact sensors – which would add weight and complex wiring, and would be difficult to embed in a deforming skin – the Cornell team took a counterintuitive approach and looked instead to computer vision.

“By placing a camera inside the robot, we can infer how the person is touching it and what the person’s intent is just by looking at the shadow images,” said lead author of the research paper Yuhan Hu, a doctoral student who works at the university’s Human-Robot Collaboration and Companionship Lab.

“We think there is interesting potential there, because there are lots of social robots that are not able to detect touch gestures.”

The prototype robot consists of a soft inflatable bladder of nylon skin stretched around a cylindrical skeleton just over a metre high, mounted on a mobile base. Under the robot’s skin is a USB camera, which connects to a laptop. The researchers developed a neural-network-based algorithm that uses previously recorded training data to distinguish between the shadows of six touch gestures – touching with a palm, punching, touching with two hands, hugging, pointing and not touching at all – with an accuracy of 87.5 to 96 per cent, depending on the lighting.
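The pipeline described above – an internal camera image, reduced to a shadow silhouette, then classified into one of six gestures – can be sketched in miniature. The paper’s classifier is a neural network trained on recorded shadow footage; as a lightweight stand-in, this sketch uses a nearest-centroid classifier over thresholded shadow masks, and the gesture names, frame size and synthetic shadow regions are all illustrative assumptions, not details from the research:

```python
import numpy as np

# The six gesture classes reported in the paper.
GESTURES = ["palm", "punch", "two_hands", "hug", "point", "no_touch"]

def preprocess(frame):
    """Threshold a grayscale frame into a binary shadow mask and flatten it."""
    mask = (frame < 128).astype(np.float32)  # dark pixels = shadow
    return mask.ravel()

class ShadowGestureClassifier:
    """Nearest-centroid stand-in for the paper's neural-network classifier."""

    def fit(self, frames, labels):
        X = np.stack([preprocess(f) for f in frames])
        y = np.asarray(labels)
        # One mean shadow mask (centroid) per gesture class.
        self.centroids = np.stack(
            [X[y == i].mean(axis=0) for i in range(len(GESTURES))]
        )
        return self

    def predict(self, frame):
        x = preprocess(frame)
        dists = np.linalg.norm(self.centroids - x, axis=1)
        return GESTURES[int(np.argmin(dists))]

# Synthetic training data: each gesture casts a shadow in a distinct
# region of a 32x32 frame ("no_touch" casts no shadow at all).
rng = np.random.default_rng(0)

def synth(region, n=5):
    frames = []
    r0, r1, c0, c1 = region
    for _ in range(n):
        f = np.full((32, 32), 200, dtype=np.uint8)       # bright background
        f[r0:r1, c0:c1] = rng.integers(0, 60, (r1 - r0, c1 - c0))  # dark shadow
        frames.append(f)
    return frames

regions = [(0, 8, 0, 8), (0, 8, 24, 32), (12, 20, 0, 16),
           (12, 20, 16, 32), (24, 32, 0, 8), (0, 0, 0, 0)]
frames, labels = [], []
for i, region in enumerate(regions):
    frames += synth(region)
    labels += [i] * 5

clf = ShadowGestureClassifier().fit(frames, labels)
print(clf.predict(frames[0]))  # -> palm
```

A real system would replace the synthetic frames with images from the internal USB camera and the centroid matching with the trained network, but the capture-preprocess-classify structure is the same.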

The system can be programmed to respond to certain touches and gestures, such as rolling away or issuing a message through a loudspeaker, and the robot’s skin also has the potential to be upgraded to an interactive screen. Using shadows and touch as the means of interaction also addresses some of the privacy concerns around voice and facial recognition.
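Programming responses to recognised gestures amounts to mapping classifier output onto robot actions. A minimal sketch, in which every gesture name and response is a hypothetical illustration rather than behaviour described in the research:

```python
# Hypothetical gesture-to-action table; names are illustrative assumptions.
RESPONSES = {
    "palm": "greet_via_loudspeaker",
    "punch": "roll_away",
    "two_hands": "stop_moving",
    "hug": "stay_still",
    "point": "turn_toward_pointer",
    "no_touch": "idle",
}

def respond(gesture: str) -> str:
    """Map a classified touch gesture to a robot action; unknown input is ignored."""
    return RESPONSES.get(gesture, "idle")

print(respond("punch"))  # -> roll_away
```

Keeping the mapping in a plain table like this makes the robot’s behaviour easy to reconfigure without retraining the classifier.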

“If the robot can only see you in the form of your shadow, it can detect what you’re doing without taking high fidelity images of your appearance,” Hu said. “That gives you a physical filter and protection and provides psychological comfort.”