Researchers at UCLA have designed a glove-like device that can translate American Sign Language into English speech in real time through a smartphone app.
Their research is published in the journal Nature Electronics.
“Our hope is that this opens up an easy way for people who use sign language to communicate directly with non-signers without needing someone else to translate for them,” said Jun Chen, an assistant professor of bioengineering at the UCLA Samueli School of Engineering and the principal investigator on the research. “In addition, we hope it can help more people learn sign language themselves.”
The system consists of a pair of gloves with thin, stretchable sensors running the length of each of the five digits. The sensors, made from electrically conducting yarns, pick up the hand motions and finger placements that stand for individual letters, numbers, words, and phrases.
According to UCLA, the device then turns the finger movements into electrical signals, which are sent to a small circuit board worn on the wrist. The board transmits those signals to a smartphone that translates them into spoken words at a rate of about one word per second.
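As a rough illustration of that pipeline, the sketch below shows how per-finger sensor readings might be digitized and framed on a wrist board before being handed off to the phone. The function names, value ranges, and packet format are invented for illustration; the article does not describe the actual firmware or data format.

```python
# Illustrative sketch only: sensor names, normalization, and the byte
# frame are assumptions, not details from the UCLA paper.

def read_finger_sensors():
    """Stand-in for sampling the five yarn-based strain sensors (one per digit)."""
    # Real hardware would return analog resistance or voltage values;
    # here we fake normalized readings for thumb through pinky.
    return [0.12, 0.87, 0.90, 0.15, 0.10]

def digitize(readings, levels=256):
    """Convert normalized analog readings to 8-bit values for transmission."""
    return [min(levels - 1, int(r * levels)) for r in readings]

def frame_packet(sample_id, digitized):
    """Pack one sample into a simple byte frame the wrist board could send."""
    return bytes([sample_id & 0xFF] + digitized)

packet = frame_packet(1, digitize(read_finger_sensors()))
```

In a real device the framing and transport (e.g. Bluetooth Low Energy) would be considerably more involved, but the division of labor is the same: the wrist board only samples and forwards; the phone does the translation.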
Because facial expressions are part of American Sign Language, the team also placed adhesive sensors on testers' faces, between the eyebrows and on one side of the mouth, to pick them up.
Previous wearable systems that offered translation from American Sign Language were limited by bulky and heavy device designs or were uncomfortable to wear, Chen said in a statement.
The device is made from lightweight, inexpensive yet durable stretchable polymers, and its electronic sensors are similarly flexible and cheap.
In testing the device, the researchers worked with four people who are deaf and use American Sign Language. The wearers repeated each hand gesture 15 times. A custom machine-learning algorithm turned these gestures into the letters, numbers, and words they represented.
By analysing 660 acquired sign-language hand-gesture patterns, the team demonstrated a recognition rate of up to 98.63 per cent and a recognition time of under one second.
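The classification step can be sketched in miniature. The snippet below trains a simple nearest-centroid classifier on synthetic five-finger "gesture" vectors, mimicking the study's setup of repeating each gesture 15 times; it is only a sketch of the general approach, since the article does not detail the paper's actual machine-learning algorithm, features, or data.

```python
# Sketch of gesture classification with a nearest-centroid classifier.
# All data here is synthetic; the real study used a custom algorithm
# trained on recordings from four deaf ASL signers.
import random

random.seed(0)

# Each "gesture" is a 5-dimensional vector of finger-sensor readings.
GESTURES = {"A": [0.9, 0.1, 0.1, 0.1, 0.1],
            "B": [0.1, 0.9, 0.9, 0.9, 0.9],
            "C": [0.5, 0.5, 0.5, 0.5, 0.5]}

def sample(center, noise=0.05):
    """One noisy repetition of a gesture."""
    return [c + random.uniform(-noise, noise) for c in center]

# Fifteen repetitions per gesture, as in the study's protocol.
train = [(label, sample(center))
         for label, center in GESTURES.items() for _ in range(15)]

# "Training": average the repetitions of each gesture into a centroid.
centroids = {}
for label in GESTURES:
    reps = [vec for lbl, vec in train if lbl == label]
    centroids[label] = [sum(col) / len(reps) for col in zip(*reps)]

def classify(vec):
    """Assign a new reading to the gesture with the nearest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(vec, centroids[lbl]))

# Evaluate on fresh held-out samples.
correct = sum(classify(sample(GESTURES[lbl])) == lbl
              for lbl in GESTURES for _ in range(20))
accuracy = correct / 60
```

With well-separated gestures and low sensor noise, even this toy classifier recognizes every held-out sample; distinguishing visually similar ASL signs in practice is what demands the more sophisticated learning the researchers describe.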