Adaptable sign language

Researchers in Spain have developed a visual-interpretation system that allows the deaf and hard of hearing to communicate more effectively.

Spanish sign language is used by over 100,000 people with hearing impairments and is made up of hundreds of signs.

Sergio Escalera, Petia Radeva and Jordi Vitrià from the Computer Vision Centre at the Universitat Autònoma de Barcelona (CVC-UAB) selected over 20 of these signs to develop a visual-interpretation system that allows deaf people to carry out consultations in the language they commonly use.

Signs can vary slightly from one user to another. The project researchers took this into account during trials carried out with different people, helping the system ‘become familiarised’ with this variability. The signs recognised by the system were selected so that deaf people can maintain a basic conversation, including asking for help or directions.
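The article does not detail the training procedure, but the idea of familiarising a recogniser with signer-to-signer variability can be sketched in a few lines: collect examples of each sign from several people and evaluate by holding out entire signers, so the model is tested on people it has never seen. The features, classifier and dataset sizes below are illustrative placeholders, not those used at CVC-UAB.

# Illustrative sketch only: random placeholder features stand in for the
# real hand/arm descriptors; the point is the signer-aware evaluation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_signs, n_signers, samples_each, n_features = 20, 8, 15, 64

X = rng.normal(size=(n_signs * n_signers * samples_each, n_features))
y = np.repeat(np.arange(n_signs), n_signers * samples_each)               # which sign
signer = np.tile(np.repeat(np.arange(n_signers), samples_each), n_signs)  # who performed it

# Holding out whole signers during cross-validation checks that the model
# generalises to people it has never seen, not just to new clips of known signers.
scores = cross_val_score(SVC(kernel="rbf"), X, y,
                         groups=signer, cv=GroupKFold(n_splits=4))
print("held-out-signer accuracy:", scores.mean())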

‘For them it is a non-artificial way of communicating and at the same time they can engage with people who do not speak sign language, since the system translates the symbols into words in real time,’ said Escalera.

The hardware includes a video camera that records image sequences when it detects the presence of a user wanting to make a consultation. A computer vision and machine-learning system detects face, hand and arm movements and tracks their displacement across the screen, then feeds these into a classification system that matches each movement to the word associated with the sign.
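As a rough illustration of this kind of pipeline (the detector, features and classifier below are generic computer-vision stand-ins, not the CVC-UAB implementation), each recorded sequence could be reduced to a motion descriptor and passed to a classifier that returns the associated word.

# Rough, generic sketch of a sequence-to-word pipeline; the detector,
# features and classifier are stand-ins, not the project's actual method.
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def sequence_features(frames):
    """Summarise one recorded image sequence as a single feature vector."""
    centroids = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, 1.1, 5)
        if len(faces):
            x, y, w, h = faces[0]
            centroids.append((x + w / 2, y + h / 2))
    if not centroids:
        return np.zeros(4)
    c = np.array(centroids, dtype=float)
    # Mean position plus overall displacement of the tracked region.
    return np.concatenate([c.mean(axis=0), c[-1] - c[0]])

def recognise(frames, model, vocabulary):
    """Map a recorded sequence to the word associated with the predicted sign."""
    return vocabulary[model.predict([sequence_features(frames)])[0]]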

The system, intended to be incorporated into information points, is claimed to adapt to any other sign language, since the methodology used is general; it would only need to be reprogrammed with the signs used in that specific language. The number of signs the system can recognise is also said to be scalable, although the researchers admit that each additional sign makes the set harder to differentiate.
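In practice, that reprogramming would amount to retraining the same recognition pipeline on a different labelled sign set; the function and dataset names below are hypothetical, purely to illustrate that only the data and vocabulary change.

# Hypothetical retraining step: the pipeline is unchanged, only the labelled
# examples and the word vocabulary differ for another sign language.
from sklearn.svm import SVC

def train_recogniser(feature_vectors, sign_labels):
    """Fit a classifier on whatever sign set a deployment needs."""
    model = SVC(kernel="rbf")
    model.fit(feature_vectors, sign_labels)
    return model

# e.g. model = train_recogniser(lsc_features, lsc_labels)  # hypothetical dataset for another sign language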

Accuracy

Applications such as the one developed by the CVC-UAB researchers require extreme precision in the identification phase and are very difficult to configure, given that the conditions in which they are used involve changes in light and shadow, different physiognomies (facial features and expressions) and varying speeds at which the signs are formed.

Similar projects have been developed in the past, but most of them failed or were not reliable enough because of the complexity of the variables in uncontrolled surroundings. For this project to succeed, it was necessary to establish a fixed point at which individuals form the signs and to avoid having different focal points when recording.

The system was recently presented as a prototype in the final phase of a European project, and the researchers are already working on new phases, such as using two cameras to recognise even more complex signs and to complement the information with facial characteristics. To carry this out, the researchers worked in close collaboration with several members of the Catalan Federation of Deaf People (FECOSA).