Avatars in school to help hearing-impaired students

Two separate research projects in the US have led to the development of advanced educational avatars designed to aid hearing-impaired students.

One project, led by Ron Cole at the University of Colorado, has resulted in the creation of Baldi, a 3D computerised tutor that may enable deaf children to develop their conversational skills. Meanwhile, Edward Sims and Carol Wideman of VCom3D have developed a pool of Internet-enabled SigningAvatars that translate English into sign language.

Both projects were funded by the US National Science Foundation (NSF).

Baldi is an animated instructor that converses via the latest technologies for speech recognition and generation, showing students how to understand and produce spoken language.

Baldi’s 3D animation (including articulated mouth, teeth and tongue) is said to produce facial movements that are synchronised to its audible speech, which can be either a recorded human voice or computer-generated sounds.
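The kind of audio-visual synchronisation described above is commonly achieved by mapping speech sounds (phonemes) to mouth shapes (visemes) on a timeline. The sketch below illustrates that general idea only; the phoneme symbols, viseme names and timings are hypothetical and are not taken from the Baldi system.

```python
# Illustrative phoneme-to-viseme mapping for lip-synchronised animation.
# Symbols and viseme names here are made up for demonstration.
PHONEME_TO_VISEME = {
    "AA": "open",         # as in "father"
    "B":  "lips_closed",  # bilabial stop
    "F":  "lip_teeth",    # labiodental
    "IY": "spread",       # as in "see"
    "TH": "tongue_teeth", # interdental
}

def viseme_track(timed_phonemes):
    """Turn (phoneme, start_seconds) pairs into (viseme, start_seconds)
    keyframes that an animation system could interpolate between."""
    return [(PHONEME_TO_VISEME.get(p, "neutral"), t)
            for p, t in timed_phonemes]

track = viseme_track([("B", 0.00), ("AA", 0.08), ("TH", 0.25)])
print(track)
# [('lips_closed', 0.0), ('open', 0.08), ('tongue_teeth', 0.25)]
```

A real system would also blend between keyframes and model coarticulation (neighbouring sounds influencing each mouth shape), which is part of what makes Baldi's articulated mouth, teeth and tongue useful to lip-readers.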

As a virtual being, Baldi is said to allow students to study the ways that subtle facial movements produce desired sounds.

‘This project is the first to integrate emerging language technologies to create an animated conversational agent, and to apply this agent to learning and language training,’ said Cole.

To create Baldi’s speech recognition capabilities, the researchers compiled a database of speech from more than 1,000 children. Those samples then shaped an algorithm for recognising fine details in the children’s speech. Additionally, the animated speech produced by Baldi from textual input is said to be accurate enough to be intelligible to users who read lips.

The SigningAvatars developed by Sims and Wideman translate English into sign language to help deaf and hard-of-hearing children develop language and reading skills.

The SigningAvatars include digital teenagers with unique personalities, such as Andy and 13-year-old Tonya, as well as a cyber-lizard named Pete.

Besides translating printed text, they ‘tell’ stories, ask follow-up questions and hold interactive conversations with viewers. Their vocabulary is said to include more than 3,500 words in English and in Conceptually Accurate Signed English, which includes elements of American Sign Language.

The characters interpret words, sentences and complicated concepts into sign language, combining signing, gestures and body language to simulate natural communication.
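At its simplest, translating printed English into a sign vocabulary can be thought of as mapping words to sign glosses, with fingerspelling as a fallback for words outside the vocabulary. The toy sketch below shows only that basic idea; the vocabulary, gloss names and dropped-word rule are invented for illustration, whereas the actual SigningAvatars draw on more than 3,500 words and combine signs with gestures and body language.

```python
# Toy dictionary-based English-to-sign-gloss translation.
# Vocabulary and glosses are hypothetical, not from VCom3D's system.
SIGN_DICTIONARY = {
    "hello": "HELLO",
    "my": "MY",
    "name": "NAME",
    "is": None,  # some function words are often dropped in sign gloss
}

def to_gloss(sentence):
    """Map each English word to a sign gloss; fingerspell unknown words."""
    glosses = []
    for word in sentence.lower().split():
        if word in SIGN_DICTIONARY:
            gloss = SIGN_DICTIONARY[word]
            if gloss is not None:
                glosses.append(gloss)
        else:
            glosses.append("FS:" + word.upper())  # fingerspelled fallback
    return glosses

print(to_gloss("Hello my name is Tonya"))
# ['HELLO', 'MY', 'NAME', 'FS:TONYA']
```

Word-for-word lookup like this is far cruder than Conceptually Accurate Signed English, which restructures whole concepts rather than substituting individual words, but it shows why a large vocabulary and a fallback mechanism are both needed.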

‘Lots of educational software teaches through voice communication,’ said Sara Nerlove, NSF’s program manager. ‘This is one of the first compelling uses of computer animation technology to benefit an audience with hearing loss, which sometimes struggles with conventional education systems.’