US engineers develop language-learning robot

Engineers at Purdue University in the US are developing technology that could enable robots to learn human language.

Jeffrey Siskind, associate professor in Purdue’s School of Electrical and Computer Engineering, with his group’s language-learning robot

The group, led by associate professor Jeffrey Mark Siskind, has developed three algorithms that allow a wheeled robot to learn the meanings of words, use those words to generate sentences, and comprehend sentences.

The team took a small wheeled robot fitted with several cameras and ran numerous trials on an enclosed course containing objects such as a chair, a traffic cone and a table.

Sentences describing paths for the robot to take were provided by anonymous online contributors. An operator then steered the robot along the paths the sentences described.

Using the algorithms, the robot was able to recognise words associated with objects within the course and words associated with directions of travel based on its sensory data.
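The core idea of associating words with co-observed objects can be illustrated with a toy cross-situational learner. This is only a sketch under simplifying assumptions, not the Purdue group’s actual algorithms: it merely counts how often each word co-occurs with each detected object across episodes, and the function and data names are hypothetical.

```python
from collections import defaultdict

def learn_associations(episodes):
    """Toy cross-situational learner (illustrative only, not the Purdue system).

    episodes: list of (sentence_words, observed_objects) pairs, where the
    objects stand in for labels a robot might derive from its sensory data.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in episodes:
        # Count every word/object pair that occurs in the same episode.
        for w in words:
            for obj in objects:
                counts[w][obj] += 1
    # For each word, keep the object it co-occurred with most often.
    return {w: max(objs, key=objs.get) for w, objs in counts.items()}

# Hypothetical episodes on a course like the one described in the article.
episodes = [
    (["go", "to", "the", "chair"], ["chair", "table"]),
    (["turn", "at", "the", "cone"], ["cone"]),
    (["pass", "the", "chair"], ["chair"]),
]
print(learn_associations(episodes)["chair"])  # → chair
```

Over many episodes, content words like "chair" come to co-occur most strongly with the matching object, while function words like "the" remain ambiguous, which is why aggregating sensory data over numerous experiences matters.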

“It was able to generate its own sentences to describe the paths it had taken. It was also able to generate its own sentences to describe a separate path of travel on the same course,” said Siskind. “The robot aggregated its sensory data over numerous experiences.”
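Going the other way, from a driven path to a sentence, can be sketched as filling learned words into a route template. Again, this is a hypothetical illustration, not the group’s generation algorithm; the step representation and phrasing are assumptions.

```python
def describe_path(steps):
    """Illustrative sketch only: turn a sequence of (action, landmark)
    steps into a simple route-description sentence."""
    clauses = [f"{verb} the {obj}" for verb, obj in steps]
    # Join the clauses into one sentence and capitalise the first word.
    return ", then ".join(clauses).capitalize() + "."

# Hypothetical path on a course with a chair and a traffic cone.
print(describe_path([("go toward", "chair"), ("turn left at", "cone")]))
# → Go toward the chair, then turn left at the cone.
```

A real system would choose the verbs and landmark words from what it has learned rather than from a fixed template, but the mapping from path steps to clauses is the same in spirit.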

By learning the meanings of various words, the robot went a step beyond conventional autonomous vehicles, which control driving through the vehicle’s electrical system based on a computerised map of existing roads. Such vehicles also use cameras and various other sensors to detect potential hazards such as stoplights, pedestrians and the edges of the road.

But current autonomous vehicles can’t recognise everyday landmarks off the road from their sensory data, nor can they associate words with those objects.

Siskind compares the distinction to searching for a video on the internet.

“You can search for a video of something you want to see online. But the search engine is not actually searching for the video. It’s searching for the captions under it, the words used to describe it,” he said. “What we’re doing with our research is actually recognising what’s going on in the video.”

The team is now working on scaling up the robot and giving it the ability to handle a wider range of situations and recognise a greater variety of words and phrases.

“It’s our hope that this technology can be applied to a host of applications in the future, potentially including autonomous vehicles,” said Siskind.