3D-mapping technique could improve navigation for autonomous vehicles

In a potential advance for autonomous vehicles, researchers have developed a technique that allows AI programs to better map 3D spaces using 2D images captured by multiple cameras.

Because the technique works effectively with limited computational resources, it holds promise for improving the navigation of autonomous vehicles.

“Most autonomous vehicles use powerful AI programs called vision transformers to take 2D images from multiple cameras and create a representation of the 3D space around the vehicle,” said Tianfu Wu, corresponding author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. “However, while each of these AI programs takes a different approach, there is still substantial room for improvement.”

Wu continued: “Our technique, called Multi-View Attentive Contextualization [MvACon], is a plug-and-play supplement that can be used in conjunction with these existing vision transformer AIs to improve their ability to map 3D spaces. The vision transformers aren’t getting any additional data from their cameras, they’re just able to make better use of the data.”

MvACon modifies Patch-to-Cluster attention (PaCa), which Wu and his collaborators released last year. PaCa allows transformer AIs to more efficiently and effectively identify objects in an image.
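To illustrate the general idea behind patch-to-cluster attention, the sketch below shows a simplified attention layer in which image patch tokens attend to a small set of learned clusters rather than to every other patch, cutting the cost of attention. This is only a minimal illustration, not the authors' released PaCa or MvACon code; the class name, the soft-clustering head and all dimensions are assumptions made for clarity.

```python
# Illustrative sketch only: a simplified "patch-to-cluster" style attention
# layer. Not the authors' implementation; names and shapes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchToClusterAttention(nn.Module):
    def __init__(self, dim: int, num_clusters: int = 49, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        # Lightweight head that softly assigns each patch token to a cluster.
        self.cluster_assign = nn.Linear(dim, num_clusters)
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim) patch tokens from an image encoder.
        b, n, d = x.shape
        # Soft assignment of patches to a small number of clusters,
        # then each cluster becomes a weighted average of the patches.
        assign = self.cluster_assign(x).softmax(dim=1)            # (b, n, m)
        clusters = torch.einsum("bnm,bnd->bmd", assign, x)        # (b, m, d)
        # Queries come from patches; keys/values come from the clusters,
        # so attention cost scales with n*m instead of n*n.
        q = self.q(x).view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k, v = self.kv(clusters).chunk(2, dim=-1)
        k = k.view(b, -1, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, -1, self.num_heads, self.head_dim).transpose(1, 2)
        out = F.scaled_dot_product_attention(q, k, v)             # (b, h, n, hd)
        out = out.transpose(1, 2).reshape(b, n, d)
        return self.proj(out)
```

In this toy version, the layer could drop into a transformer block wherever standard self-attention over patch tokens would otherwise sit; the reported MvACon work builds on this kind of clustering to help multi-camera detectors contextualize features, but the details above are illustrative rather than a description of the published method.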
