MonoXiver helps AI extract 3D information from the 2D images captured by autonomous vehicle cameras

In an advance for autonomous vehicle cameras, researchers at NC State have developed MonoXiver, a new method to help AI extract 3D information from 2D images.


According to Tianfu Wu, co-author of a paper on the work, existing techniques for extracting 3D information from 2D images are good, but not good enough.

“Our new method…can be used in conjunction with existing techniques and makes them significantly more accurate,” said Wu, an associate professor of electrical and computer engineering at NC State.

Cameras are less expensive than other tools used to navigate 3D spaces, such as lidar, which relies on lasers to measure distance. Designers of autonomous vehicles can install multiple cameras to build redundancy into the system, but that is only useful if the AI in the vehicle can extract 3D navigational information from the 2D images those cameras capture.

Existing techniques that extract 3D data from 2D images – such as the MonoCon technique developed by Wu and his collaborators – make use of so-called ‘bounding boxes’. These techniques train AI to scan a 2D image and place 3D bounding boxes around the objects in it, such as each car on a street. These boxes are cuboids, each defined by eight corner points. The bounding boxes help the AI estimate the dimensions of the objects in an image, and where each object is in relation to other objects.
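
To make that representation concrete: a 3D bounding box is typically parameterised by its centre, its dimensions, and a heading (yaw) angle, from which the eight corners follow. The short sketch below illustrates that standard parameterisation in Python; the function name, the z-up coordinate convention, and the layout are illustrative assumptions rather than code from the MonoXiver implementation.

```python
# A minimal sketch of a standard 3D bounding-box parameterisation used in
# monocular detection: centre, dimensions, and yaw. Assumes a z-up world
# frame; names and conventions are illustrative, not MonoXiver's code.
import numpy as np

def box_corners(cx, cy, cz, length, width, height, yaw):
    """Return the 8 corner points (8x3 array) of a 3D bounding box."""
    # The eight sign combinations of the half-dimensions, around the origin.
    x = np.array([ 1,  1, -1, -1,  1,  1, -1, -1]) * (length / 2)
    y = np.array([ 1, -1, -1,  1,  1, -1, -1,  1]) * (width / 2)
    z = np.array([ 1,  1,  1,  1, -1, -1, -1, -1]) * (height / 2)
    corners = np.stack([x, y, z], axis=1)

    # Rotate about the vertical axis by the yaw angle, then translate
    # to the box centre.
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return corners @ rot.T + np.array([cx, cy, cz])
```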

However, the bounding boxes produced by existing techniques are imperfect, and often fail to capture parts of a vehicle or other object that appear in a 2D image.

According to NC State, the new MonoXiver method uses each bounding box as a starting point (anchor) and has the AI perform a second analysis of the area surrounding each bounding box. This second analysis results in the program producing many additional bounding boxes surrounding the anchor.
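
Conceptually, this top-down step amounts to sampling perturbed copies of the anchor box. The sketch below shows one simple way such secondary proposals could be generated; the offset grid and the dictionary-based box representation are assumptions made for illustration, not the sampling scheme from the paper.

```python
# Illustrative sketch of the "anchor plus perturbed proposals" idea:
# generate secondary boxes by shifting the anchor's centre. The exact
# sampling scheme in MonoXiver may differ; these offsets are assumptions.
import numpy as np

def sample_proposals(anchor, offsets):
    """anchor: dict with 'center' (3,), 'dims' (3,), 'yaw' (float).
    offsets: (N, 3) array of centre shifts; returns N secondary boxes."""
    proposals = []
    for dx, dy, dz in offsets:
        box = dict(anchor)  # shallow copy; only 'center' is replaced
        box["center"] = anchor["center"] + np.array([dx, dy, dz])
        proposals.append(box)
    return proposals

# e.g. a small horizontal grid of shifts (in metres) around the anchor
grid = np.array([[dx, dy, 0.0] for dx in (-0.5, 0.0, 0.5)
                               for dy in (-0.5, 0.0, 0.5)])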

To determine which of these secondary boxes has best captured any ‘missing’ parts of the object, the AI does two comparisons. One comparison looks at the ‘geometry’ of each secondary box to see if it contains shapes that are consistent with the shapes in the anchor box. The other comparison looks at the ‘appearance’ of each secondary box to see if it contains colours or other visual characteristics that are similar to the visual characteristics of what is within the anchor box.
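
In code, ranking the secondary boxes by those two checks could look like the hypothetical sketch below. The `geometry_score` and `appearance_score` functions are placeholders for the learned comparisons described above (which the paper computes with a Perceiver-style network), and the equal weighting is an assumption.

```python
# Hypothetical scoring loop for the two comparisons described above.
# geometry_score and appearance_score stand in for learned similarity
# functions; they and the weighting scheme are placeholders.
def pick_best(anchor, proposals, geometry_score, appearance_score, w=0.5):
    """Rank secondary boxes by a weighted blend of the two similarities
    and return the highest-scoring one."""
    def score(box):
        return (w * geometry_score(anchor, box)
                + (1 - w) * appearance_score(anchor, box))
    return max(proposals, key=score)
```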

“One significant advance here is that MonoXiver allows us to run this top-down sampling technique – creating and analysing the secondary bounding boxes – very efficiently,” Wu said in a statement.

To measure the accuracy of the MonoXiver method, the researchers tested it using two datasets of 2D images: the KITTI dataset and the Waymo dataset.

“We used the MonoXiver method in conjunction with MonoCon and two other existing programs that are designed to extract 3D data from 2D images, and MonoXiver significantly improved the performance of all three programs,” said Wu. “We got the best performance when using MonoXiver in conjunction with MonoCon.

“It’s also important to note that this improvement comes with relatively minor computational overhead,” said Wu. “For example, MonoCon, by itself, can run at 55 frames per second. That slows down to 40 frames per second when you incorporate the MonoXiver method – which is still fast enough for practical utility.

“We are excited about this work, and will continue to evaluate and fine-tune it for use in autonomous vehicles and other applications,” said Wu.

The team’s paper, “Monocular 3D Object Detection with Bounding Box Denoising in 3D by Perceiver”, will be presented on October 4, 2023, at the International Conference on Computer Vision (ICCV) in Paris, France.