Imaging systems such as LiDAR and cameras used in autonomous applications could be replaced by a new real-time, 3D motion tracking system developed at the University of Michigan.
The technology is said to combine transparent light detectors with an advanced neural network, and promises to find applications in automated manufacturing, biomedical imaging and autonomous driving. A paper on the system is published in Nature Communications.
The imaging system exploits the advantages of transparent, nanoscale, highly sensitive graphene photodetectors developed by Zhaohui Zhong, U-M associate professor of electrical and computer engineering, and his group.
“The in-depth combination of graphene nanodevices and machine learning algorithms can lead to fascinating opportunities in both science and technology,” said Dehui Zhang, a doctoral student in electrical and computer engineering. “Our system combines computational power efficiency, fast tracking speed, compact hardware and a lower cost compared with several other solutions.”
The graphene photodetectors in this work have been tuned to absorb only about 10 per cent of the light they are exposed to, leaving them nearly transparent so that the remaining light passes through to the layers behind. Because graphene is so sensitive to light, this is sufficient to generate images that can be reconstructed through computational imaging. The photodetectors are stacked one behind another, resulting in a compact system in which each layer captures a different focal plane, enabling 3D imaging.
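The paper does not spell out the reconstruction algorithm, but the basic depth cue such a stack provides can be illustrated with a simple depth-from-focus sketch: the layer that records an object most sharply indicates its distance. The focal depths, the sharpness metric and the simulated data below are all invented for illustration and are not the team's method.

```python
import numpy as np

# Hypothetical focal distance (in metres) assigned to each detector layer.
focal_depths = np.array([0.5, 1.0, 1.5, 2.0])

def sharpness(image):
    """Variance of a discrete Laplacian: a crude focus measure."""
    lap = (-4 * image
           + np.roll(image, 1, 0) + np.roll(image, -1, 0)
           + np.roll(image, 1, 1) + np.roll(image, -1, 1))
    return lap.var()

def estimate_depth(layer_images):
    """Pick the focal plane whose layer sees the object most sharply."""
    scores = [sharpness(img) for img in layer_images]
    return focal_depths[int(np.argmax(scores))]

# Simulate a point-like object that is in focus only on the third layer;
# on the other layers its energy is spread out (defocused).
sharp = np.zeros((16, 16)); sharp[8, 8] = 1.0
blurred = np.ones((16, 16)) / 256
stack = [blurred, blurred, sharp, blurred]
print(estimate_depth(stack))  # → 1.5
```

A uniform (defocused) layer has zero Laplacian variance, so the in-focus layer wins and its focal depth is reported.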
Beyond 3D imaging, the team also tackled real-time motion tracking. To do this, they needed a way to determine the position and orientation of an object being tracked. Typical approaches involve LiDAR systems and light-field cameras, both of which suffer from significant limitations, the researchers said.
According to U-M, others use metamaterials or multiple cameras, but hardware alone does not produce the desired results without the introduction of deep learning algorithms. Zhen Xu, a doctoral student in electrical and computer engineering, built the optical setup and worked with the team to enable a neural network to decipher the positional information.
The neural network is trained to search for specific objects in the entire scene, and then focus only on the object of interest. The technology is said to work particularly well for stable systems, such as automated manufacturing, or projecting human body structures in 3D for the medical community.
“It takes time to train your neural network,” said project leader Ted Norris, professor of electrical and computer engineering. “But once it’s done, it’s done. So when a camera sees a certain scene, it can give an answer in milliseconds.”
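The train-once, infer-fast pattern Norris describes can be sketched with a toy network: training by gradient descent is slow and offline, but inference afterwards is only a couple of matrix products. Everything here, including the data, layer sizes and the assumption of 32 detector readings (two stacked 4×4 arrays) mapping to a 3D position, is invented for illustration, not the team's architecture.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 32))        # simulated detector readings
W_true = rng.normal(size=(32, 3))
Y = X @ W_true                        # synthetic 3D target positions

# One hidden layer with ReLU, trained by full-batch gradient descent.
W1 = rng.normal(size=(32, 64)) * 0.1
W2 = rng.normal(size=(64, 3)) * 0.1
lr = 1e-2

def forward(x):
    h = np.maximum(x @ W1, 0.0)       # ReLU hidden activations
    return h, h @ W2                  # predicted 3D position

losses = []
for _ in range(300):                  # slow, offline training phase
    h, pred = forward(X)
    err = pred - Y
    losses.append((err ** 2).mean())
    gW2 = h.T @ err / len(X)          # backprop through output layer
    gh = (err @ W2.T) * (h > 0)       # backprop through ReLU
    gW1 = X.T @ gh / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

# Inference on a new reading is just two matrix products: milliseconds.
_, pos = forward(X[:1])
print(losses[0] > losses[-1])         # loss fell during training → True
```

The design point is the asymmetry: all the expensive work happens once, up front, which is why the trained camera can answer in milliseconds per scene.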
Doctoral student Zhengyu Huang led the algorithm design for the neural network. The algorithms the team developed are unlike the traditional signal-processing algorithms used for long-standing imaging technologies such as X-ray imaging and MRI.
“In my 30 years at Michigan, this is the first project I’ve been involved in where the technology is in its infancy,” said Jeffrey Fessler, professor of electrical and computer engineering, who specialises in medical imaging. “We’re a long way from something you’re going to buy at Best Buy, but that’s OK. That’s part of what makes this exciting.”
The team demonstrated success tracking a beam of light, as well as a ladybird with a stack of two 4×4 (16 pixel) graphene photodetector arrays. They also proved that their technique is scalable. They believe it would take as few as 4,000 pixels for some practical applications, and 400×600 pixel arrays for many more.
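For scale, the numbers above span four orders of magnitude, which is the scalability claim in a nutshell (illustrative arithmetic only):

```python
# Pixel counts quoted in the article.
demo = 2 * 4 * 4        # two stacked 4x4 arrays used in the demonstration
practical = 4_000       # estimated minimum for some practical applications
full = 400 * 600        # array size suggested for many more applications
print(demo, practical, full)  # → 32 4000 240000
```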
While the imaging system technology could be used with other materials, graphene offers additional advantages: it does not require artificial illumination and it is environmentally friendly.
“Graphene is now what silicon was in 1960,” Norris said in a statement. “As we continue to develop this technology, it could motivate the kind of investment that would be needed for commercialisation.”