Metasurface and algorithms focus images for micro-camera


A novel optical surface has been combined with signal-processing algorithms to develop a micro-camera that could one day be employed by medical robots to conduct minimally invasive endoscopy procedures.

(Image: Princeton University)

Researchers at Princeton University and the University of Washington have overcome fuzziness, distortion and limited fields of view associated with previous micro-cameras to produce an ultra-compact camera the size of a grain of salt. The new system is claimed to produce full-colour images on par with a conventional compound camera lens. The researchers have reported their findings in Nature Communications.

Enabled by a joint design of the camera’s hardware and computational processing, the system could enable minimally invasive endoscopy with medical robots to diagnose and treat diseases, and improve imaging for other robots with size and weight constraints. Arrays of thousands of such cameras could be used for full-scene sensing, turning surfaces into cameras.

The new optical system uses a metasurface half a millimetre wide that is covered with 1.6 million cylindrical posts. Each post has a unique geometry and functions like an optical antenna. Varying the design of each post is necessary to correctly shape the entire optical wavefront. Using machine learning-based algorithms, the posts’ interactions with light combine to produce the highest-quality images and widest field of view for a full-colour metasurface camera developed to date.
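The article does not give the authors' actual design procedure, but the basic idea of assigning each post a phase delay can be illustrated with the standard metalens focusing profile, in which every point on the surface delays light just enough that all rays arrive at the focus in phase. A minimal sketch, where the wavelength, focal length and grid size are illustrative assumptions rather than figures from the study:

```python
import numpy as np

# Illustrative assumptions -- not parameters from the paper
wavelength = 550e-9          # green light, metres
focal_length = 1e-3          # 1 mm focal length
aperture = 0.5e-3            # half-millimetre-wide metasurface

n = 501                      # sample grid (far coarser than 1.6 million posts)
coords = np.linspace(-aperture / 2, aperture / 2, n)
x, y = np.meshgrid(coords, coords)

# Phase each post must impart so rays from every point reach the focus in phase
r2 = x**2 + y**2
phase = -2 * np.pi / wavelength * (np.sqrt(r2 + focal_length**2) - focal_length)
phase = np.mod(phase, 2 * np.pi)   # wrap to one period; a library of post
                                   # geometries covers the 0..2*pi range
```

In a real design, each wrapped phase value would be mapped to a post geometry from a fabrication library; the study's learned designs go well beyond this simple focusing profile to widen the field of view and handle full-colour light.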


A key innovation in the camera’s creation was the integrated design of the optical surface and the signal processing algorithms that produce the image. This boosted the camera’s performance in natural light conditions, in contrast to previous metasurface cameras that required the pure laser light of a laboratory or other ideal conditions to produce high-quality images, said Felix Heide, the study’s senior author and an assistant professor of computer science at Princeton.
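The article does not detail how this joint design works, but the general pattern of end-to-end optimisation can be sketched with a toy 1-D stand-in: a parameter describing the optics and a parameter describing the reconstruction algorithm are descended together on a single reconstruction loss. Everything below (the Gaussian-blur "optics", the unsharp-mask "algorithm", the finite-difference gradients) is a hypothetical illustration, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random(64)                      # toy 1-D "scene"

def capture(scene, blur_width):
    """Toy optical forward model: Gaussian blur parameterised by blur_width."""
    t = np.arange(-8, 9)
    kernel = np.exp(-t**2 / (2 * blur_width**2))
    kernel /= kernel.sum()
    return np.convolve(scene, kernel, mode="same")

def reconstruct(measured, gain):
    """Toy computational stage: unsharp-mask style sharpening."""
    smooth = np.convolve(measured, np.ones(5) / 5, mode="same")
    return measured + gain * (measured - smooth)

def loss(params):
    blur_width, gain = params
    return np.mean((reconstruct(capture(scene, blur_width), gain) - scene) ** 2)

# Jointly descend on both the "optics" and the "algorithm" parameters,
# using finite-difference gradients for simplicity
params = np.array([3.0, 0.0])
h, lr = 1e-4, 0.2
start = loss(params)
for _ in range(200):
    grad = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        grad[i] = (loss(params + e) - loss(params - e)) / (2 * h)
    params -= lr * grad
    params[0] = max(params[0], 0.5)   # keep the blur width physical
```

The point of the pattern is that the optics are no longer designed in isolation: the optimiser is free to leave some blur in the hardware wherever the software stage can cheaply undo it, which is what lets co-designed systems work in natural light rather than idealised laboratory conditions.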

The researchers compared images produced with their system to the results of previous metasurface cameras, as well as images captured by a conventional compound optic that uses a series of six refractive lenses. Aside from a bit of blurring at the edges of the frame, the nano-sized camera’s images were comparable to those of the traditional lens setup.

“It’s been a challenge to design and configure these little microstructures to do what you want,” said Ethan Tseng, a computer science PhD student at Princeton who co-led the study. “For this specific task of capturing large field of view RGB images, it’s challenging because there are millions of these little microstructures, and it’s not clear how to design them in an optimal way.”

Co-lead author Shane Colburn tackled this challenge by creating a computational simulator to automate testing of different nano-antenna configurations. Because of the number of antennas and the complexity of their interactions with light, this type of simulation can use “massive amounts of memory and time,” said Colburn. He developed a model to efficiently approximate the metasurfaces’ image production capabilities with sufficient accuracy.
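The article does not describe Colburn's approximation, but a common way to make such simulation tractable is to summarise the optic by its point spread function (PSF) and form the sensor image as an FFT-based convolution, which costs O(N log N) per frame instead of tracking millions of antennas individually. A generic sketch of that idea, with an assumed Gaussian PSF standing in for a simulated one:

```python
import numpy as np

def render(scene, psf):
    """Approximate image formation as circular convolution with a PSF.

    Assumes psf is centred in its array and the same shape as scene.
    """
    return np.real(np.fft.ifft2(np.fft.fft2(scene)
                                * np.fft.fft2(np.fft.ifftshift(psf))))

n = 128
scene = np.zeros((n, n))
scene[n // 2, n // 2] = 1.0                  # a single point source

yy, xx = np.mgrid[:n, :n]                    # assumed Gaussian PSF (stand-in)
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()                             # normalise: conserve total energy

image = render(scene, psf)                   # point source maps onto the PSF
```

A differentiable forward model of this kind is also what allows the optics and reconstruction to be optimised together, since gradients can flow through the simulated image back to the surface's design parameters.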

Co-author James Whitehead, a PhD student at UW ECE, fabricated the metasurfaces, which are based on silicon nitride. This material is compatible with standard semiconductor manufacturing methods, enabling mass-production at a lower cost than lenses in conventional cameras.

Heide and colleagues are now working to add more computational abilities to the camera. Beyond optimising image quality, they would like to add capabilities for object detection and other sensing modalities relevant for medicine and robotics.