Algorithm reconstructs hyperspectral images with less data

Researchers from North Carolina State University and the University of Delaware have developed an algorithm that reconstructs hyperspectral images using less data.

Images at wavelengths from 470nm to 632nm within image cubes reconstructed by the new algorithm and another state-of-the-art algorithm for the LEGO image cube. The top row represents the ground truth; the middle row shows the output of the new algorithm; and the bottom row shows the output of the other algorithm

According to NC State, the images are created using instruments that capture hyperspectral information in a compressed form, and the combination of algorithm and hardware makes it possible to acquire hyperspectral images in less time and to store those images using less memory.

Hyperspectral imaging is said to hold promise for use in fields ranging from security and defence to environmental monitoring and agriculture.

Conventional imaging techniques, such as digital photography, capture images across three wavelength bands of light (blue, green and red), whereas hyperspectral imaging creates images across dozens or hundreds of wavelengths. These images can be used to determine which materials are present in the imaged scene.

In a conventional imaging system, an image with millions of pixels across three wavelength bands might occupy around one megabyte. A hyperspectral image of the same scene could be at least an order of magnitude larger, creating problems for storing and transmitting the data.
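To make the size gap concrete, here is a minimal back-of-the-envelope sketch in Python. The pixel count, band count and bit depths are assumptions chosen for illustration, not figures from the researchers, and the sizes shown are for raw, uncompressed data.

```python
# Rough, illustrative storage comparison; all dimensions and dtypes are assumed.
import numpy as np

height, width = 1000, 1000           # roughly one million pixels (assumed)
rgb_bands, hyper_bands = 3, 100      # RGB vs. a hypothetical 100-band cube

rgb_image = np.zeros((height, width, rgb_bands), dtype=np.uint8)      # 8-bit colour
hyper_cube = np.zeros((height, width, hyper_bands), dtype=np.uint16)  # 16-bit samples

print(f"RGB image:          {rgb_image.nbytes / 1e6:.0f} MB")   # ~3 MB raw
print(f"Hyperspectral cube: {hyper_cube.nbytes / 1e6:.0f} MB")  # ~200 MB raw
```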

Additionally, capturing hyperspectral images across dozens of wavelengths can be time-consuming: conventional hyperspectral imaging technology takes a series of images, each capturing a different suite of wavelengths, and then combines them.
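The following sketch illustrates that band-sequential workflow. The capture function and the per-band exposure time are hypothetical stand-ins, used only to show why scanning dozens of bands one at a time adds up to minutes.

```python
# Hypothetical band-sequential capture: one exposure per wavelength band,
# stacked into a cube. capture_band() and the timing are placeholders.
import numpy as np

def capture_band(band_index, height=1000, width=1000):
    """Stand-in for a single filtered exposure at one wavelength band."""
    return np.random.rand(height, width)

num_bands = 100
seconds_per_band = 1.5   # assumed exposure plus filter-switching time

cube = np.stack([capture_band(b) for b in range(num_bands)], axis=-1)
print(cube.shape)                                    # (1000, 1000, 100)
print(f"~{num_bands * seconds_per_band / 60:.1f} minutes of capture time")
```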

“It can take minutes,” said Dror Baron, an assistant professor of electrical and computer engineering at NC State and one of the senior authors of a paper describing the new algorithm.

In recent years, researchers have developed new hyperspectral imaging hardware that can acquire the necessary images more quickly and store the images using less memory. The hardware takes advantage of compressive measurements, which mix spatial and wavelength data in a format that can be used later to reconstruct the complete hyperspectral image.
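In generic compressed-sensing terms, the idea is that a known sensing operator mixes the vectorised spatial-spectral cube into far fewer numbers, which are what the camera actually records. The sketch below uses a random matrix as a stand-in for the real coded optics; it is not the authors' hardware model, and the cube dimensions are assumed.

```python
# Toy compressive measurement: record m << n mixed values instead of the full cube.
import numpy as np

rng = np.random.default_rng(0)

n = 16 * 16 * 24              # small cube: 16x16 pixels, 24 bands (assumed sizes)
m = n // 4                    # keep only a quarter as many measurements

x = rng.random(n)                              # vectorised hyperspectral cube (toy data)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing operator (stand-in for the optics)

y = A @ x                                      # compressive measurements to store and transmit
print(x.size, "->", y.size)                    # 6144 -> 1536
```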

For the new hardware to work effectively, it needs an algorithm that can reconstruct the image accurately and quickly, which is what the researchers at NC State and Delaware have developed.
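The paper's title indicates the reconstruction is based on approximate message passing (AMP). As a hedged illustration only, a generic AMP loop with a simple soft-thresholding denoiser looks like the sketch below; the authors' algorithm uses its own denoiser tailored to hyperspectral image cubes, which is not reproduced here.

```python
# Generic AMP with soft thresholding: estimate x from y = A @ x (+ noise).
# This is a textbook-style sketch, not the paper's implementation.
import numpy as np

def soft_threshold(v, tau):
    """Element-wise soft thresholding, a simple sparsity-promoting denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp_reconstruct(y, A, iterations=30):
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iterations):
        tau = np.sqrt(np.mean(z ** 2))                 # rough noise-level estimate from the residual
        x_new = soft_threshold(x + A.T @ z, tau)       # denoise the "pseudo-data"
        onsager = (z / m) * np.count_nonzero(x_new)    # Onsager correction term
        z = y - A @ x_new + onsager                    # updated residual
        x = x_new
    return x
```

Applied to the toy measurements above, x_hat = amp_reconstruct(y, A) returns an estimate of the vectorised cube, which would then be reshaped back into its spatial and wavelength dimensions. Faithful recovery also requires the cube to have a sparse or otherwise compressible representation, which real scenes typically do in an appropriate transform domain.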

In tests on model data, the new algorithm is said to have significantly outperformed existing algorithms at every wavelength.

“We were able to reconstruct image quality in 100 seconds of computation that other algorithms couldn’t match in 450 seconds,” said Baron. “And we’re confident that we can bring that computational time down even further.”

According to NC State, the higher quality of the image reconstruction means that fewer measurements need to be acquired and processed by the hardware, speeding up the imaging time. And fewer measurements mean less data that needs to be stored and transmitted.

“Our next step is to run the algorithm in a real-world system to gain insights into how the algorithm functions and identify potential room for improvement,” said Baron. “We’re also considering how we could modify both the algorithm and the hardware to better complement each other.”

The paper, “Compressive Hyperspectral Imaging via Approximate Message Passing,” is published online in the IEEE Journal of Selected Topics in Signal Processing.