Transforming 3D imaging

A start-up company has emerged from Johns Hopkins University with a product that processes millions of data points in real time and turns them into 3D graphics.

A US start-up has developed a new software product that processes millions of data points recorded from airborne and ground-based devices in real time and turns them into lifelike 3D graphics. The graphics can be processed using standard personal computers, saving hours of time and millions in budgets for civilian and government users.

Researchers at the Applied Physics Laboratory (APL) at Johns Hopkins University originally developed the QT Viewer software for a defence project that involved collecting data through airborne lidar surveys – a process that uses laser light pulses – and then producing 3D images of the ground. They have now formed the start-up Applied Imagery to commercialise the product.

The challenge in developing the software was to find a way to process huge amounts of data without crashing the user’s computer. APL physicist Michael Roth says, “They really needed a multimillion-dollar supercomputer to process the data from many millions of light pulses, and that wasn’t an option.”

Roth and APL software engineer Kevin Murphy brainstormed ideas, starting with the 3D video cards used by the video-game industry to produce lifelike animation. “We took what video cards were good at and then built what we needed around that,” Murphy says. They created a series of algorithms to manage huge amounts of information based on the “quad trees” data-storage and retrieval method (hence the software’s name: QT Viewer), which proved effective for processing digital topographic and feature data.
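The quadtree idea can be illustrated with a minimal sketch (this is an assumption about the general technique, not QT Viewer’s actual algorithms, which are not public): each node covers a square region and splits into four children once it holds too many points, so a spatial query only visits the branches that overlap the query box.

```python
# Minimal point-quadtree sketch. Each node covers a square region; once it
# holds more than CAPACITY points it splits into four quadrants, so queries
# skip whole branches that lie outside the search box.

CAPACITY = 4

class QuadTree:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # lower-left corner, side length
        self.points = []
        self.children = None                     # four sub-quadrants after split

    def insert(self, px, py):
        if not (self.x <= px < self.x + self.size and
                self.y <= py < self.y + self.size):
            return False                          # point lies outside this node
        if self.children is None:
            if len(self.points) < CAPACITY:
                self.points.append((px, py))
                return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        h = self.size / 2
        self.children = [QuadTree(self.x + dx, self.y + dy, h)
                         for dx in (0, h) for dy in (0, h)]
        for px, py in self.points:               # push stored points down a level
            any(c.insert(px, py) for c in self.children)
        self.points = []

    def query(self, x0, y0, x1, y1, found=None):
        """Collect points inside the axis-aligned box [x0, x1) x [y0, y1)."""
        if found is None:
            found = []
        if (x1 <= self.x or x0 >= self.x + self.size or
                y1 <= self.y or y0 >= self.y + self.size):
            return found                          # box misses this quadrant
        for px, py in self.points:
            if x0 <= px < x1 and y0 <= py < y1:
                found.append((px, py))
        if self.children:
            for c in self.children:
                c.query(x0, y0, x1, y1, found)
        return found
```

Because each split quarters the covered area, a box query descends only into quadrants it actually overlaps, which is what makes very large point sets searchable in real time.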

They discovered that QT Viewer could process lidar data in real time, giving them the pictures they needed immediately and in an interactive virtual-reality format. It offers a panoramic view, the ability to zoom in on and around natural landforms and structures, and the option of studying the terrain from a line-of-sight vantage point.

Depending on what a user needs, the software can provide a high-resolution, real-time display of an area, such as the entire city of Washington, DC, with data samples every 18 inches, or concentrate on a smaller area using a laser snapshot with samples every four inches. The user can even drop a “person” into the middle of a scene and ask the system to reveal what the person can see from any vantage point.
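A line-of-sight query of this kind can be sketched over a height grid (an assumed, simplified approach, not QT Viewer’s actual code): sample points along the ray from observer to target, and the target is visible only if no intervening terrain sample rises above the straight sight line.

```python
import numpy as np

def line_of_sight(height, r0, c0, r1, c1, eye=1.8, steps=200):
    """True if grid cell (r1, c1) is visible from (r0, c0).

    height : 2-D array of terrain altitudes (one sample per cell)
    eye    : assumed observer/target eye height above the terrain (metres)
    """
    h0 = height[r0, c0] + eye                    # observer eye altitude
    h1 = height[r1, c1] + eye                    # target eye altitude
    for t in np.linspace(0.0, 1.0, steps)[1:-1]:
        r = r0 + t * (r1 - r0)                   # interpolate along the ray
        c = c0 + t * (c1 - c0)
        terrain = height[int(round(r)), int(round(c))]   # nearest terrain sample
        sightline = h0 + t * (h1 - h0)           # ray altitude at this point
        if terrain > sightline:
            return False                         # terrain blocks the view
    return True
```

The dropped-in “person” in the article corresponds to the observer cell here; running this test against every other cell of the grid would yield the full viewshed from that vantage point.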

The QT Viewer itself can manipulate models of up to two gigabytes (depending on the available hardware) in real time, once they have been converted to one of its native formats. It can also import raw height-field data (ordered lists of floating-point altitudes), GeoTIFF DEMs and NIMA DTEDs, generate models from raw ASCII XYZ data (with or without intensity), and create and display gridded and triangulated surface models as well as ungridded point-cloud models.
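Raw ASCII XYZ data of the kind described is straightforward to read; the sketch below assumes a whitespace-separated “x y z [intensity]” record layout, which is a common convention but not a documented QT Viewer specification.

```python
def parse_xyz(lines):
    """Yield (x, y, z, intensity) tuples from ASCII XYZ records.

    Assumes whitespace-separated "x y z" fields with an optional fourth
    intensity column; intensity defaults to None when absent.
    """
    for line in lines:
        fields = line.split()
        if len(fields) < 3:
            continue                              # skip blank/short records
        x, y, z = (float(v) for v in fields[:3])
        intensity = float(fields[3]) if len(fields) > 3 else None
        yield (x, y, z, intensity)
```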

The QT Viewer also includes other data-manipulation features, such as visually cutting and cropping data, overlaying photo imagery, performing mensuration, and generating line-of-sight or shadow maps. It can generate gridded surface models from raw XYZ data very quickly: approximately 60 million points in 10 minutes on a 2.4 GHz Pentium 4 with 1 GB of memory.
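One simple way to grid an unordered point cloud into a surface model (an assumed approach for illustration, not QT Viewer’s published method) is to bin points into regular cells and keep the highest return per cell:

```python
import numpy as np

def grid_points(xyz, cell=1.0):
    """Bin (N, 3) x/y/z samples into a 2-D grid of maximum heights.

    cell : grid spacing in the same units as x and y; empty cells are NaN.
    """
    xyz = np.asarray(xyz, dtype=float)
    xmin, ymin = xyz[:, 0].min(), xyz[:, 1].min()
    cols = ((xyz[:, 0] - xmin) / cell).astype(int)   # cell index per point
    rows = ((xyz[:, 1] - ymin) / cell).astype(int)
    grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, xyz[:, 2]):
        if np.isnan(grid[r, c]) or z > grid[r, c]:
            grid[r, c] = z                           # keep highest return per cell
    return grid
```

Keeping the maximum per cell preserves building roofs and canopy tops; a production tool would also offer minimum or mean surfaces and interpolate the empty cells.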

The target platform is a mid- to high-range Windows desktop or laptop with a mid- to high-end consumer-level 3D video card.

The QT Viewer is currently the primary XYZ data processing tool of the US Army Rapid Terrain Visualisation program, and is available for licence from the APL Office of Technology Transfer.

Applied Imagery has also created two new software packages: QT Reader, for the “casual” user; and QT Modeler, designed for those who want to create new data file models.
