Method could lead to cheap 3D cameras for mobile phones

A new laser-scanning method could lead to 3D cameras that are small and cheap enough to be fitted into mobile phones.

Researchers at Massachusetts Institute of Technology (MIT) have developed a ‘time-of-flight’ system that uses specially designed algorithms to produce a detailed 3D image with just a cheap photodetector and the processor power found in a smartphone.

Typical time-of-flight cameras, which can cost thousands of pounds, measure depth by illuminating a scene with laser pulses and using a bank of expensive sensors to collect the reflected light and measure how long it takes to return.

But the MIT system uses a single light detector — a one-pixel camera. This means it could become a much more portable alternative to the market leader in consumer 3D cameras — the Microsoft Kinect for the Xbox games console.

‘In consumer electronics, people are very interested in 3D for immersive communication, but then they’re also interested in 3D for human-computer interaction,’ said research leader Vivek Goyal from MIT’s Research Lab of Electronics.

The system uses a common method in the field of compressed sensing: the laser light passes through a series of randomly generated patterns of light and dark squares, which provides enough information for the algorithms to reconstruct a two-dimensional visual image from the light intensities measured by a single pixel.
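As a rough illustration of the single-pixel idea (a toy numerical sketch, not the MIT team's actual code; the image size, number of patterns, and solver are all assumptions), the scene can be masked by random binary patterns, each producing one total-intensity reading, and a sparsity-promoting solver such as ISTA can then recover the image from fewer measurements than pixels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: an 8x8 image, flattened to 64 values, with only a few bright pixels.
n = 64
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = rng.uniform(0.5, 1.0, 5)

# Each measurement is the total intensity reaching the single detector after
# the scene is masked by one random pattern of light and dark squares.
m = 32                                   # fewer measurements than pixels
A = rng.integers(0, 2, size=(m, n)).astype(float)
y = A @ x_true

# Reconstruct with ISTA (iterative soft-thresholding), a standard
# compressed-sensing solver for the sparsity-regularised least-squares problem.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2            # step-size bound from the spectral norm
x = np.zeros(n)
for _ in range(2000):
    z = x - (A.T @ (A @ x - y)) / L      # gradient step on the data-fit term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrink toward sparsity

print("reconstruction error:", np.linalg.norm(x - x_true))
```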

To recover the third dimension and build a depth map, the researchers used a technique called parametric signal processing, in which the system assumes that all the surfaces the light hits are flat planes.

This simplifies the maths that would be needed to measure light from curved surfaces and allows the algorithm to create a very accurate depth map from a minimum of visual information.
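The economy of the planar assumption can be seen in a small sketch (illustrative only; the grid size, noise level, and plane coefficients below are invented): a whole patch of noisy depth samples collapses to just three fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: 64 noisy depth samples from a patch assumed to be planar.
xs, ys = np.meshgrid(np.arange(8), np.arange(8))
xs, ys = xs.ravel(), ys.ravel()
true_depth = 0.03 * xs - 0.01 * ys + 2.0          # depth in metres
z = true_depth + rng.normal(0, 0.005, xs.size)    # add measurement noise

# Fit z = a*x + b*y + c by least squares: three parameters summarise
# all 64 samples, which is the saving the flat-plane assumption buys.
A = np.column_stack([xs, ys, np.ones_like(xs, dtype=float)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
print(f"fitted plane: z = {a:.3f}x + {b:.3f}y + {c:.3f}")
```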

This means that an ordinary analogue-to-digital converter, an off-the-shelf component already found in all mobile phones, and a cheap photodetector can be used to calculate the light’s time of flight.

The sensor takes about 0.7 nanoseconds to register a change to its input; in that time light travels approximately 21cm, so all the returns arriving within that window are blurred together.
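A back-of-the-envelope check of those figures (the halving for the round trip is our own gloss, not stated in the article):

```python
C = 299_792_458.0                  # speed of light, m/s

detector_rise = 0.7e-9             # sensor response time quoted above, s
path_blur = C * detector_rise      # distance light covers in that window: ~0.21 m
depth_blur = path_blur / 2         # the pulse travels out and back, halving the ambiguity

print(f"light travels {path_blur * 100:.0f} cm in {detector_rise * 1e9:.1f} ns")
print(f"naive depth ambiguity: ~{depth_blur * 100:.1f} cm, versus 2 mm achieved")
```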

But because of the parametric algorithm, the system can distinguish objects that are only 2mm apart in depth. ‘It doesn’t look like you could possibly get so much information out of this signal when it’s blurred together,’ said Goyal.

Telecoms company Qualcomm has awarded the research team one of its $100,000 (£64,000) Innovation Fellowship grants to continue the research.