Holographic television is one of those staples of science fiction that has proved far more difficult in practice than in theory. But a team in Germany is now showing that a combination of clever image processing and a technology most familiar from advanced jet fighters could make 3D TV possible with off-the-shelf components.
The principles behind still holograms are well known, said Hagen Stolle, chief executive of SeeReal. But transferring them to TV means producing a flat image that the eyes interpret as 3D, and which moves.
There are two problems with that, said Stolle. ‘You need very high resolution. In classic holography, you can have 5,000-10,000 lines/mm on your film. But for an HD 3D TV picture, you would need to calculate billions of pixels, and that’s impossible. They would need to be smaller than a micron, and you would never be able to calculate all of them.’
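Stolle’s ‘billions of pixels’ figure is easy to sanity-check. The back-of-envelope sketch below uses illustrative assumptions (a roughly 40in screen and a 1-micron pixel pitch), not SeeReal’s own numbers:

```python
# Rough pixel count for a full-screen hologram.
# Assumed figures for illustration only: a ~40in (0.88m x 0.50m)
# panel and the sub-micron pixel pitch classic holography demands.
screen_w_m, screen_h_m = 0.88, 0.50
pitch_m = 1e-6  # 1 micron pixel pitch

pixels = (screen_w_m / pitch_m) * (screen_h_m / pitch_m)
print(f"{pixels:.2e} pixels")  # ~4.4e11: hundreds of billions
```

Even before any hologram maths, simply addressing that many pixels at video rates is far beyond current hardware, which is what motivates the viewing-window approach described next.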
So the SeeReal team decided to take another tack. Although when you look at a TV screen you think you see all of it, you cannot. The eye takes in only a small portion of the picture in detail at any instant, but it scans the screen rapidly and the brain assembles the information into the impression of a whole picture.
‘We thought, why waste all that information that nobody sees? Why not limit our viewing range to a very small viewing window and then track it in space?’ said Stolle.
The SeeReal system, now at the prototype stage, uses a vision tracking system to watch the viewers, homing in on the pupils of their eyes and working out which part of the screen each viewer is looking at from moment to moment.
Vision tracking is already used in military targeting systems and in devices that allow severely disabled people to control computer equipment, but it has so far seen little use in mainstream applications.
SeeReal’s system uses two VGA cameras to track and triangulate the position of up to four viewers within range of the display, calculating the X, Y and Z co-ordinates of their gaze on the screen.
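Recovering a 3D position from two cameras is standard stereo triangulation. A minimal sketch of the depth part of that calculation, assuming an idealised pinhole model with rectified images (the focal length, baseline and pixel positions here are made-up illustrations, not SeeReal parameters):

```python
def triangulate_depth(x_left_px, x_right_px, focal_px, baseline_m):
    """Depth (Z) of a tracked pupil from its horizontal pixel
    positions in two rectified camera images.
    Idealised pinhole stereo model: Z = f * B / disparity."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must lie in front of both cameras")
    return focal_px * baseline_m / disparity

# Illustrative numbers: 800px focal length, 10cm camera baseline,
# 20px disparity puts the viewer about 4m from the display.
z = triangulate_depth(420, 400, focal_px=800, baseline_m=0.10)
print(f"{z:.2f} m")  # 4.00 m
```

The X and Y co-ordinates follow from the same model once Z is known, giving the full gaze position the renderer needs.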
With this information, the system then produces the holographic image to fill only the ‘viewing window’ of each eye — an imaginary box, 10-20mm square, in front of the eye, containing all the image that it can see at any one time.
The system knows the exact location of the 3D image on which the eye thinks it is focusing, and ‘ray-traces’ from the viewing window, through that point and on to the screen, rendering only that area as a perfect holographic image.
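That ‘ray-tracing’ step reduces to intersecting a straight line with the screen plane. A simplified sketch, assuming the screen lies in the plane z = 0 and using invented eye and focus co-ordinates purely for illustration:

```python
def ray_to_screen(eye, focus):
    """Project the ray from the eye through the 3D focus point
    onto the screen plane z = 0; returns the (x, y) screen point
    around which the sub-hologram must be rendered."""
    ex, ey, ez = eye
    fx, fy, fz = focus
    # Parametric point eye + t * (focus - eye); solve for z = 0.
    t = ez / (ez - fz)
    return (ex + t * (fx - ex), ey + t * (fy - ey))

# A viewer 2m from the screen, focusing on a point 0.5m in front of it.
x, y = ray_to_screen(eye=(0.1, 0.0, 2.0), focus=(0.0, 0.0, 0.5))
print(f"screen point: ({x:.3f}, {y:.3f}) m")
```

Only the screen region around that intersection point needs a full holographic rendering; everywhere else can be left blank.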
As the eyes move around the 3D scene, the cameras track their movement and the system continues to render only the part of the screen needed to produce the image the viewer is looking at. But because of the perceptual tricks hard-wired into the brain, viewers are not aware that only a tiny fraction of the image exists at any one time.
Because only a small part of the image needs to be produced in detail, the amount of computation needed is reduced dramatically. Stolle said a normal PC can handle the calculations.
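The scale of the saving can be sketched with the same rough numbers as before. Both figures here are illustrative assumptions: the full-hologram count from the earlier back-of-envelope estimate, and the 3-megapixel figure of the prototype panel described later in the article:

```python
# Illustrative comparison; both figures are assumptions, not SeeReal's:
full_hologram_px = 4.4e11    # sub-micron pitch across an entire screen
window_hologram_px = 3.0e6   # roughly a 3-megapixel prototype panel

reduction = full_hologram_px / window_hologram_px
print(f"~{reduction:.0f}x fewer pixels to compute per frame")
```

A reduction factor on the order of a hundred thousand is what turns an impossible workload into something Stolle says a normal PC can handle.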
The imaging part of the system consists of a normal flat-screen display, tweaked slightly with a different type of liquid crystal (LC) component and a polarising filter so that it modulates the phase of the light waves produced by the individual pixel LC units rather than their amplitude.
The phase modulation means that several overlapping sub-holograms can be made without interfering with each other, which allows several people to view the scene from different angles.
The SeeReal prototype uses a monochromatic display with three-megapixel resolution. The system produces the image seen by each eye in the appropriate location and illuminates it with the display’s LED backlight, which produces coherent light waves, that is, waves with a fixed phase relationship to one another. The LC units shift the phase of the light as it passes through, and the polarising filter then allows only a portion of the light to reach the viewer. This creates the light and shade of the image.
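The light and shade arise from interference: because the backlight is coherent, waves whose phase shifts leave them in step reinforce one another, while opposed phases cancel. A toy illustration of that principle (not SeeReal’s actual reconstruction maths):

```python
import cmath

def combined_intensity(phases):
    """Intensity of coherent unit-amplitude waves, found by summing
    their complex amplitudes e^{i*phase} and squaring the magnitude."""
    total = sum(cmath.exp(1j * p) for p in phases)
    return abs(total) ** 2

print(combined_intensity([0.0, 0.0]))       # in phase: 4.0 (bright)
print(combined_intensity([0.0, cmath.pi]))  # opposed:  ~0  (dark)
```

A phase-only panel therefore never dims a pixel directly; brightness at the viewer’s eye emerges from how the phase-shifted waves add up.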
‘That’s why our display today is only red and rather dark, but we’re only proving the point that you can do it,’ said Stolle.
The other limitation of the prototype is the viewing angle: the image can only be viewed from a few degrees left or right of perpendicular to the screen, and only a few centimetres of back-and-forth movement is possible. ‘But in principle you can track in all three directions, and viewing angles of around 60° should be possible,’ claimed Stolle.
Switching to full-colour scenes needs faster displays, said Stolle. ‘Our current display isn’t optimised for speed: it’s about a 25 millisecond response time. But we could use an RGB LED backlight with a 1-2 millisecond display. We’d modulate the scene with the panel, then multiplex the colours with the backlight, and you’d have the 3D scene.’
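The scheme Stolle describes is field-sequential colour: the panel writes one colour’s hologram, the matching LED colour flashes, and the eye fuses the red, green and blue fields into one frame. A quick timing check, assuming an illustrative 60Hz full-colour target:

```python
target_fps = 60       # assumed full-colour frames per second
fields_per_frame = 3  # red, green and blue shown in sequence

field_budget_ms = 1000 / (target_fps * fields_per_frame)
print(f"{field_budget_ms:.2f} ms per colour field")
# A ~25ms panel cannot switch within that budget; a 1-2ms panel can.
```

This makes plain why the current 25-millisecond panel rules out colour, while the 1-2 millisecond displays Stolle mentions would fit comfortably inside the per-field budget.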
For full holographic television, screens with pixel sizes of around 30-40 microns would be needed and, again, Stolle claims such displays are available.
Stolle believes that video gaming and desktop applications, such as medical imaging and engineering design, are likely to be the first commercial applications of the technology, but full-scale holographic TV should be possible. SeeReal is now seeking a large-scale partner to develop the technology.