An innovation from the UK will generate scenes on a computer screen showing subtle alterations in lighting as the viewer changes position. The technique aims to reproduce complex changing reflections as someone moves through a scene such as a virtual architectural or television environment.
The concept has been devised by Mel Slater, Professor of Virtual Environments at University College London, whose research focuses on the synthesis of images on a computer screen, or ‘image rendering’.
An example might be a box with reflective sides in the centre of an empty room, illuminated by light from a window. The light reflects off the walls and floor as well as the box. The reflected light then impinges on other surfaces, creating a series of complex interactions, and the patterns will be seen to alter as the viewer moves around the room.
For a given scene, for example the box in the room, millions of abstract invisible ‘rays’ criss-cross the room. Each ray contains information about the light travelling along its particular path.
One ray may intersect the source of light for the scene, such as the window. The light energy contained within that segment of the ray will be defined by the intensity and colour of the light. That ray may then impinge on the box, whose reflective properties will be known.
The reflected segment of the ray will carry information about this attenuated light until it meets another object, such as the wall or ceiling, where the process repeats.
One such scene may require approximately 200 million such rays.
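The segment-by-segment picture above can be sketched in a few lines. This is a hypothetical illustration, not Slater's actual data structures: `propagate`, the RGB triples, and the simple per-channel attenuation model are all assumptions made for the example.

```python
# Hypothetical sketch of one ray, not Slater's actual formulation: the
# ray is a chain of straight segments, and each segment carries the RGB
# light remaining after successive reflections.

def propagate(source_rgb, reflectances):
    """Return the light carried along each segment of the ray.

    source_rgb   -- light leaving the source (e.g. the window)
    reflectances -- RGB reflectance of each surface the ray meets
    """
    segments = [source_rgb]
    light = source_rgb
    for rho in reflectances:
        # On reflection, each colour channel is scaled by the
        # surface's reflectance in that channel.
        light = tuple(c * r for c, r in zip(light, rho))
        segments.append(light)
    return segments

# A ray from the window strikes the reflective box, then a grey wall.
window = (1.0, 1.0, 0.9)
print(propagate(window, [(0.9, 0.2, 0.2), (0.5, 0.5, 0.5)]))
```

In a full system, each of the roughly 200 million rays would hold such a list of segments, computed once for the whole scene.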
Viewing the scene comes courtesy of a ‘virtual lens’, with an ‘image plane’ behind the lens. The light is focused by the lens onto the image plane, an arrangement analogous to the lens and retina of the human eye.
Images can be rendered very quickly as the position of the lens is changed, because all the information about the light environment at any point has been computed in advance.
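Why precomputation makes rendering fast can be shown with a toy sketch, again under illustrative assumptions: here the field is reduced to a table of ray directions (angles in a 2D slice) mapped to a brightness value, so that rendering for a new viewpoint is only a lookup rather than a fresh light simulation.

```python
import math

# Hypothetical sketch: the expensive light-transport step runs once,
# filling a table with one brightness value per ray direction. The
# cosine pattern here is a stand-in for real computed radiance.

def precompute_field(n_rays):
    return {k: abs(math.cos(2 * math.pi * k / n_rays))
            for k in range(n_rays)}

def render_pixel(field, angle):
    # Snap the viewing direction to the nearest precomputed ray and
    # read off its stored light -- no simulation at view time.
    n = len(field)
    k = round(angle / (2 * math.pi) * n) % n
    return field[k]

field = precompute_field(360)   # done once, offline
# Moving the viewer only changes which rays are looked up.
print(render_pixel(field, 0.0))
print(render_pixel(field, math.pi / 2))
```

The design choice mirrors the article's point: the cost is paid up front when the rays are traced, so changing the lens position at view time is cheap.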
On the web at: www.cs.ucl.ac.uk/staff/m.slater/vr/Projects/VLF/index.htm