Recreating sound in three dimensions

Realistic computer sound, tuned specifically for each listener, could come a step closer thanks to a new, free public database of acoustic measurements developed by researchers at the University of California, Davis (UCD).

‘We’ve captured the critical information needed to reproduce actual sounds as each listener perceives them,’ said Ralph Algazi, who led the research team at the UC Davis Centre for Image Processing and Integrated Computing (CIPIC).

Spatially realistic sound reproduction would help the development of wearable, voice-controlled computers and of virtual reality environments for exploring data, Algazi said.

According to UCD, sound waves pick up changes between leaving their source and reaching our ears as they bounce off surfaces such as walls, and off the listener's body and outer ears. These changes, and the small differences between what each ear receives, let us hear in three dimensions and tell us where a sound is coming from.
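
One of those between-ear differences is the interaural time delay: a sound from one side reaches the nearer ear a fraction of a millisecond before the other. The sketch below, which is illustrative rather than part of the UCD work and assumes NumPy, estimates that delay from two ear signals by cross-correlation.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds.
    Positive means the sound reached the left ear first."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)  # delay of right ear re left, in samples
    return lag / fs

# Toy demo: a click that reaches the right ear 0.5 ms after the left.
fs = 44100
click = np.zeros(1024)
click[100] = 1.0
delay = int(0.0005 * fs)                     # about 22 samples at 44.1 kHz
left, right = click, np.roll(click, delay)
print(f"Estimated ITD: {estimate_itd(left, right, fs) * 1e3:.2f} ms")
```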

These changes are captured by the head-related transfer function (HRTF), which varies from person to person depending on factors such as the size and shape of the ears. Knowing a listener's HRTFs, researchers can design personalised software or hardware to reproduce spatially accurate, realistic computer-generated sound.
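
In practice, giving a mono signal an apparent position amounts to convolving it with a pair of measured ear responses. The following sketch shows that standard technique; the helper name and the placeholder responses are invented for illustration (a real system would draw its impulse responses from a measured database such as CIPIC's), and it assumes NumPy and SciPy.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left- and right-ear head-related
    impulse responses (the time-domain form of an HRTF) to give it
    an apparent position in space."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # (samples, 2) stereo array

# Placeholder responses; real ones would come from measurement.
fs = 44100
hrir_left = np.random.randn(200) * np.hanning(200)
hrir_right = 0.7 * np.roll(hrir_left, 15)    # crude delay plus attenuation
stereo = render_binaural(np.random.randn(fs), hrir_left, hrir_right)
```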

UC Davis researchers Algazi, Richard Duda and Dennis Thompson surrounded subjects with a ring of speakers and placed microphones in their ears. By playing a test signal and comparing it with what the microphones recorded, they could work out unique HRTFs for sounds arriving from 2,500 different locations at the ears of 43 different people, plus a dummy head fitted with different-sized ears.
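
The article does not give the signal-processing details, but comparing a known test signal with an in-ear recording is classically done by frequency-domain deconvolution. The regularised division below is a minimal sketch of that general idea, assuming NumPy, not a reconstruction of UCD's published method.

```python
import numpy as np

def measure_hrtf(test_signal, ear_recording, eps=1e-8):
    """Recover a transfer function by dividing the spectrum recorded
    at the ear by the spectrum of the known test signal; eps keeps
    near-zero frequency bins from blowing up the division."""
    n = len(ear_recording)
    X = np.fft.rfft(test_signal, n)
    Y = np.fft.rfft(ear_recording, n)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)
    return H  # complex HRTF; np.fft.irfft(H, n) gives the impulse response
```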

The team are making their data available to other researchers over the internet and on a CD.

The project was funded by a grant from the National Science Foundation, with sponsorship from Aureal Semiconductor, Creative Advanced Technology Centre, Hewlett-Packard, Interval Research Corporation and the Digital Media Innovation (DiMI) program of the University of California.