Virtual Navigation of Ambisonics-Encoded Sound Fields

This project was sponsored by the Sony Corporation of America


Virtual navigation of ambisonics-encoded sound fields enables a listener to explore an acoustic space and experience a spatially accurate perception of the sound field. In the ambisonics framework, a measured 3D sound field is represented by its spherical harmonic expansion, and each ambisonics signal represents a different term of the expansion.
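As a concrete consequence of this representation, the number of ambisonics signals is fixed by the truncation order of the expansion: an order-N expansion retains one term per spherical-harmonic degree-and-order pair (n, m), for (N + 1)^2 signals in total. A minimal sketch (the function name is illustrative):

```python
def num_ambisonics_channels(order: int) -> int:
    """Number of spherical-harmonic terms (i.e., ambisonics signals)
    retained by an order-N expansion: one term per pair (n, m) with
    0 <= n <= N and -n <= m <= n, giving (N + 1)**2 in total."""
    return (order + 1) ** 2

# First-order ambisonics (B-format) carries 4 signals (W, X, Y, Z);
# a third-order representation carries 16.
print(num_ambisonics_channels(1))
print(num_ambisonics_channels(3))
```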

Applications of virtual navigation may be found in virtual-reality reproductions of real-world spaces. For example, to reproduce an orchestral performance in virtual reality, navigating an acoustic recording of the performance may yield superior spatial and tonal fidelity compared to that produced through acoustic simulation of the performance. Navigation of acoustic recordings may also be preferable when reproducing real-world spaces for which computer modeling of complex wave phenomena and room characteristics may be too computationally intensive for real-time playback and interaction.

The following write-up discusses some fundamental aspects of virtual navigation and describes current avenues of research.

1. How does virtual navigation work?

One technique for virtual navigation, proposed by Schultz and Spors in their 2013 paper, involves transforming from the spherical-harmonic representation of the sound field into a finite-term plane-wave expansion. In this representation, each signal corresponds to a plane wave incident from a different direction. Translation of the listener can then be achieved by applying a frequency-domain phase factor (or, equivalently, a delay in the time domain) to each plane-wave term, based on the direction of travel of the listener relative to the propagation direction of each plane wave.
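The phase-factor translation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes plane waves of the form f(t - n·r/c), so that translating the listener by a vector d multiplies the spectrum of the l-th plane wave by exp(-jk n_l·d), where k = 2*pi*f/c; under a different sign convention the exponent flips.

```python
import numpy as np

def translate_plane_waves(spectra, directions, displacement, freqs, c=343.0):
    """Translate a listener within a plane-wave-expanded sound field by
    applying a frequency-domain phase factor to each plane-wave term.

    spectra:      (L, F) complex array, one spectrum per plane wave
    directions:   (L, 3) unit propagation vectors n_l
    displacement: (3,) listener translation vector d, in meters
    freqs:        (F,) frequencies in Hz
    c:            speed of sound in m/s

    For a plane wave f(t - n.r/c), the signal at the translated position
    is delayed by (n_l . d) / c, i.e. multiplied in the frequency domain
    by exp(-1j * k * n_l . d) with k = 2*pi*f/c.
    """
    k = 2.0 * np.pi * np.asarray(freqs) / c                    # (F,) wavenumbers
    proj = np.asarray(directions) @ np.asarray(displacement)   # (L,) n_l . d
    phase = np.exp(-1j * np.outer(proj, k))                    # (L, F) phase factors
    return np.asarray(spectra) * phase
```

Note that moving along a wave's propagation direction (n_l·d > 0) delays that term, while moving against it advances the term, which is what produces the direction-dependent parallax effect of translation.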

Alternatively, one can directly compute a new set of ambisonics signals by re-expanding the sound field about a translated expansion point using frequency-domain translation coefficients (see Fast Multipole Methods for the Helmholtz Equation in Three Dimensions by Gumerov and Duraiswami). These techniques are described in more detail in this paper, and real-time implementations are currently being developed.
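Schematically, and in the (R|R) regular-to-regular translation-operator notation of Gumerov and Duraiswami, the re-expansion approach computes the coefficients about the translated point as a linear combination of the original coefficients (in practice both sums are truncated at a finite order):

```latex
% Re-expansion of the sound-field coefficients C_n^m about a point
% translated by a vector d, using regular-to-regular translation
% coefficients (R|R); the sum over n is truncated in practice.
C'^{\,m'}_{n'}(\omega) \;=\; \sum_{n=0}^{\infty} \sum_{m=-n}^{n}
  (R|R)^{\,m'\,m}_{n'\,n}(\omega,\mathbf{d})\; C^{\,m}_{n}(\omega)
```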

2. What are the limitations?

A well-known limitation of the ambisonics framework is that a finite-order expansion of a sound field yields only an approximation to that sound field, and the accuracy of this approximation decreases with increasing frequency and with distance from the expansion center. Consequently, the prospect of navigating such a sound field is inherently limited. Indeed, the techniques for virtual navigation described above have been shown to introduce spectral coloration and degrade localization as the listener navigates farther away from the expansion center. Objective metrics and listening experiments are currently being developed in an attempt to quantify and predict such effects.
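The frequency-and-distance trade-off above is often summarized by the rule of thumb N >= kr: an order-N expansion remains accurate only within a radius r of roughly N/k = N*c/(2*pi*f) of the expansion center. A small sketch of this estimate (the function name is illustrative, and the criterion itself is only a rule of thumb, not a hard bound):

```python
import math

def sweet_spot_radius(order: int, freq_hz: float, c: float = 343.0) -> float:
    """Rule-of-thumb radius (meters) of the region around the expansion
    center in which an order-N ambisonics representation is accurate,
    from the common criterion N >= k*r, i.e. r ~ N * c / (2 * pi * f)."""
    k = 2.0 * math.pi * freq_hz / c  # wavenumber
    return order / k

# At 1 kHz, a first-order sound field is accurate only within a few
# centimeters of the expansion center, which illustrates why
# navigation away from that point degrades fidelity.
print(sweet_spot_radius(1, 1000.0))
```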

Another well-known limitation of ambisonics theory is that the captured sound field is only valid (and therefore only likely to be accurate) in a source-free region around the expansion center. Consequently, the usable region of the captured sound field is, in principle, limited by the nearest sound source. Techniques to estimate the sound field beyond such near-field sources are currently being explored.

3. How can these limitations be overcome?

A navigational technique recently developed at the 3D3A Lab employs an array of ambisonics microphones (which are themselves arrays of microphone capsules) distributed throughout the sound field (or, equivalently, samples a synthetic sound field at discrete positions). Doing so not only provides a more accurate description of the sound field at any intermediate position between the microphones, but also enables navigation near to and around sound sources. This technique and its advantages are described in this paper.