Simulating the acoustics of 3D rooms

The NESS project is developing next-generation sound synthesis techniques based on physical models of acoustical systems. One key system targeted by NESS is the acoustics of 3D rooms.

Computer simulation of 3D room acoustics has many practical applications such as the design of concert halls, virtual reality systems, and artificial reverberation effects for electroacoustic music and video games.

Below, Brian Hamilton of the Reid School of Music, University of Edinburgh, explains the work of NESS. EPCC manages the project and ports Matlab codes written by the acoustics PhD students to C and CUDA.

Computational complexity

The sheer size of some listening spaces (eg concert halls), and the desire to represent sound waves up to the limit of human hearing (20kHz), have long put this problem beyond the reach of grid-based methods (eg finite difference methods).

To reproduce the acoustics of the Royal Albert Hall (86,650 m³) up to the limit of human hearing, a grid-based method using the simplest time-integration methods would require, by sampling considerations alone, at least 1TB of memory (approx. 1 cm mesh resolution, single precision) and nearly 6 petaFLOPS for real-time output. A smaller example is the Usher Hall in Edinburgh (15,700 m³), which requires 190GB of memory and 1 petaFLOPS for real-time output – a load only manageable by a supercomputer like ARCHER, the national HPC service hosted by EPCC.
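These memory figures can be checked with a back-of-envelope calculation. The sketch below is purely illustrative: it assumes a uniform 1 cm grid and three single-precision copies of the pressure field (one plausible storage layout for a simple two-step scheme; the exact layout used is not stated here).

```python
# Back-of-envelope memory estimate for grid-based room simulation.
# Assumptions (illustrative): uniform 1 cm grid; a two-step scheme
# storing three single-precision (4-byte) copies of the field.

H = 0.01                  # grid spacing in metres (~1 cm resolution)
BYTES_PER_CELL = 4 * 3    # single precision x three time levels

def fdtd_memory_bytes(volume_m3):
    """Rough memory footprint for simulating a room of a given volume."""
    cells = volume_m3 / H ** 3
    return cells * BYTES_PER_CELL

royal_albert_hall = fdtd_memory_bytes(86_650)  # ~1.0e12 bytes (about 1 TB)
usher_hall = fdtd_memory_bytes(15_700)         # ~1.9e11 bytes (about 190 GB)
```

Under these assumptions the estimates land close to the quoted figures of 1TB and 190GB.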

Due to the size of these computational loads, commercial packages for the simulation of room acoustics employ simplifying assumptions that allow the use of cheaper ray-based techniques borrowed from the graphics community. These techniques, however, do not capture essential details of room acoustics, such as wave diffraction and interference.

Wave-based numerical methods

Wave-based numerical methods, such as finite differences, promise to capture all of these details. They also allow virtual instruments to be embedded in a 3D room, with two-way coupling between the acoustic field in air and the virtual instrument (see earlier blog post and EPCC News 71).

Simulations of concert halls at full audio rates may be beyond grid-based methods on commodity hardware for some years yet, but current personal computing hardware is sufficient for smaller spaces at audio rates, or concert hall-sized spaces at lower sample rates. But even relatively small simulations may require teraFLOPs for each second of output, leading to long simulation times.

Fortunately, the finite difference algorithms commonly used for these 3D simulations are excellent candidates for HPC on GPUs or multi-core CPUs. These algorithms belong to the class of explicit methods, meaning that each point-wise update in the algorithm may be computed in parallel. Also, the stencil operation at each point is conducive to memory-coalesced reads across neighbouring points (threads) – essential for GPU speed-ups.

Without parallelisation, one second of audio output from a room of only 100 m³ would require nearly 10 core-hours of computation on a current desktop PC. With parallelisation, this calculation can be reduced to tens of minutes: still not suitable for real-time use, but good enough for offline applications. Typically, GPU acceleration yields at least a 10-times speed-up over serial CPU codes, and speed-ups of 40-70 times are common with a professional-grade NVIDIA card, such as the Tesla K20 GPU cards being employed in the NESS project and hosted at EPCC. Further speed-ups can be achieved by using multiple GPU cards in parallel (see study by Craig J. Webb and Alan Gray).
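To make the point-wise nature of these updates concrete, here is a minimal serial sketch of one time step of a standard 7-point explicit scheme for the 3D wave equation (names, grid size and boundary treatment are illustrative, not taken from the NESS codes). Each grid point reads only values from earlier time levels, so in a CUDA port every (i, j, k) could map to its own thread.

```python
# One explicit leapfrog step of the 3D wave equation on a uniform grid,
# with fixed (zero) boundaries. lam2 = (c*dt/dx)**2 must satisfy
# lam2 <= 1/3 for stability of this 7-point scheme.

def fdtd_step(u, u_prev, lam2):
    """Return the next time level; u and u_prev are nested lists [i][j][k]."""
    n = len(u)
    u_next = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for i in range(1, n - 1):              # interior points only
        for j in range(1, n - 1):
            for k in range(1, n - 1):
                s = (u[i-1][j][k] + u[i+1][j][k]
                     + u[i][j-1][k] + u[i][j+1][k]
                     + u[i][j][k-1] + u[i][j][k+1])
                # Point-wise update: depends only on old time levels, so
                # every grid point can be computed independently in parallel.
                u_next[i][j][k] = (2.0 * u[i][j][k] - u_prev[i][j][k]
                                   + lam2 * (s - 6.0 * u[i][j][k]))
    return u_next

# An impulse at the centre spreads symmetrically to its six neighbours.
n = 9
zeros = lambda: [[[0.0] * n for _ in range(n)] for _ in range(n)]
u_prev, u = zeros(), zeros()
u[4][4][4] = 1.0
u_next = fdtd_step(u, u_prev, 1.0 / 3.0)
```

Because no point in `u_next` depends on any other point in `u_next`, the triple loop can be replaced wholesale by one GPU thread per grid point.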

Another important consideration is the accuracy of these numerical methods. Approximation errors may require that the grid resolution be set higher than the bare minimum, and this has significant impacts on computational costs. Generally, as the grid resolution in 3D is increased, the memory requirements grow cubically and the required FLOPs grow quartically, because the stable time step must shrink in proportion to the grid spacing. While any level of accuracy may be achieved by setting the grid resolution high enough, this strategy quickly becomes impractical for 3D rooms. As such, much of the research in this area, and within the NESS project itself, is focussed on improving the cost-effectiveness of these algorithms. Often a trade-off arises between increased accuracy and the ease of implementing suitable boundary conditions for room acoustics.
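The cubic and quartic growth rates follow from a simple count; a sketch, assuming (as the stability conditions of these schemes require) that the time step shrinks in proportion to the grid spacing:

```python
# Cost of refining the grid by a factor r in each dimension, for a fixed
# room volume and a fixed duration of simulated audio.

def refine_costs(r):
    points = r ** 3          # grid points (hence memory) grow cubically
    steps = r                # time steps grow linearly (dt shrinks with dx)
    flops = points * steps   # total operations grow quartically
    return points, flops

doubled = refine_costs(2)    # doubling the resolution: 8x memory, 16x work
```

This is why halving the approximation error by brute-force refinement quickly becomes impractical for room-sized domains.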

Implicit methods

One class of algorithms, namely implicit methods, is known to offer higher accuracy at the cost of solving a linear system of equations at each time step. Implicit methods are often formulated as alternating direction implicit (ADI) schemes, which allow the use of direct linear system solvers, ultimately based on Gaussian elimination. However, the incorporation of boundary conditions suitable for room acoustics into ADI schemes remains an open problem.
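Each directional sweep of an ADI scheme reduces to a set of tridiagonal linear systems, which a direct solver handles in O(N) operations via the Thomas algorithm, a specialised form of Gaussian elimination. A minimal sketch in Python, not drawn from the NESS codes:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d, where a, b and c hold the sub-,
    main and super-diagonals of A. One forward-elimination pass followed by
    one back-substitution pass: O(N) work, versus O(N^3) for general
    Gaussian elimination."""
    n = len(b)
    cp = [0.0] * n           # modified super-diagonal
    dp = [0.0] * n           # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A small 1D Laplacian-like system whose exact solution is x = [1, 1, 1].
x = thomas_solve([0.0, -1.0, -1.0], [2.0, 2.0, 2.0],
                 [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0])
```

The linear-in-N cost of each sweep is what makes direct solvers viable inside ADI schemes; the open problem mentioned above concerns the boundary conditions, not the solver itself.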

Another approach, common in CFD applications, is to resort to iterative methods for the linear system of equations. The Jacobi iterative method turns out to be a simple extension of explicit updates, and is thus straightforward to parallelise on a GPU.
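A sketch of why this is so: each Jacobi sweep is a point-wise formula over the previous iterate, so all unknowns can be updated concurrently, just like an explicit stencil update. The small system below is an illustrative diagonally dominant example, not one taken from the NESS codes:

```python
def jacobi(A, b, iterations=50):
    """Solve A x = b by Jacobi iteration (A given as a dense list of rows).
    Each sweep reads only the previous iterate x, so every component of
    x_new can be computed independently -- just like an explicit update."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x_new = [0.0] * n
        for i in range(n):
            off_diag = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - off_diag) / A[i][i]
        x = x_new
    return x

# Diagonally dominant test system with exact solution x = [1, 1, 1].
x = jacobi([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]],
           [5.0, 6.0, 5.0])
```

In a real scheme the matrix would be the sparse stencil operator rather than a dense array, but the structure of the sweep, and hence its suitability for GPU threading, is the same.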

We recently investigated this approach for 3D room acoustics with boundary conditions for simplified room geometries (see study). We found that such implicit methods are well suited to GPU implementation and are more cost-effective than their explicit counterparts when high accuracy is desired.

Future work will focus on hybridising high-accuracy methods with finite volume techniques, which become necessary for modelling irregular room geometries.

Further reading

NESS

NESS is a five-year European Research Council-funded project currently in its third year. It is an exploratory project, concerned entirely with synthetic sound and, in particular, with numerical simulation techniques for physical modelling sound synthesis.

The aim of the project is to explore numerical techniques, especially finite difference time domain methods, for a variety of instrument families. As such methods are numerically intensive, part of the project is devoted to looking at implementations on parallel architectures (multicore processors and general purpose graphics processing units).