Loudspeaker and Microphone Array Signal Processing

Personal audio systems are designed to deliver spatially separated regions of audio to individual listeners. This paper presents a method for improving the privacy of such systems. The level of a synthetic masking signal is optimised to provide specified levels of intelligibility in the bright and dark sound zones, and to reduce the potential annoyance of listeners in the dark zone, by responding to changes in ambient noise.
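As a minimal sketch of the level-adaptation idea (the criterion, names, and thresholds below are illustrative assumptions, not the paper's optimisation), the masker can track an ambient-noise estimate so that it only supplies the masking energy the ambient noise does not already provide:

```python
import numpy as np

def update_masker_level(speech_dark_db, ambient_db,
                        target_margin_db=-6.0, max_level_db=75.0):
    """Return a masking-signal level (dB SPL) for the dark zone.

    speech_dark_db   : estimated level of leaked programme audio
    ambient_db       : current ambient noise level estimate
    target_margin_db : desired speech-to-(masker + ambient) ratio;
                       more negative means lower intelligibility
    """
    # Total masking energy needed to pull the speech-to-noise
    # ratio down to the target margin.
    required_db = speech_dark_db - target_margin_db
    # The ambient noise already masks; add synthetic energy only
    # for the shortfall (power subtraction in the linear domain).
    shortfall = 10 ** (required_db / 10) - 10 ** (ambient_db / 10)
    if shortfall <= 0:
        return -np.inf                      # ambient noise alone suffices
    # Cap the masker so it cannot become annoyingly loud.
    return min(10 * np.log10(shortfall), max_level_db)
```

Tracking the ambient level in this way is what keeps the masker no louder than necessary, which is the stated route to reducing dark-zone annoyance.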

The Steered Response Power with Phase Transform (SRP-PHAT) is one of the most widely used techniques for Direction of Arrival (DOA) estimation with microphone arrays, owing to its robustness against adverse acoustic conditions such as reverberation and noise. Its main drawback is that its computational complexity grows as the search space increases. To address this issue, we propose the use of Neural Networks (NN) to obtain the DOA, posed as a regression problem, from a low-resolution SRP-PHAT power map.
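For illustration, a low-resolution SRP-PHAT power map of the kind that would be fed to such a regression network can be computed as below; this is a generic sketch (function and parameter names are ours), and the NN regressor itself is omitted:

```python
import numpy as np

def srp_phat_map(frames, mic_pos, az_grid, fs=16000, c=343.0):
    """Coarse SRP-PHAT power map over candidate azimuths.

    frames  : (n_mics, n_samples) time-domain snapshot
    mic_pos : (n_mics, 2) microphone xy coordinates in metres
    az_grid : candidate azimuths in radians (low resolution)
    """
    n_mics, n = frames.shape
    X = np.fft.rfft(frames, axis=1)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    power = np.zeros(len(az_grid))
    for a, az in enumerate(az_grid):
        u = np.array([np.cos(az), np.sin(az)])        # direction toward source
        for i in range(n_mics):
            for j in range(i + 1, n_mics):
                cross = X[i] * np.conj(X[j])
                cross = cross / (np.abs(cross) + 1e-12)   # PHAT weighting
                tau = (mic_pos[j] - mic_pos[i]) @ u / c   # candidate TDOA
                # GCC-PHAT evaluated at the candidate delay
                power[a] += np.real(np.sum(cross * np.exp(2j * np.pi * freqs * tau)))
    return power   # a NN regressor would map this coarse map to a DOA
```

Keeping `az_grid` coarse is exactly what bounds the cost of the map, with the network recovering the fine-grained DOA.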

In this paper, we compare the performance of two active dereverberation techniques using a planar array of microphones and loudspeakers. Both techniques are based on a solution to the Kirchhoff-Helmholtz Integral Equation (KHIE). We adapt a Wave Field Synthesis (WFS) based method to real-time 3D dereverberation by using a low-latency pre-filter design. We also propose the use of First-Order Differential (FOD) models as an alternative to WFS monopoles; unlike the WFS method, this approach does not assume knowledge of the room geometry or the primary sources.
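As one illustrative fragment, a short FIR approximating the classical +3 dB/octave (sqrt(j*omega)) WFS pre-filter can be designed as follows; this is only a generic sketch of how a pre-filter's latency can be bounded, not the specific low-latency design proposed in the paper:

```python
import numpy as np
from scipy.signal import firwin2

def wfs_prefilter(numtaps=63, fs=48000, f_lo=50.0, f_hi=18000.0):
    """Short linear-phase FIR with a sqrt(f) (+3 dB/octave) magnitude,
    the classical WFS pre-filter characteristic. Keeping numtaps small
    bounds the group delay at (numtaps - 1) / 2 samples (~0.65 ms here),
    which is what makes a real-time driving-function pipeline feasible.
    """
    f = np.linspace(0.0, fs / 2, 256)
    gain = np.sqrt(np.clip(f, f_lo, f_hi))   # flatten response outside the band
    gain /= gain.max()                        # normalise to unity at f_hi
    return firwin2(numtaps, f, gain, fs=fs)
```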

Loudspeaker drivers are subject to nonlinear distortion in the low-frequency range at high input levels. In sound zone control, distortion not only reduces the acoustic contrast between zones but also introduces perceptible artefacts. Standard sound zone methods, such as acoustic contrast control, constrain the overall input power; the individual loudspeaker drivers are not controlled, yet the nonlinear distortion is mainly produced by the drivers with the highest input power.
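The standard acoustic contrast control solution makes this uncontrolled per-driver effort easy to see: the filter is the dominant generalized eigenvector of the bright- and dark-zone correlation matrices, with only a global regularisation term. A minimal single-frequency sketch (names are ours):

```python
import numpy as np
from scipy.linalg import eigh

def acoustic_contrast_control(G_b, G_d, reg=1e-6):
    """Classic acoustic contrast control (ACC) filter at one frequency.

    G_b : (n_mics_bright, n_drivers) plant matrix to the bright zone
    G_d : (n_mics_dark,   n_drivers) plant matrix to the dark zone

    Maximises bright-zone energy relative to dark-zone energy. Note
    that only the *overall* source effort is normalised: nothing stops
    a single driver from taking most of the input power, which is
    where the nonlinear distortion discussed above originates.
    """
    R_b = G_b.conj().T @ G_b
    R_d = G_d.conj().T @ G_d + reg * np.eye(G_d.shape[1])
    # Largest generalized eigenvector of (R_b, R_d) maximises contrast.
    w, v = eigh(R_b, R_d)
    q = v[:, -1]
    return q / np.linalg.norm(q)   # unit overall input power
```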

A method to create multipoles as a cluster of focused sources rendered by a linear loudspeaker array has recently been investigated. Its directivity in a listening area was confirmed with examples of primitive multipoles such as dipoles and quadrupoles. This paper describes a method to create a sound source with more complex directivity by superposing multipoles, each comprising a collection of focused sources.
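A minimal sketch of the superposition idea under free-field assumptions, treating each focused source as an ideal point source in the horizontal plane (which ignores the focusing artefacts of a real array):

```python
import numpy as np

def multipole_field(cluster_pos, weights, field_pts, k):
    """Pressure from a cluster of (already focused) point sources,
    superposed with signed weights to shape a multipole.

    cluster_pos : (n_src, 2) positions of the focused sources
    weights     : (n_src,) e.g. [+1, -1] for a dipole,
                  [+1, -1, -1, +1] on a 2x2 grid for a quadrupole
    field_pts   : (n_pts, 2) listening-area evaluation points
    k           : wavenumber 2*pi*f/c
    """
    r = np.linalg.norm(field_pts[:, None, :] - cluster_pos[None, :, :], axis=-1)
    g = np.exp(-1j * k * r) / (4 * np.pi * r)   # free-field Green's function
    return g @ weights

# A dipole: two focused sources a small distance apart, opposite signs.
d, k = 0.05, 2 * np.pi * 1000 / 343.0
pos = np.array([[0.0, d / 2], [0.0, -d / 2]])
pts = np.array([[1.0, 0.0], [0.0, 1.0]])        # broadside and on-axis points
p = multipole_field(pos, np.array([1.0, -1.0]), pts, k)
```

More complex directivities follow by stacking further signed clusters into `cluster_pos` and `weights`.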

This paper presents a novel framework for active exploration in the context of acoustic Simultaneous Localization and Mapping (SLAM) using a microphone array mounted on a mobile robotic agent. Acoustic SLAM aims to build a map of the acoustic sources present in the environment while simultaneously estimating the agent's own trajectory and position within this map. Two important aspects of this task are robustness against disturbances arising from reverberation and sensor imperfections, and an appropriate degree of exploration to achieve high map accuracy.

In this paper, we consider the problem of jointly localizing a microphone array and identifying the direction of arrival of acoustic events. Under the assumption that the sources are in the far field, this problem can be formulated as a constrained low-rank matrix factorization with an unknown column offset. Our focus is on handling missing entries, particularly when the measurement matrix does not contain a single complete column. This case has not received attention in the literature and is not handled by existing algorithms; it is, however, prevalent in practice.
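For context, a generic masked alternating-least-squares baseline for the model M ≈ AB + t1ᵀ is sketched below; it tolerates missing entries without requiring any complete column, but it is a plain baseline, not the constrained algorithm proposed here:

```python
import numpy as np

def lowrank_offset_als(M, mask, rank, n_iter=200, seed=0):
    """Alternating least squares for M ~ A @ B + t @ 1^T, observed only
    where mask is True. Assumes every row and column has at least one
    observation; the paper's constraints are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    A = rng.standard_normal((m, rank))
    B = rng.standard_normal((rank, n))
    t = np.zeros((m, 1))
    for _ in range(n_iter):
        R = np.where(mask, M - t, 0.0)       # offset-corrected residual
        # Update each column of B from its observed rows only.
        for j in range(n):
            rows = mask[:, j]
            B[:, j], *_ = np.linalg.lstsq(A[rows], R[rows, j], rcond=None)
        # Update each row of A symmetrically.
        for i in range(m):
            cols = mask[i]
            A[i], *_ = np.linalg.lstsq(B[:, cols].T, R[i, cols], rcond=None)
        # The column offset absorbs the per-row mean of the residual.
        resid = np.where(mask, M - A @ B, np.nan)
        t = np.nanmean(resid, axis=1, keepdims=True)
    return A, B, t
```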

Self-localization of nodes in a sensor network is typically achieved using either range or direction measurements; in this paper, we show that a constructive combination of both improves the estimation. We propose two localization algorithms that make use of the differences between the sensors' coordinates, or edge vectors, which can be calculated from measured distances and angles. Our first method improves the existing edge-multidimensional scaling algorithm (E-MDS) by introducing additional constraints.
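A small sketch of the underlying edge-vector idea (not E-MDS itself): each measured distance/angle pair yields an edge vector v_ij = d_ij·[cos θ_ij, sin θ_ij] ≈ x_j − x_i, and positions follow from a linear least-squares problem once one node is anchored:

```python
import numpy as np

def locate_from_edge_vectors(edges, n_nodes):
    """Least-squares node positions from edge vectors v_ij = x_j - x_i.

    edges : list of (i, j, v_ij) with v_ij a length-2 array.
    Node 0 is pinned at the origin to remove the translation ambiguity
    (a common rotation would also need a reference direction).
    """
    A = np.zeros((2 * len(edges) + 2, 2 * n_nodes))
    b = np.zeros(2 * len(edges) + 2)
    for k, (i, j, v) in enumerate(edges):
        for d in range(2):                 # x and y components
            row = 2 * k + d
            A[row, 2 * j + d] = 1.0        # +x_j
            A[row, 2 * i + d] = -1.0       # -x_i
            b[row] = v[d]
    A[-2, 0] = A[-1, 1] = 1.0              # anchor: x_0 = (0, 0)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n_nodes, 2)

# Three nodes of a right triangle, edges measured as vectors.
edges = [(0, 1, np.array([1.0, 0.0])),
         (0, 2, np.array([0.0, 1.0])),
         (1, 2, np.array([-1.0, 1.0]))]
print(locate_from_edge_vectors(edges, 3))   # (0,0), (1,0), (0,1)
```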

The use of an external microphone in conjunction with an existing local microphone array can be particularly beneficial for the noise reduction tasks that are critical for hearing devices such as cochlear implants and hearing aids. Recent work has already demonstrated how an external microphone signal can be effectively incorporated into a common noise reduction technique, the Minimum Variance Distortionless Response (MVDR) beamformer.
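For reference, the MVDR weights for the extended array, i.e. the local microphones with the external microphone stacked as one additional channel, follow the standard closed form; a sketch, applied independently per STFT bin:

```python
import numpy as np

def mvdr_weights(R_nn, d):
    """MVDR beamformer w = R^{-1} d / (d^H R^{-1} d).

    R_nn : (n_ch, n_ch) noise covariance of the extended array,
           local microphones plus the external microphone
    d    : (n_ch,) steering vector / relative transfer function of
           the target source at the extended array
    """
    Rinv_d = np.linalg.solve(R_nn, d)
    return Rinv_d / (d.conj() @ Rinv_d)
```

The external channel needs no special treatment in this form: it simply extends `d` and `R_nn` by one row and column, which is how the incorporation described above can be realised.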