AES E-Library

Mobile robotic platforms are equipped with multimodal, human-like sensing, e.g., haptic, visual, and auditory sensing, in order to collect data from the environment. Recently, robotic binaural hearing approaches based on Head-Related Transfer Functions (HRTFs) have become a promising technique for localizing sounds in a three-dimensional environment with only two microphones. Usually, HRTF-based sound localization approaches are restricted to a single sound source. To overcome this limitation, Blind Source Separation (BSS) algorithms have been used to separate the sound sources before applying HRTF localization. However, such approaches are usually computationally expensive and, in the underdetermined case, restricted to sparse and statistically independent signals. In this paper we present underdetermined sound localization that utilizes a superpositioned HRTF database. Our algorithm is capable of localizing sparse as well as broadband signals, even when the signals are not statistically independent.
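The core idea of matching a binaural observation against superposed HRTF renderings can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the random FIR filters stand in for measured HRTFs, the source waveforms are taken as known (the paper's method does not require this), and the exhaustive search over direction pairs is a toy stand-in for the actual algorithm, which is not detailed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical HRTF database: for each of D candidate directions,
# one FIR filter pair (left ear, right ear). Real HRTFs would be
# measured; random filters are used here purely for illustration.
D, taps = 8, 32
hrtf_l = rng.standard_normal((D, taps))
hrtf_r = rng.standard_normal((D, taps))

def binaural(signal, d):
    """Render a mono signal from direction d through the HRTF pair."""
    return (np.convolve(signal, hrtf_l[d]),
            np.convolve(signal, hrtf_r[d]))

def localize_two(obs_l, obs_r, src_a, src_b):
    """Exhaustively match the observed two-microphone mixture against
    superpositions of HRTF-rendered sources; return the direction pair
    with the smallest residual energy."""
    best, best_err = None, np.inf
    for da in range(D):
        la, ra = binaural(src_a, da)
        for db in range(D):
            lb, rb = binaural(src_b, db)
            err = (np.sum((obs_l - (la + lb)) ** 2) +
                   np.sum((obs_r - (ra + rb)) ** 2))
            if err < best_err:
                best, best_err = (da, db), err
    return best

# Two broadband, non-sparse sources arriving from directions 2 and 5;
# the microphones observe only the summed (underdetermined) mixture.
s1, s2 = rng.standard_normal(256), rng.standard_normal(256)
l1, r1 = binaural(s1, 2)
l2, r2 = binaural(s2, 5)
obs_l, obs_r = l1 + l2, r1 + r2

print(localize_two(obs_l, obs_r, s1, s2))  # (2, 5)
```

Because the superposition of both rendered sources is matched jointly, no prior source separation step is needed, which is the property the abstract highlights over BSS-based pipelines.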