Abstract

Since the earliest days of multi-microphone live recording, the problem of spillage has dogged the sound engineer. Numerous strategies have evolved, including microphone placement, acoustic screening, gating and phase inversion. The acoustic content of spillage can vary from a near-direct signal, as with adjacent mics on a drum kit, to almost pure reverb, as in a live recording with acoustically significant spacing between the performers. In certain physical setups the problem is unavoidable, and it inevitably compromises the degree of control that can be exercised when mixing. It is principally for this reason that it is considered ‘a problem’.

If spillage could be tamed, the impact on all production would indeed be profound. Classical recordings might afford the producer radical new “Rock’n’Roll” interventionist techniques; rock producers might be tempted to allow bands to play live in a room even when a highly “separated” sound is the ultimate goal; and jazz musicians might avoid having to wear the headphones they so often dread. That is only the beginning.

This paper presents a radical new working methodology that can dramatically reduce spillage in a way never before possible, by utilising convolution technology that could be coupled with almost any “traditional” recording technique; the focus here is on time-delayed and ambient problems. A unique Max/MSP patch will be demonstrated, and audio examples will be played to illustrate the effectiveness of the approach. The paper builds on commonly understood theory, yet demonstrates for the first time one of tomorrow’s “traditional” recording techniques.
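To make the underlying idea concrete, here is a minimal sketch of one plausible reading of the convolution approach: if the spill path from an instrument into a distant microphone can be modelled as an impulse response, the instrument's close-mic signal can be convolved through that response and subtracted from the contaminated feed. All names and parameters below are illustrative assumptions; the paper's actual Max/MSP patch is not reproduced here.

```python
import numpy as np

def cancel_spill(close_mic, spill_mic, h):
    """Subtract an estimate of close_mic's spill from spill_mic.

    close_mic : signal captured at the source's own microphone
    spill_mic : signal contaminated by spill from that source
    h         : estimated impulse response of the spill path
                (a hypothetical, pre-measured estimate)
    """
    estimate = np.convolve(close_mic, h)[: len(spill_mic)]
    return spill_mic - estimate

# Synthetic demonstration: a delayed, attenuated copy of the source
# leaks into a second mic; a matching h removes the leakage.
rng = np.random.default_rng(0)
source = rng.standard_normal(1000)
wanted = rng.standard_normal(1000)   # what the second mic should capture
h_true = np.zeros(64)
h_true[40] = 0.5                     # 40-sample delay, roughly -6 dB spill
mic_b = wanted + np.convolve(source, h_true)[:1000]

cleaned = cancel_spill(source, mic_b, h_true)
residual = cleaned - wanted
print(np.max(np.abs(residual)))      # near zero when h matches the path
```

In practice the impulse response would have to be estimated (and reverberant paths are far longer than 64 samples), so the residual would not vanish as cleanly as in this idealised example.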