Roundup: Project Mastering Master Class

Without blowing our horn like, y’know, too much, we really have been on top of the “project mastering” trend for some time. So for this roundup, we’ll assume you have the basics figured out (you can master at home, but you better know what you’re doing), and concentrate on specific techniques that relate to mastering. Some of these relate to tracking and mixing, too. Also, rather than having our usual review format, we’ll instead pick some cool features from various digital audio editing programs, and show how to apply them to real-world situations.

As to which digital audio editor is best, they all do the job—but they all do the job differently. Also, some have unique features that are essential to some people, but irrelevant to others. I’m very fortunate, because I get to evaluate all these programs while doing reviews, then use whatever I want with music projects. And frankly, I use everything. I’ll often go through three or more programs to get the final result—even crossing back and forth between Windows and Mac.

Of course, buying multiple products gets expensive, but the price of all these programs adds up to about the same as the reel-to-reel two-track machine I used back in the day. We’ve definitely come a long way.

Fig. 1. The upper window shows the response prior to compensating for inconsistencies in the bass range and a wicked peak at about 700Hz; the lower window shows the response for the fixed version.

SO WHAT’S THE DEAL WITH “GOLDEN EARS”?

But first . . . for years, people have talked about the need to use professional mastering engineers, with the usual reasons being “well, they’ve done hit records and have golden ears.” But what are the characteristics of “golden ears” for mastering?

Simple: The ability to detect extremely subtle changes. This is crucial for two reasons. First, applying a processor to a mixed stereo track affects everything—if you boost a particular frequency, you’re boosting that frequency for drums, voice, bass, etc. This is very different from processing an individual track, where it’s often desirable to paint in broad strokes.

Second, mastering typically involves lots of little edits, but these add up to a not-so-little result. Every action does have an equal and opposite reaction; alter the dynamics, and you alter the mix. Boost or cut at a certain frequency, and it will make other frequencies seem softer or louder in comparison.

This is where many wannabe mastering engineers fall short, because they apply “recording thinking” to “mastering thinking.” Mastering is the art of subtlety, and you have to understand which small changes you need to make for a big result.

Fig. 3. Sound Forge has a great fade feature that uses a breakpoint envelope, where you can add as many points as you want to create any arbitrary curve. It’s also possible to preview the fade before committing to it.

THE EQUALIZATION TWO-STEP

If I could only have one processor for mastering, it would be EQ. I actually use two independent EQ processes. The first fixes problems, while the second adds subjective tonal improvements. For fixing, I call up the file in Har-Bal to see what’s going on in the overall audio spectrum (Figure 1). The heart of the program is an 8,192 stage FIR equalizer, but it also displays an average of the energy distribution across the audio spectrum in 1/6 octave bands (you can change this, but 1/6 octave is my preferred setting). Looking at the display can provide an “early warning system” for any frequency response anomalies, although of course you can’t make any final determinations without using your ears (and brain). Common problems are:

· Bass doesn’t roll off at subsonic frequencies. Cutting everything below 20–30Hz can clean up the sound and open up a bit more headroom (also see the section “Remove the Subsonics”).

· Bass range peaks and dips. This is usually due to room issues where the recording was made, but be careful in your analysis—there may be a major kick drum that causes an intended peak. However, this tends to be a single blob of energy, whereas room issues cause a curve that looks more like “ripples” due to multiple resonances.

· Too many highs. What with distorted guitars, aliasing that generates weird harmonics, digital clipping, and the like, today’s recordings sometimes seem harsh. A little high-frequency rolloff can tame harshness without reducing the perceived high frequency response.

· Midrange issues. Unexpected midrange peaks, attributable to a variety of factors, can sometimes give a “honking” effect. These may be subtle, but you’ll still notice the sound is smoother when you correct them.

Fig. 4. When you need to raise or lower the gain or otherwise process individual sections, the ability of several programs to add an automatic crossfade eliminates clicks due to abrupt level changes.

Of course, you don’t need Har-Bal to do these kinds of fixes; you can use a parametric EQ to reduce nasty peaks. The trick here is to set a narrow bandwidth and boost to an absurd degree, then sweep the frequency to hear which frequencies slam the level into distortion. You can then cut at that frequency to reduce the peak, and smooth out the sound.
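To see what such a notch actually does to the signal, here’s a minimal Python sketch of a peaking (“bell”) EQ biquad using the well-known RBJ Audio EQ Cookbook coefficients. The 700Hz center, –4dB cut, and Q of 4 in the usage comment are purely illustrative values, not a recipe:

```python
import math

def peaking_eq(samples, fs, f0, gain_db, q):
    """Single peaking ("bell") biquad, RBJ Audio EQ Cookbook coefficients.
    gain_db is the boost or cut applied at the center frequency f0."""
    a = 10 ** (gain_db / 40.0)            # gain at f0 works out to a**2
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    a0 = 1 + alpha / a                    # normalize everything by a0
    b0 = (1 + alpha * a) / a0
    b1 = -2 * cos_w0 / a0
    b2 = (1 - alpha * a) / a0
    a1 = -2 * cos_w0 / a0
    a2 = (1 - alpha / a) / a0
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:                     # Direct Form I difference equation
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Hunt-and-cut: once sweeping a narrow boost has exposed a peak (say,
# around 700Hz), cut it instead -- values here are illustrative:
# fixed = peaking_eq(mix, 44100, 700.0, -4.0, 4.0)
```

Because the cut is narrow, material an octave or more away passes through essentially untouched, which is exactly why this kind of surgical fix works on a full mix.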

For the second EQ process, I’ll tend to use a parametric or a “broad” EQ (I’ve always liked Pultec units for this, and the Universal Audio emulation is very good). For example, I might add a general upper midrange lift to give vocals and guitars more definition, or boost the bass a bit to give the kick more authority.

BANISH THE NOISE

If you’re lucky, a cut to be mastered will have a few seconds of “air” at the beginning, rather than be cropped right up to the start. System hiss and noise will be present in this “silent” part. Granted, this might seem very low-level, but removing low-level noise is like blowing the dust off a painting—everything looks the same, it’s just more defined.

Fig. 5. The multiband stereo image widener in iZotope’s Ozone 4 can also narrow the image. In this case, lower bass frequencies are being pulled to the center.

I generally use Sony Sound Forge’s noise reduction, and choose the most natural-sounding algorithm and minimal reduction (the less reduction you need to do, the better). If a file already has relatively low noise to begin with, noise reduction can make it sound perfect without creating audible artifacts.

This type of noise reduction (Adobe Audition incorporates a similar noise reduction module) requires defining a region of pure hiss, called a “noiseprint.” This is analyzed, and extremely sharp/precise filtering removes these specific frequencies. You can edit the strength of the noise reduction, and even edit the noiseprint manually. Also, Sound Forge lets you include noise reduction in effects chains, which is helpful: You can apply two subtle processes instead of a single, more drastic one.
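For the curious, the noiseprint concept can be sketched in a few lines of Python. Real noiseprint reduction works per frequency band with very sharp filtering; this deliberately simplified stand-in is broadband only—it measures the level of a “silent” region, then attenuates just the frames that sit near that level (all the frame sizes and dB values are illustrative):

```python
import math

def rms(block):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in block) / len(block))

def gate_noise(samples, noise_region, frame=256,
               reduction_db=-12.0, margin_db=6.0):
    """Broadband downward expander keyed off a measured noiseprint level.
    Frames whose RMS sits within margin_db of the noiseprint RMS are
    attenuated by reduction_db; louder frames pass untouched. (Actual
    noiseprint reduction does this per frequency band, not broadband.)"""
    floor = rms(noise_region)
    thresh = floor * 10 ** (margin_db / 20.0)
    gain = 10 ** (reduction_db / 20.0)
    out = []
    for i in range(0, len(samples), frame):
        block = samples[i:i + frame]
        g = gain if rms(block) <= thresh else 1.0
        out.extend(s * g for s in block)
    return out
```

The per-band version does the same comparison independently in each frequency bin, which is why it can pull hiss out from underneath sustained program material instead of waiting for a pause.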

Fig. 6. Waves’ plug-ins are very popular for mastering, but don’t overlook the LinEQ Lowband Stereo for removing subsonics and rumble.

On the Mac, and also with Windows DAWs, my plug-in of choice is BIAS’s SoundSoap Pro 2 (Figure 2). But I also use it with digital audio editors, because while it can do the “isolate a piece of noise and eliminate it” trick, the latest version also does Adaptive Noise Reduction, where the program decides by itself what’s noise and what isn’t, and can even change that definition over the course of a file. For an example of why this is useful, consider a noisy file that’s been compressed, so the noise changes over time—SoundSoap Pro 2 can adapt to the changes in hiss levels. It also includes other restoration tools (click/pop and hum removal).

However, note that most digital audio editors include some kind of noise reduction; Audition offers several different types, and Steinberg Wavelab’s DeNoise even offers adaptive noise reduction.

THE RIGHT FADE

I request that people submit files to me with no fades, and instead specify where they want the fade to begin and end. The main reason is so I can create the perfect curve—a lot of the files I get have linear fades, which don’t sound all that great. But the other reason is so there’s material just in case the fade needs to be extended.

This situation happened recently while mastering a cut by Norwegian musician Ronni Larssen. He expected the fade to occur over an instrumental figure at the end, but it seemed like not quite enough time for a fade, and besides, I liked the figure. So, I copied the last figure, and pasted it in twice (using automatic crossfading) so the figure repeated three times at the end.

Next was taking advantage of Sound Forge’s fade feature, which can define pretty much any fade curve you want (Figure 3), as well as preview it. I went for a fairly quick fade, then drew a logarithmic fade to the end.
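A breakpoint fade is easy to express in code. Here’s a minimal Python sketch—the breakpoint times and gains are made up for illustration—where gain is linearly interpolated between (seconds, gain) points; add more points and you can approximate a logarithmic or any other arbitrary curve:

```python
def breakpoint_fade(samples, fs, points):
    """Apply a gain envelope defined by (time_seconds, gain) breakpoints,
    linearly interpolated between points. Points must be sorted by time;
    gain holds at the first/last breakpoint value outside their range."""
    out = []
    for n, s in enumerate(samples):
        t = n / fs
        if t <= points[0][0]:
            g = points[0][1]
        elif t >= points[-1][0]:
            g = points[-1][1]
        else:
            g = points[-1][1]
            for (t0, g0), (t1, g1) in zip(points, points[1:]):
                if t0 <= t <= t1:
                    g = g0 + (t - t0) / (t1 - t0) * (g1 - g0)
                    break
        out.append(s * g)
    return out

# A fairly quick initial drop, then a long tail to silence -- add more
# breakpoints along the tail to approximate a logarithmic curve:
# faded = breakpoint_fade(mix, 44100, [(0.0, 1.0), (0.5, 0.5), (8.0, 0.0)])
```

This is also why a hand-drawn curve beats a stock linear fade: a straight line in gain sounds like it “falls off a cliff” near the end, while a curve that eases into silence sounds natural.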

MASTERING WITHIN MASTERING

With a recent mastering job, one section bothered me: a drum fill lead-in to a chorus just didn’t “pop” enough. Instead of kicking the energy up a notch, the quiet fill brought down the song.

No problem: I defined that fill as a region, and increased the gain by 3dB. With Wavelab 6, it’s important to have regions begin and end on precise zero crossings, as raising or lowering the gain where the waveform is at a non-zero level can cause a click due to the abrupt level change. Unfortunately, zero crossings don’t always occur in the same place on different channels.

BIAS Peak, Adobe Audition, Sound Forge, and others get around this by introducing a small crossfade between the altered and non-altered sections (Figure 4). The screen shot shows Sound Forge because its graphical representation clearly shows what’s going on, but I first became aware of the value of this approach with BIAS Peak Pro, when I needed to change levels or tonalities of individual notes with classical harpsichord and guitar projects.
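The crossfade trick itself is simple. Here’s a Python sketch of a region gain change with short linear ramps at both boundaries instead of an abrupt step—the 10ms crossfade time is an illustrative choice, not what any particular editor uses:

```python
def gain_region(samples, fs, start, end, gain, xfade_ms=10.0):
    """Raise or lower the gain of samples[start:end], ramping the gain
    change in and out over xfade_ms so no click is produced at the
    region boundaries. Assumes the region is longer than two ramps."""
    n_x = max(1, int(fs * xfade_ms / 1000.0))
    out = list(samples)
    for i in range(start, end):
        g = gain
        if i < start + n_x:                      # ramp the gain change in
            g = 1.0 + (gain - 1.0) * (i - start) / n_x
        elif i >= end - n_x:                     # ramp it back out
            g = 1.0 + (gain - 1.0) * (end - i) / n_x
        out[i] = samples[i] * g
    return out

# Boost a drum fill by ~6dB (gain 2.0) with 10ms crossfades at the edges:
# louder = gain_region(mix, 44100, fill_start, fill_end, 2.0)
```

Editors that snap to zero crossings solve the same problem a different way; the crossfade approach has the advantage of working even when the two channels’ zero crossings don’t line up.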

PULL THE BASS TO CENTER

Bass belongs in the center. With vinyl, that’s a requirement so that the stylus doesn’t jump out of its groove; these days you can put bass wherever you want from a technical standpoint, but for my taste, it still works best in the center. Bass is non-directional compared to highs, so having it emanate equally from stereo loudspeakers on playback makes sense.

One of my “secret weapon” techniques for giving rock/pop tunes more power is the Multiband Stereo Imaging processor in iZotope’s Ozone 4 (Figure 5). Although these types of processors generally widen the stereo image, with Ozone 4 you can narrow the stereo image by choosing a negative “widening” value. Because it’s a multiband processor, you can apply this to the bass range only, and “anchor” the song’s low end.
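The underlying math is plain mid/side processing. This Python sketch shows the full-band version of the idea (Ozone 4 applies it per band, which this deliberately omits): a width of 1 leaves the image alone, 0 collapses it to mono, and values in between narrow it—apply it to the bass range only and you’ve anchored the low end:

```python
def stereo_width(left, right, width):
    """Mid/side width control. width=1.0 is unity, 0.0 collapses to mono,
    0 < width < 1 narrows the image, width > 1 widens it. A multiband
    imager runs this math independently on each crossover band."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = 0.5 * (l + r)      # what the channels share
        side = 0.5 * (l - r)     # what differs between them
        out_l.append(mid + width * side)
        out_r.append(mid - width * side)
    return out_l, out_r

# Narrow the image by scaling down the side signal:
# narrow_l, narrow_r = stereo_width(bass_l, bass_r, 0.3)
```

Scaling the side signal down is exactly what a “negative widening” setting does: less side relative to mid means the energy pulls toward the center.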

REMOVE THE SUBSONICS

People aren’t going to hear what’s below 20Hz, so you might as well nuke any energy down there. If there are any subsonic signals—which is increasingly likely in a digital world, where sounds can be transposed into the subsonic range—they’ll eat into the available headroom, and in some cases, muddy the sound.

Although this roundup isn’t really about plug-ins, for low-cut filtering I use Waves’ LinEQ Lowband Stereo (Figure 6), because there have been times when I haven’t heard any difference with it inserted, but the meters indicated I’d gained back headroom. It’s your basic linear phase surgical EQ tool, and is ideal for this type of application.
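If you want to roll your own subsonic filter, a second-order high-pass biquad (RBJ cookbook coefficients with a Butterworth Q) does the basic job—though note that a dedicated linear-phase EQ like the Waves unit avoids the low-frequency phase shift this minimal Python sketch introduces. The 25Hz corner is an illustrative choice:

```python
import math

def highpass(samples, fs, fc, q=0.7071):
    """2nd-order high-pass biquad (RBJ Audio EQ Cookbook, Butterworth Q).
    A corner around 20-30Hz removes subsonics while leaving the audible
    band essentially intact. Minimum-phase, unlike a linear-phase EQ."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    a0 = 1 + alpha
    b0 = (1 + cos_w0) / 2 / a0
    b1 = -(1 + cos_w0) / a0
    b2 = b0
    a1 = -2 * cos_w0 / a0
    a2 = (1 - alpha) / a0
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# cleaned = highpass(mix, 44100, 25.0)
```

As the article notes, you may not hear the difference at all—but the meters will show headroom you’ve gained back.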

WHY SPECTRAL VIEW ROCKS

Wavelab and Adobe Audition include the option of spectral view editing (Sound Forge has a spectral display, but you can’t do any actual editing). The 1/10 issue includes a techniques article on using spectral editing to remove noises, scrapes, and the like from nylon string guitar.

Fig. 7. Adobe Audition’s Spectral View is ideal for making edits with surgical precision—you can even lower the level on a single drum hit, or remove the cough from a live recording.

Spectral view presents audio not as a waveform, but how energy is distributed in the spectrum. For example, in Figure 7 the bass notes are yellow, with brighter yellow meaning that the note is louder. It’s possible to identify, isolate, and edit specific events, like a kick drum, cough, finger scrape, and the like. With Audition, after selecting the region you want to edit, you can change level (e.g., attenuate it so it’s not as prominent, or boost it) with the level control that appears automatically, or do any other processing—compress just a single kick note, for example.

I don’t use spectral view for general mastering, but only if problems need to be solved—it’s more of a technical process than a musical one. But when you really need to get “inside” the waveform, there’s no better option.

MICRO-MASTERING

Clients want loud cuts, but I’d rather not put a limiter on the output and squash the file to death. “Micro-mastering” is an effective, albeit tedious, way to increase overall level, while minimizing the negative effects of any limiting or compression that does get used.

Fig. 8. The “micromastered” file is at the top, the original file at the bottom, and Wavelab’s peak-finding dialog is toward the right. The peaks on both files are at 0, but the micromastered file has a higher average level.

This works on the principle that any mixed file has occasional peaks that are significantly higher than other peaks. For example, suppose that 12 peaks have values between –2dB and 0dB, and all other peaks fall below –2dB. If we reduce the 12 peaks to –2dB, then it’s possible to raise the level of the entire file by 2dB, thus gaining 2dB of “loudness” without using compression.

Finding those peaks is easy with Wavelab’s Global Analysis feature. First, decide how much headroom you want to open up—I’d suggest 2dB until you get a feel for how this process works. Go to Analysis > Global Analysis, and click on the Peaks tab. To find one peak at a time, enter 1 for the maximum number of peaks to report. Click on Analyze, then click on the Maximum field for either the right or left channel. Click on Focus, and Wavelab jumps to that peak.

With snap to zero crossings selected (it’s under Options, or just type Z), define the half-cycle containing the peak as a region, then invoke normalization to change the peak level for this region to –2dB. If the corresponding region in the other channel exceeds the peak you just reduced, normalize that section as well while you’re in the same general area.

Keep working through the file, a peak at a time, until the maximum peak Wavelab finds is –2dB or less. Your work is done for that channel. Similarly, reduce peaks on the other channel to –2dB.

When all peaks have been tamed to –2dB, use normalization or a gain change to bring up the file level (Figure 8). The file will be noticeably louder, but you’ll notice no artifacts from compression because you haven’t compressed anything. Furthermore, anything lower than –2dB remains untouched. And if you still want to add some maximization, you now need less of it: if you had originally wanted to boost the overall level by 6dB, you only need to apply 4dB. The result: a loud cut that can “compete” level-wise with other music, but which has a more natural sound that retains dynamics better.
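The whole micro-mastering procedure can be summarized in code. This Python sketch is a mono, automated approximation of the manual Wavelab workflow described above—it tames each offending half-cycle to the ceiling, then normalizes. For stereo you’d process each channel separately, matching regions across channels; the –2dB ceiling is the article’s suggested starting point:

```python
def micro_master(samples, ceiling_db=-2.0):
    """Reduce every half-cycle containing an over-the-ceiling peak down to
    the ceiling, then normalize the file back to full scale -- loudness
    gained without any compression of material below the ceiling."""
    thr = 10 ** (ceiling_db / 20.0)
    out = list(samples)
    n = len(out)
    i = 0
    while i < n:
        if abs(out[i]) > thr:
            # widen to the enclosing half-cycle (zero crossing to zero crossing)
            a = i
            while a > 0 and out[a - 1] * out[i] > 0:
                a -= 1
            b = i
            while b < n - 1 and out[b + 1] * out[i] > 0:
                b += 1
            peak = max(abs(out[k]) for k in range(a, b + 1))
            scale = thr / peak
            for k in range(a, b + 1):
                out[k] *= scale
            i = b + 1
        else:
            i += 1
    # every peak now sits at or below the ceiling; bring the file back up
    peak = max(abs(s) for s in out)
    gain = 1.0 / peak if peak else 1.0
    return [s * gain for s in out]
```

Everything that was already below the ceiling comes through with its dynamics intact, just uniformly louder—which is the whole point of doing this instead of reaching for a limiter first.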

WHAT ABOUT THIRD-PARTY PLUG-INS?

Although digital audio editing programs come with a plethora of plug-ins, don’t overlook what third-party plug-ins can bring to the party. Universal Audio and TC Electronic (with their PowerCore) offer several mastering-oriented plug-ins hosted by hardware so they don’t load down your CPU, and previous issues of EQ have covered useful mastering plug-ins like tape emulators. Also, note that McDSP has announced upcoming availability of many of their plug-ins in VST and AU formats; several McDSP plug-ins are superb for mastering, so this is good news.

Fig. 9. Samplitude includes several mastering-level effects, including this multiband dynamics processor.

As to other favorites, this is a very subjective area, but I like PSP Audioware’s compressors, EQ, and their Vintage Warmer; and of course, Waves makes outstanding mastering plug-ins. I also find some of SSL Duende’s plug-ins invaluable when you want to add “character.” If your budget is tight, check out what Voxengo has to offer—their plug-ins are often underrated. URS makes several cool plug-ins, but for me the ones that stand out for mastering are those that model mixer stages, transformer inputs, and the like—they’re subtle, but subtle is often exactly what you need. And for a one-stop solution, it’s hard to beat Ozone 4.

TRANSFORMING A DAW INTO A MASTERING MACHINE

Although there are many similarities among digital audio-related programs, digital audio editors still exist as a separate product category because they put individual bits of digital audio under the microscope, while DAWs are about dealing with large numbers of hard disk, MIDI, and virtual instrument tracks. Still, some DAWs are slowly but surely turning into mastering machines.

Magix Samplitude (Figure 9) and Adobe Audition have always emphasized a combination of multitracking and mastering. More recently, PreSonus’ Studio One (Figure 10) has integrated mastering with tracking/mixing in a highly evolved way—for example, edits to a mix are reflected in the playlist that burns a CD. But even programs that aren’t billed as mastering software per se can often be pressed into service.

Fig. 10. Studio One has a separate window for not only mastering individual cuts, but assembling them into a playlist, adding master effects, burning CDs, and publishing to the Web.

Take Cakewalk Sonar: It has several linear-phase processors, a spectrum analyzer, dithering, markers that identify peak levels, high-resolution metering (down to –90dB), and other mastering-oriented tools. While Sonar’s default workflow isn’t particularly suited to efficient digital audio editing, customization can make it “feel” more like a digital audio editing program (Figure 11).

For example, simplifying menus so that they show only essential functions helps improve workflow; there’s usually no need for MIDI, measures, staff view, lyrics, virtual instruments, and video. I renamed the “Process” menu “DSP” and placed all audio DSP functions under it, and as I’ve been using Sound Forge since the mid-’90s, I re-arranged and re-named Sonar’s File menu to be more like Sound Forge’s.

I also created a layout for digital audio editing, with a large track view to make waveform viewing simpler, and a very restricted console view that shows only the master bus (with levels set to 0). This recalls Wavelab’s master section, but there’s a practical reason for splitting the mastering load into destructive “technical” fixes that involve DSP (like getting rid of clicks, glitches, noise, etc.), and “artistic” fixes that usually involve plug-ins (like how much EQ, limiting, or other “spices” to add). I make technical fixes on the track view itself, but the plug-ins get loaded into the master console strip. It’s therefore possible to bounce the file to another track through the master effects, and if needed, do multiple bounces with different variations that the artist can evaluate.

Fig. 11. This window layout optimizes Sonar for digital audio editing. Note the “Master Strip” to the right; on the waveform itself, a peak is about to be reduced by a few dB so that normalization can give a higher average level.

Another advantage is that when saving the project, all these variations are kept as separate tracks; when working on the “technical” elements, you can put temporary dynamics and EQ processing in the master strip for a better idea of what any changes will sound like after mastering.

For me, the biggest shortcoming of typical DAWs is a lack of noise reduction, but as mentioned previously, BIAS SoundSoap Pro 2 can take care of that. Like many other DAWs, Sonar includes dithering (I use the noise-shaped POW-r 3 option, even though it’s the most CPU-intensive) and the ability to burn CDs.