Engineering – liner notes
putting music together/taking it apart
http://www.gavinbradley.com/linernotes

Pressed Up Against The Glass: Visualizing And Discussing Sound
Wed, 11 Aug 2010

In 1966, during the recording of ‘Tomorrow Never Knows’ for the Beatles’ Revolver album, John Lennon came up with a request phrased in the language of an artist: ‘I want my voice to sound as though I’m a Lama singing from a hilltop in Tibet’.

Producer George Martin had to interpret this request and come up with a concrete plan of action, which he then had to describe in technical terms to the recording engineer, Geoff Emerick, and the other assistants working under him.

Martin’s plan, documented in BBC’s interviews with George Martin on The Record Producers, was to play back the vocal track through a spinning Leslie speaker inside a Hammond organ and record that. I don’t know if the resulting warbly sound was exactly what Lennon had heard in his mind, but he was delighted with the freshness of the effect.

One of the greatest challenges producers face is coming up with the right language in discussions with artists and engineers. We’re all supposed to be sculpting the same thing. Mixing, for example, involves deciding on the placement of each instrument in a song – from its relative volume and clarity to its spatial positioning in the stereo field. Since we all interpret and describe sound differently, how do we take the seed of an idea born in one person’s esoteric imaginings and explain it clearly to a team?

The right metaphor helps.

A friend with musical leanings (and great skills in metaphor) once told me he likes to feel that he can walk into a recording and move around, visiting each instrument at will. He earmarked Steely Dan’s 1977 offering Aja as a good example of a sonically spacious album.

This idea of being able to ‘walk into’ a mix resonated with me, because I’ve always had a similar visualization of sound: I imagine the song is contained in a big glass box immediately in front of me. Sometimes I describe this model to the people I’m working with so we have a language for discussing our mix decisions.

How I Visualize Stereo Sound

In this box, a sound can be anywhere, left to right, in the stereo spectrum. It can be low or high, like the bass of a kick drum or the treble of a cymbal. It can be far away because it’s quiet and soaked in reverb (that residual echo of a bathroom or church), or it can be near because it’s loud and dry (just the direct sound with no reverb).

Trends in mix aesthetics come and go, and most of them are set in motion by an advance in technology. In the 60s drums were fed through analog compressors that tended to add a pleasing distortion, and big boxy reverbs were thrown on vocals using a huge room called an echo chamber. In the 70s 24-track recording became a reality and elements could be separated and controlled, making lush stereo arrangements possible. Digital reverb boxes came of age in the 80s, and reverb was applied liberally to snare drums and vocals.

Today the trend is for both of those elements to be almost completely dry. And recordings are significantly louder these days, because new digital compression plug-ins let us make each element much louder than we ever could in analog.

However, when this is the case, as a listener I get the sense that the sound is being aggressively pushed out toward me–and pressed up against the glass–rather than inviting me in to explore. In fact I don’t like to be aware that there is a piece of glass between me and the music, but the more compression is used on the sounds in a mix to unnaturally increase their volume, the more obvious it becomes that the sound is hitting a wall, a limit. (Compression is explained beautifully in this two-minute video on the loudness war.)
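To make the ‘hitting a wall’ image concrete, here’s a minimal sketch in Python–a hypothetical hard limiter, not any real plug-in’s algorithm. It pushes the level up, clips at a ceiling, and measures how the crest factor (peak level over average level, a rough proxy for dynamics) collapses:

```python
import math

def crest_factor(samples):
    """Peak level over average (RMS) level - a rough measure of dynamics."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak / rms

def loudness_maximize(samples, gain=4.0, ceiling=1.0):
    """Crank the level, then hard-clip whatever slams into the ceiling."""
    return [max(-ceiling, min(ceiling, s * gain)) for s in samples]

# a decaying 'drum hit': one big transient followed by a quiet tail
hit = [math.exp(-n / 50.0) * math.sin(n * 0.3) for n in range(400)]
loud = loudness_maximize(hit)

print(crest_factor(hit) > crest_factor(loud))  # True: the dynamics flatten
```

The quiet tail gets four times louder while the transient can only reach the ceiling, so the gap between loud and soft–the thing that made the hit feel like a hit–shrinks.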

This loudness war created a conundrum for radio stations that play both old and new music. Because their broadcast machines are calibrated for the loudness of newer material, older music sounded weak in comparison. That is why, a few years back, many major labels began remastering older music at higher levels in an effort to keep their catalogs in rotation–and to sell the same albums all over again to hardcore fans. Even if this meant the music sounded one-dimensional.

While striving to satisfy radio stations’ technical expectations, I look for ways to limit my participation in this ‘loudness war’ because aggressively mixed music tends to induce listener fatigue quickly. I want people to be able to put the records I produce on repeat! I listen to a lot of indie rock these days because those producers seem to have learned the subtleties of the new technology and found ways to make things impactful without flattening them against the glass in an obvious way.

My hope is that an upcoming technological breakthrough will steer us back toward utilizing the sonic depth we once valued. Most of the music I go back to exists casually in its space, enticing the listener in rather than forcing itself on them.

Studioitis
Mon, 26 Jul 2010

A few years ago I switched family doctors, and at my first physical he asked me what time I got up in the morning. He then quickly corrected himself: ‘Oh, sorry, you’re a musician…what time do you get up in the afternoon?’ Ha ha. But I was kind of relieved to find that my body’s stubborn adherence to a late-night schedule was so normal for a musician that even the doctor had a de facto acceptance of it.

In my late teens, when I began writing and recording in earnest, the quiet and dark of the night proved to be an effective ‘blank slate’. Without the overt influence of weather, or the sound of the neighbour’s lawnmower asserting what season it was, or somebody phoning for a chat, it was easier to stay inside a song about almost any subject or feeling. A late schedule worked so well for me that I intentionally booked my university classes and part-time job around it, and all of these years later my body is so attuned to the rhythm it’s a tricky manoeuvre to shift out of it, even temporarily.

Most studios have no windows, partly to reduce the unwanted sonic reflections of glass, but mostly, I believe, to block out the influence of the outside world on creative types who are trying to be inside the work together. There’s a long-running joke sound engineers throw around about having a ‘studio tan’: that sickly pale look fair-skinned individuals get when they see no daylight for weeks on end. And there’s the joke about the ‘studio diet’ that traditionally consists of sugar, caffeine, and nicotine.

But of all the maladies specific to musicians, the one that’s the most fun by far is studioitis.

For those who know the feeling but have never heard it by name, I’ll spell it out. I myself am just coming out of a long bout with studioitis, lasting several months, while working with the very talented Micah Barnes on his upcoming record.

It’s not like tonsillitis or any of the other common itises we hear talked about. Studioitis is more like what happens around 4 AM at a junior high slumber party: everybody starts getting stupid, and everything is funny. Except in the studio the predisposing exhaustion might come at 8 in the evening if you’ve already been looping the same few bars of music for six hours, approaching that point where sound begins to unravel into something very abstract…like what happens when you stare at a word on a page for too long and it starts to look foreign.

Working on an album in an expensive facility usually means blocking out weeks of studio time without days off, because you’re riding a wave of creativity, you need the room to remain set up for you, and you’re on a deadline. So an acute case of studioitis might strike early some afternoon weeks into a project. While staring at a screen that no longer makes sense, or arguing about the conceptual purpose of a guitar riff, or trying to capture a fleeting, ethereal feeling in a vocal take…it will strike, and you will find yourself in a bizarro world where everything is funny.

Last month Micah spent long days here in the vocal booth, in an unbearable heatwave, getting his lead vocals down. My job, producing, meant lots of discussion between takes about motivations and intentions around the lyrics. Soon enough, we found ourselves in Studioitis, Population Two: the funniest thing imaginable was stopping the take to yell ‘LOOK’ or ‘LISTEN’ at each other in the most convincingly angry tone possible. ‘FEEL’ and ‘SMELL’ got thrown in…who can say why? It’s the mad nature of the illness.

Probably my favourite episode of the itis struck 15 or 16 days into sessions with Jon Levine for JackSoul’s second album, ‘Sleepless’. We had been focused for hours on getting a groove right, and, scrolling through drum sounds on a machine, I came across a sample of what sounded like a group of Middle Eastern men yelling ‘HEY!’ It may have been Israeli men, at a wedding…I’m not sure. But definitely the sort of ‘HEY’ you’d hear with traditional Middle Eastern folk dancing of some kind.

It broke Jon’s composure, so I triggered it a few times until we were both on the floor, laughing loudly…then laughing silently because we were unable to breathe. I slowly and pointedly reached up from the floor to press the button again, once, which started us all over again, and I did it again until Jon was begging me to stop. Lead singer and frontman Haydain Neale, rest his soul, was not impressed. A couple of days later, in the afternoon, the studio secretary came into the room with a bag of candy and mentioned there was a fully-stocked candy store around the corner. Jon and I looked at each other silently for a moment and then bolted out of the studio for our own bags of candy, with Haydain’s yell fading behind us: ‘awwww guys come onnnnnnn!’ He was feeling the pressure of a looming deadline from BMG.

But it was no use…it seems studioitis kicks in when your body actually needs a break from the kind of serious focus music takes. My theory anyway. And believe me, there is no use fighting it.

Oh look–it’s 4 AM…almost time for bed.

I’m Only Inhuman: Vocal Trickery Through The Ages
Sun, 06 Sep 2009

Not content to replace pianos with synthesizers and drum kits with drum machines, producers have spent the last few decades pushing against that final frontier: the mechanization of the human voice. How have we dehumanized ourselves? Let me count the ways.

Classic 70s Korg Vocoder

1. The Vocoder

In the mid-70s ‘robot voice’ tracks began turning up in earnest. Kraftwerk was at the forefront with the Vocoder (not the Vocorder, as it is so often mispronounced, but the Vocoder: it is a ‘coder’ of ‘vocals’). For many years the Korg Vocoder was the standard unit, but all vocoders work on the same principle: you sing into a mic, and the electric signal created by your voice, analyzed band by band, shapes the sound coming out of the synthesizer.
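The principle can be sketched in code. This is a deliberately crude illustration–a vocoder reduced to a single band, whereas real units analyze the voice across many frequency bands–but it shows the core move: an envelope follower tracks the loudness contour of the ‘voice’ and imposes it on a steady synth carrier:

```python
import math

def envelope(signal, smooth=0.995):
    """Rectify and low-pass the modulator to track its loudness contour."""
    env, out = 0.0, []
    for s in signal:
        env = smooth * env + (1 - smooth) * abs(s)
        out.append(env)
    return out

def one_band_vocoder(modulator, carrier):
    """The vocoder idea reduced to a single band: the voice's amplitude
    envelope shapes the synth carrier sample by sample."""
    return [e * c for e, c in zip(envelope(modulator), carrier)]

rate = 8000
t = [n / rate for n in range(rate)]  # one second of audio
# stand-in 'voice': a 200 Hz tone pulsing three times a second
voice = [math.sin(2 * math.pi * 3 * x) ** 2 * math.sin(2 * math.pi * 200 * x)
         for x in t]
synth = [math.sin(2 * math.pi * 110 * x) for x in t]  # steady carrier tone
robot = one_band_vocoder(voice, synth)
```

Where the voice goes quiet, the carrier goes quiet; where the voice swells, the carrier swells. A real vocoder does exactly this, but in parallel across a dozen or more frequency bands, which is what makes the synth appear to ‘speak’.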

One of the first commercial hits with a female robot vocal upfront was ‘Funkytown’ by Lipps Inc., in 1980. In 1983 Styx gave us ‘Mr. Roboto’.

Orange Vocoder Software Plugin

In recent years software plugins like the Orange Vocoder have appeared, eliminating the need for another physical keyboard taking up space in the studio. The sound is a little less cutting–as the raw, aggressive squelch of old analog vocoders is still somewhat outside the realm of the computer–but tracks like 2002’s ‘Remind Me’ by Röyksopp have carved out a different niche for the software vocoder’s silkier sound. Korg’s MicroKorg keyboard and Ensoniq’s rackmount DP-4 have kept hardware vocoders alive.

2. The Talkbox

Stevie Wonder began using a talkbox in the early 70s, but after Parliament-Funkadelic alumnus Roger Troutman mastered the physically challenging device and formed funk band Zapp, radio got a steady stream of funk/R&B hits through the first half of the 80s.

A Heil Talkbox With Hose

A talkbox setup is in many ways the reverse of a vocoder. A synthesizer–typically a Yamaha DX100–set to produce a very strong, pure tone is plugged into the talkbox. A speaker driver inside the talkbox pumps the focussed sound out through a hose which is inserted into the corner of a singer’s mouth. As the singer forms words, their mouth physically shapes the sound from the synthesizer. This happens in front of a mic, which picks up the shaped synth sound coming out of the player’s mouth.

Roger Troutman’s command of the instrument shines on ‘I Wanna Be Your Man’.

For a visual demonstration, check out the great Stevie Wonder covering ‘(They Long To Be) Close To You’.

Akai S1000 Sampler

3. Retriggering

Evolving through the 80s, samplers like the Synclavier, Fairlight and Ensoniq Mirage were initially intended to realistically recreate acoustic instruments. The Emulator seemed to encourage more creative sampling, however, and the E-mu SP-1200 was the sample-based drum machine that spawned the dopest hip hop beats. But by the late 80s the Akai S1000 was the rackmount sampler of choice, and the fact that you could expand the memory to load entire vocal tracks into it made retriggered vocal riffs the next logical step in house music.

In 1989 Black Box sampled parts of Loleatta Holloway’s vocal on the 1980 disco hit ‘Love Sensation,’ placing it over new piano chords and a house beat. The rhythmic retriggering of her impassioned vocal–the computerized sonic repetition of those growling phrases of sound–brought a clean, futuristic sensibility to dance music, an effect akin to putting ‘Love Sensation’ in quotation marks. However, at first those quotations were used without the appropriate footnotes…so court cases followed.

4. Stuttering

Through the early 90s house producer MK (Marc Kinchen) ran with vocal sampling, taking the retriggering concept to extremes.

Four examples of his work follow. On his remix of the B-52s’ ‘Tell It Like It T-I-Is’ he experimented with stuttering the last syllable of individual lines. This became a common technique borrowed by many producers, so MK forged further, finding a more individualistic practice: he began pulling single syllables from various places in the vocal track, reordering them to create hooky melodies (with nonsensical words). His 1993 remix of the Nightcrawlers’ ‘Push The Feeling On’ made massive waves, superseding the original version of the song without using a single intact vocal line. In demand as a remixer, he created hooky vocal stutters for the Pet Shop Boys on his remix of ‘Can You Forgive Her’ and for Blondie on his updated remix of ‘Heart Of Glass’, dropping the full vocal in between stuttered sections…and reportedly turning out one remix per week at $15-20K.

The Amazing Slow Downer

5. Timestretching

In the mid-90s Armand Van Helden took the baton, building a brand in part on the innovation of cheeky vocal processing techniques. ‘Timestretching’ audio using software plugins is a commonplace practice now, to make beats match in tempo or to conform an acapella to the desired speed of a remix. At the time, Armand Van Helden pushed the relatively new technology to the limit, placing ridiculously elongated vocal lines in the climaxes and dropouts of his tracks as dancefloor payoffs.

A few examples of this new intersection of the machine and the biological: he stretches the line ‘Sugar Daddy’ as a re-entry to the beat in his remix of CJ Bolland’s ‘Sugar Is Sweeter’; he stretches the hook vocal to prepare us for a drop out of the beat in his own track ‘The Ultrafunkula’ (the same track also exists as ‘The Funk Phenomena’); and finally during a dropout in his remix of Janet Jackson’s ‘Got Til It’s Gone’ he obsessively retriggers the sample of Joni Mitchell singing ‘don’t it always seem to go…’ (from ‘Big Yellow Taxi’), building to an unidentifiable, impossibly timestretched spoken line before dropping the beat.

All samplers and audio production software have a timestretch function, but it sounds like he used an early version of the ‘Amazing Slow Downer’ Mac program. Either that, or the application was conceived later specifically to achieve that Armand Van Helden sound.
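For the curious, the naive version of the technique is easy to sketch. This is a hypothetical granular approach, not how any particular program works: copy short grains of audio while stepping through the source more slowly, so the result lasts longer without the pitch dropping:

```python
def timestretch(samples, factor, grain=256):
    """Naive granular stretch: copy fixed-size grains while stepping
    through the source more slowly. factor > 1 lengthens the audio;
    pitch stays put because each grain plays back at normal speed."""
    out = []
    pos = 0.0
    step = grain / factor  # read grains from the source more slowly
    while int(pos) + grain <= len(samples):
        out.extend(samples[int(pos):int(pos) + grain])
        pos += step
    return out

beat = [0.0] * 10000              # stand-in for a sampled vocal line
doubled = timestretch(beat, 2.0)
print(len(doubled) / len(beat))   # roughly 2.0: twice as long, same pitch
```

Real implementations crossfade overlapping grains (or work in the frequency domain) to hide the seams; push a naive version this hard and you get exactly the smeared, otherworldly artifacts Van Helden was exploiting.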

Antares Autotune Plug-In

6. Auto-Tune

In 1997 a company named Antares marketed a rackmount box that could automatically correct a singer’s pitch in real-time. Shortly afterward, they released a software plug-in that did the same thing but also allowed graphical re-drawing of the pitch of individual notes in a recording.

Strangely perfect-sounding vocals began to appear on pop, country and R&B recordings, like the silky layers of Brandy’s voice on ‘Almost Doesn’t Count’ and ‘Angel In Disguise’ from her 1998 album ‘Never Say Never’:

Applied subtly, the processing isn’t obvious, but the singer’s voice does take on an otherworldly pitch-perfection that we’ve all now come to expect. Singers, producers and engineers now assume that one of the phases of recording will be tuning the vocals.

7. Abused Auto-Tune

Put auto-tune into overdrive and you get what became known as the ‘Cher Effect’. In an interview in Sound On Sound, producers Mark Taylor and Brian Rawling attributed the ear-twisting effect they applied to Cher’s vocals on 1998’s ‘Believe’ to a complicated vocoder setup. But what was obvious to most producers was exposed soon afterward: this was an auto-tune plug-in set to ruthlessly round each note up or down, causing lightning-fast, perfect-pitch trills in the vocal. Madonna producer Mirwais took things a step further on tracks like ‘Impressive Instant’, redrawing the pitches of notes to create impossible, unexpected jumps in the melody.
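The ‘ruthless rounding’ itself is simple arithmetic. Here’s a sketch of just the snapping step (real pitch correctors also have to detect the pitch and resynthesize the audio, which this omits entirely):

```python
import math

A4 = 440.0  # tuning reference in Hz

def snap_to_semitone(freq):
    """Round a frequency to the nearest note of the equal-tempered scale.
    There are 12 semitones per octave, and each octave doubles the frequency."""
    semitones = round(12 * math.log2(freq / A4))
    return A4 * 2 ** (semitones / 12)

print(snap_to_semitone(450))  # 440.0 - a slightly sharp A yanked back to pitch
```

With the ‘retune speed’ at zero, every natural slide between notes becomes an instant staircase of these snapped values–which is the trill you hear all over ‘Believe’.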

Melodyne: The New Auto-Tune

8. Melodyne

In recent years hip hop hook singers like T-Pain, Lil Wayne, Akon and Kanye West have recorded exclusively with an effect universally referred to as auto-tune. I’m convinced however that these guys are mostly using a newer program called Melodyne. It works in a similar fashion but allows much more precise editing of multiple layers of vocals, as well as control over an additional attribute of the performance: the tonal quality of a singer’s voice–from munchkin to giant–independent of the pitch.

Kanye West’s vocal on ‘Heartless’ and T-Pain’s vocal on ‘Chopped And Screwed’ (a song that itself pays tribute to vocal trickery) demonstrate the metallic sound of multiple takes of the lead vocal processed through Melodyne. On the 2009 single ‘D.O.A. (Death Of Autotune)’, rapper Jay-Z started a backlash against the generic use of tuning as a crutch for singers.

Celemony, the makers of Melodyne, will soon be releasing a new version that will be able to isolate and manipulate the pitch of each note within chords on recordings (as opposed to only single melodic lines). It’s anybody’s guess where this will take producers next in the field of vocal cybernetics.

Digitech Vocalist

9. Digitech Vocalist

For some reason Digitech is not one of the major go-to companies when it comes to effects boxes, but they’ve always pushed the envelope of digital processing. Imogen Heap’s 2005 hit offering ‘Hide And Seek’ was entirely acapella-and-effects, bringing a fresh ear-bending sound that could have been a traditional vocoder but for the oddly futuristic slides between notes. The lush, fanned out harmonies were created from single vocal tracks by the Digitech Vocalist box, which is able to digitally extrapolate live harmonies on the spot based on chords played on guitar or keyboard.

Look for additions to this article as new vocal processing technologies are used and abused by producers.

For Those About To Make An mp3…
Mon, 22 Jun 2009

…these guidelines will ensure that you don’t populate the web with awful-sounding files.

Don’t use ‘Joint Stereo’. This saves marginally on file space by allowing the left and right channels to share information as necessary, which results in warbling treble.

Don’t use ‘Variable Bit Rate (VBR)’. This allows the file quality to lower when there’s less complexity in the music, and again you can hear the treble change as the file quality shifts fluidly like this. Always use Constant Bit Rate (CBR).

The Sample Rate needs to be 44.1 kHz, like a CD. Lower it and you lose quality fast.

The only thing you should play with if you want to create smaller files is the Bit Rate, and don’t go below 128 kbps. 320 kbps is very close to CD quality, and since most of us have high-speed access now we should always be using it.
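Bit rate translates directly into file size, since a constant-bit-rate file spends the same number of bits on every second of audio. A quick back-of-the-envelope calculation:

```python
def mp3_size_mb(bitrate_kbps, seconds):
    """CBR mp3 size: bits per second of audio times the duration,
    divided down from bits to megabytes."""
    return bitrate_kbps * 1000 * seconds / 8 / 1_000_000

# a four-minute song at the two extremes discussed above
print(mp3_size_mb(320, 240))  # 9.6 (MB)
print(mp3_size_mb(128, 240))  # 3.84 (MB)
```

So the full-quality file is only about two and a half times the size of the 128 kbps one–a small price on a high-speed connection.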

Do not make an mp3 of an mp3, or an mp3 of a CD that was burned from mp3s. This makes worse and worse sounding files (see below for why).

These guidelines go for whatever program you use for your mp3s. But to set this up in iTunes, open Preferences/Settings and click on the ‘Import Settings’ button. Where it says ‘Import Using:’ select the ‘MP3 Encoder’. Beside ‘Setting’ select ‘Custom…’

Then set things up this way: Stereo Bit Rate 320 kbps, ‘Use Variable Bit Rate Encoding (VBR)’ unchecked, Sample Rate 44.1 kHz, Channels: Stereo, and Stereo Mode: Normal (not Joint Stereo).

In iTunes you do not want to check ‘Filter Frequencies Below 10 Hz’ because although we can’t hear bass that low, its absence does affect the impact of the sound we do hear. Check ‘Smart Encoding Adjustments’ though. Might as well be smart.

On another note…media files come in two types: ‘lossless’ (large files that capture all of the information, used for large-format print applications, store-bought CDs and DVDs) and ‘lossy’ (smaller files that approximate the sound or image, but are easier to share on the net).

So if you’re a graphic designer and you need high quality image files to print posters from, you use TIFF or EPS files…but if you’re designing for the web you use jpg or gif files. They’re not as visually clear, but on a small web page they look fine. What you don’t want to do is make a jpg of a jpg because the image will gradually degrade.

jpg of a jpg of a jpg

In audio, if you want to retain all of the information perfectly you use WAV, AIF or SD2 files (the highest quality file you can get from a standard CD is a 44.1 kHz stereo WAV/AIF file). But ever since file sharing began on the net, we’ve relied increasingly upon mp3, and mp4/AAC files.

If you make an mp3 from a CD or WAV file with the settings described above, you’re getting something that is virtually indistinguishable from the original CD. But if you open that mp3 file to make changes to it (like edit the beginning and end of it) and then you save it again, you’re making an mp3 of an mp3…and that’s akin to repeatedly making a jpg of a jpg, losing information each time. Here’s what you get if you keep making lossy files of lossy files:

Below is what the sound wave looks like for the three passages you just heard…details in the wave get lost with each generation:

mp3 of an mp3 of an mp3

It’s especially important to be vigilant about this issue if you’re a producer who’s sampling off of mp3s to make your beats. Realize that when you’re done mixing your CD-quality WAV file, the first thing that’s going to happen is someone is going to make an mp3 of it. And then some of the elements in your track are going to lose impact because the source files were already mp3s.