Decoding The Mix: In Da Club - 50 Cent
By Anthony Nash | 11 March 2019

Written and produced by 50 Cent, Dr. Dre, and Mike Elizondo, In Da Club is a timeless floor filler that was also an instant success.

It peaked at number 1 for 9 weeks on the Billboard Hot 100 and remained on the chart for 30 weeks. In March 2003, it broke a Billboard record as the “most-listened-to” song in radio history within a week.

Billboard also ranked it as the number 1 song for 2003. In 2009, the song was listed at number 24 in Billboard's Hot 100 Songs of the Decade and it was listed at number 13 in Rolling Stone's "Best Songs of the Decade".

Let’s look at the genius approach taken to create this smash hit that will go down as one of the most legendary hip-hop records of all time.

How Does this Song Stay Relevant Decades After Release?

In Da Club is the only ‘birthday song’ that’s gained any traction since Stevie Wonder’s ‘Happy Birthday’ (excluding the original, of course).

In an interview, 50 Cent said that he purposefully made the song about being in the club and celebrating a birthday to help it stay relevant forever, as every day there is a person having a birthday in a club.

Copying this approach would be pretty futile, however, sharing a similar mindset and concocting ways to make records that are timelessly relevant can help improve the longevity of your art.

Structure

Catchy intros send crowds wild. In Da Club is one of the best examples of that.

To this day if you hear In Da Club drop in a bar, club or festival, people will start screaming and singing along immediately.

50 Cent jumps right into the chorus after the intro, presenting the hook of the song within the first few bars.

This helps establish the listener’s connection with the song and they’re more likely to engage with it rather than ignoring or skipping it.

The beat is almost like a single loop from start to finish with a few arrangement and instrument changes peppered in.

The lack of a first verse makes the bridge feel like it enters quite early in the track, which also helps break up any monotony in the production.

Stereo Spread

This track was produced and mixed by Dr. Dre, but we can see here that he took a different approach than he did on his mix of ‘Still D.R.E.’.

This is a much wider arrangement of the different channels and even has a considerable amount of bass width.

The synth and string stabs are mixed so cohesively that they almost sound like a single channel.

When you’re layering sounds, you’ll know you’ve nailed the mix when the layers appear to be one single sound source.

The mono guitar 16th notes don’t occupy a large range of frequencies but also don’t compete with the other instrumental elements.

It slots into the mix in a complementary way, adding both body and groove.

Dre is brilliant at creating a clear vibe with the beats and music he produces.

This beat has a very cool, credible energy that matches the image 50 Cent was trying to portray.

The mix is on the darker side, with the hi-hats not being much brighter than the strings.

There’s very little high-end presence above 15kHz which contributes to this dark, gritty and edgy vibe.

If you compare 50 Cent’s vocals to the vocals in SICKO MODE, you’ll hear a dramatic difference in brightness.

When you’re going for a darker sound, all the channels in the mix have to reflect that.

The wide vocals elevate the lyrical experience.

You feel like you’re being spoken to from every angle and it’s a very immersive sensation.

To do this in your own productions, you can double track audio and pan one left and one right by an equal amount.

The most effective way to do this is to record two separate takes of audio (or, if it’s a synth, you can slightly alter the patch).

If neither of those options is possible you can use EQ and effects to create a difference between the two channels.

You could also use separate delays and reverbs on each channel to give each their own sense of time and space.

Note: If you simply duplicate the channel and add no effects or alterations then the audio will sound like it’s coming from the phantom center (not wide).
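To illustrate the point above, here’s a minimal NumPy sketch (the 220Hz test tone, sample rate and 10ms delay are arbitrary example values, not settings from the track): an identical hard-panned duplicate carries no side (L-R) signal at all, while delaying one copy Haas-style creates genuine width.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
take = np.sin(2 * np.pi * 220 * t)  # stand-in for a recorded take

def side_energy(left, right):
    """Energy of the side (L-R) signal -- zero means pure phantom center."""
    side = 0.5 * (left - right)
    return float(np.sum(side ** 2))

# 1) Plain duplicate panned hard left/right: identical channels, no width.
dup_left, dup_right = take.copy(), take.copy()

# 2) Haas-style widening: delay the right copy by ~10 ms.
delay = int(0.010 * sr)
haas_right = np.concatenate([np.zeros(delay), take[:-delay]])

print(side_energy(dup_left, dup_right))  # 0.0 -> phantom center, not wide
print(side_energy(take, haas_right))     # > 0 -> real stereo width
```

Altering the EQ, effects, delay or reverb of one side, as described above, all amount to the same thing: creating a measurable difference between the two channels so a side signal exists.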

ANIMATE

You can also use the ‘Grow’ module in ANIMATE by Mastering The Mix to increase the width of the selected frequencies using a psychoacoustic precedence effect. Grow lets you spread specific frequencies SUPER wide in a dynamic way like never before.

Low-Frequency Analysis

Using LEVELS I can filter the low frequencies in this track and see how they’re positioned in the stereo field. Below we can see that there is a lot of stereo information below 239Hz showing up in red in the vectorscope. A touch of stereo width in the bass can be ok, but when it starts creeping out into the red zone, the audio is more susceptible to phase cancellation when played back in mono. In small doses, it might not destroy the mix but if the low frequencies get too wide then you might have a disappointing moment when you hear your music sounding thin on a club sound system.
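You can compute this mono fold-down problem yourself. The sketch below is a simplified illustration with a synthetic 60Hz tone, assuming fully out-of-phase channels as the worst case: summing a wide, out-of-phase bass to mono cancels it completely, while a mono bass survives the fold-down intact.

```python
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 60 * t)

# Worst-case "wide" bass (fully out of phase) vs. a mono bass.
wide_L, wide_R = bass, -bass
mono_L, mono_R = bass, bass

# Low-pass filter to isolate the low end (cutoff near the 239Hz
# region mentioned above).
sos = butter(4, 240, btype="low", fs=sr, output="sos")

def mono_low_energy(L, R):
    """Low-frequency energy remaining after a mono fold-down (L+R)/2."""
    fold = 0.5 * (L + R)
    return float(np.sum(sosfilt(sos, fold) ** 2))

print(mono_low_energy(mono_L, mono_R))  # full bass energy survives
print(mono_low_energy(wide_L, wide_R))  # 0.0 -> bass cancels in mono
```

Real-world wide bass is rarely perfectly out of phase, so the cancellation is partial rather than total, but the mechanism is the same: the more decorrelated the low end, the thinner it gets on a mono club system.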

What Did We Learn?

Concocting ways to make records that are timelessly relevant can help improve the longevity of your art.

Catchy intros send crowds wild.

Getting to the chorus quickly can help engage your listener, reducing the risk that they ignore or skip your song.

You’ll know you’ve nailed the mix when the layered channels appear to be one single sound source.

When you’re going for a darker sound, all the channels in the mix have to reflect that.

Wide vocals elevate the lyrical experience. You feel like you’re being spoken to from every angle and it’s a very immersive sensation.

Now It's Your Turn!

Deconstructing a mix like this is a great way to make real improvements in your music production. One of the six cheat-sheets in my eBook ‘Never Get Stuck Again’ is a cheat sheet to help you decode any mix in minutes.

Fixing A Mix Using A Dynamic EQ
By Tom Frampton | 26 February 2019
Producing a song is a super creative process. When you get into the zone and everything starts to flow it feels amazing. In this state of mind, you probably won’t be paying close attention to some of the finer details of the mix.

You wake up the next morning and excitedly listen to your song… Now that the dust has settled from your glorious flow, you’re starting to notice a few issues:

“My mix sounds too harsh”

“My mix is too muddy”

“That lead sound is super resonant”

“I thought the track had more energy”

etc…

If you’ve ever made statements like these in a mix session, then this post is for you. I’ll run through some examples of how you can fix sonic issues in your mix with a dynamic EQ.

How To Fix A Harsh Sounding Mix

The first step is to locate which channel or channels are causing the issue. In most cases, harshness is caused by just a couple of channels, so it’s not always a good idea to try and fix the problem on the master fader.

Try muting a few usual suspects one by one until the harshness disappears. Mute the vocal, the hi-hats, the prominent synths, string parts or any other channels that you think might be making your mix sound harsh.

When you’ve identified the trouble-maker, pull up a dynamic EQ and take a look at the frequency analyzer. As you can see in the example below, there is a clear build-up around 4kHz. That’s the area I want to ‘control’ rather than cut completely.

I’ll keep the actual gain of the EQ band as 0dB and set the dynamic target to -10dB. I’ll then set the threshold so the dynamic EQ only kicks in when the mix is sounding too harsh. When the signal doesn’t surpass the threshold, the channel will sound unchanged.

Had I just used a static parametric EQ, the audio might have sounded unnatural with a substantial cut around 4kHz. The dynamic EQ lets me deal with the problematic audio ONLY when the problems arise.
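To make that gating logic concrete, here’s a rough block-based sketch in NumPy. All the values (4kHz center, threshold, -10dB target) are illustrative, and a real dynamic EQ uses smooth filters with attack/release envelopes rather than per-block FFT gain, so treat this purely as a sketch of the concept: the cut is applied only in blocks where the band’s level exceeds the threshold.

```python
import numpy as np

def dynamic_band_cut(x, sr, f0=4000.0, bw=1000.0,
                     threshold=0.02, cut_db=-10.0, block=2048):
    """Cut the band around f0, but only in blocks where the band's
    level exceeds the threshold; quieter passages pass untouched."""
    y = x.astype(float).copy()
    freqs = np.fft.rfftfreq(block, 1 / sr)
    band = (freqs > f0 - bw / 2) & (freqs < f0 + bw / 2)
    for start in range(0, len(x) - block + 1, block):
        spec = np.fft.rfft(y[start:start + block])
        level = np.sqrt(np.mean(np.abs(spec[band]) ** 2)) / block
        if level > threshold:                 # band too hot in this block
            spec[band] *= 10 ** (cut_db / 20)
            y[start:start + block] = np.fft.irfft(spec, n=block)
    return y

sr = 44100
t = np.arange(2048 * 8) / sr
quiet = 0.05 * np.sin(2 * np.pi * 4000 * t)  # harshness under control
loud = np.sin(2 * np.pi * 4000 * t)          # harshness over the threshold

out_quiet = dynamic_band_cut(quiet, sr)      # unchanged: below threshold
out_loud = dynamic_band_cut(loud, sr)        # ~4kHz band pulled down 10dB
```

The key behaviour is the `if level > threshold` branch: below the threshold the audio is bit-identical to the input, exactly as described above.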

How To Fix A Muddy Mix

Step one is to locate the channels that are beefing up your mix too much and decreasing the clarity. I like to keep 150-450Hz nice and spacious for my bass, low vocals, snare body and most important synth.

Things like strings, pads, guitars and other ‘supporting’ instruments can be carved out in this region if the mix is sounding muddy.

In the example below I’ve identified a ‘supporting’ pluck synth that’s adding a bit too much meat to the production. I’ve carved out a cut (blue dot) around the fundamental frequencies (the lowest notes seen on the spectrum) and added an extra dynamic dip to push it a little lower on the louder notes, to further control the sound.

By repeating this on a few offending channels, you’ll hear clarity come to your mix and things will start to fall into place in a subtle but pleasing way.

If your mix is still sounding a bit muddy, then you might need to reduce some of your more dominant channels. There is a cunning way to keep the tonal balance of your dominant channels whilst reducing the mud…

Let’s take a bass synth as our example. Pull up your dynamic EQ and create a band around the muddy area (in this case 100Hz). Create a fairly wide cut until you feel more clarity has been added to your mix. Then bring your dynamic band UP so it sits at 0dB (if you cut 4dB, add a dynamic boost of 4dB).

Tweak the threshold so that when your bass is playing, it dynamically hits the 0dB mark in that band, but reduces down to the EQ band target when the signal is lower than the threshold. This will have an amazing decongesting effect on your bass whilst keeping the impact and tonal balance on point.
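The cut-then-dynamically-restore trick boils down to a simple gain rule. Here’s a minimal sketch (the -18dB threshold is an arbitrary example, and a real dynamic EQ smooths the transition with attack and release rather than switching instantly):

```python
def mud_band_gain_db(level_db, threshold_db=-18.0, cut_db=-4.0):
    """Gain applied to the muddy band (e.g. ~100Hz): the static EQ cut
    is dynamically boosted back up to 0dB while the bass is playing
    (i.e. while the band's level is above the threshold)."""
    if level_db > threshold_db:
        return 0.0          # bass playing: full tonal balance preserved
    return cut_db           # bass quiet: static cut decongests the mix

print(mud_band_gain_db(-10.0))  # 0.0
print(mud_band_gain_db(-30.0))  # -4.0
```

So the bass keeps its impact when it plays, and the mud is carved out of everything in between.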

How To Transparently Fix Resonance In A Mix

This is one of my favorites… It’s super effective and if done strategically throughout your mix you can find yourself with a very pure and natural sound.

The resonance might only come into the mix at a certain point depending on the musical note, so you don’t want to statically cut the frequency. A dynamic EQ will be triggered only when the resonance kicks in, leaving the rest of the performance with its original tonal balance.

The trick here is to find the resonance and set the EQ band to 0dB. From that point, bring down the dynamic band with the thinnest possible Q so it’s focused on that specific frequency. Keep tweaking the dynamic gain reduction (keeping the main EQ at 0dB) until you feel like the resonance is under control.

It’s important to use your ears to set the exact amount of reduction when fixing resonances. There is a point where the resonance is controlled and no longer a problem; reduce it further and you’ll find that the channel starts to sound thin. Some resonances are very subtle and only require adjustments of a few dB for the sound to stay natural.

How To Add Energy To A Flat Mix

Nothing is worse than listening to a song you’ve created and having an unexciting listening experience. You want the music to lift you off your seat and fill you with emotion. Dynamic EQs can inject energy into a flat mix in a frequency specific way.

A subtle approach would be to add a touch of upward dynamic movement across your master channel. Create a few EQ bands at key frequencies of your track. Keep the band gain at 0dB to maintain the sonic balance of your mix. Then increase the dynamic gain until you feel the mix starts to lift.

Tweak the frequency of each band until you feel the dynamic EQ is accentuating the best elements of the mix. Your mix should start to feel a little punchier.

If your mix is still sounding flat, then you might need a stronger tool. Our plugin ANIMATE is a super-precise and versatile Swiss Army knife of tools to get your mix jumping out of your speakers. Dynamic EQs often lack attack and release settings, as well as ratio and knee controls. If you like having more control over your upward expansion, then ANIMATE is the tool for you. You can try it free by entering your email at the bottom of this page.

Conclusion

Dynamic EQs are great for shaping and controlling your mix. Find the problem in your mix and think carefully about the best approach to solve the issue. Be sure to combine cuts and boosts with the dynamic target in a purposeful way to get the sonic results you’re shooting for.

How to get a great tonal balance in your mix
By Tom Frampton | 19 February 2019

One of the most challenging aspects of music production is mixing a track to sound well balanced across the frequency spectrum.

Having your high-frequencies too loud in the mix will make your track sound harsh. Too much energy in the mid and low-frequencies will make your track sound muddy. Having too little energy in these frequency ranges is equally problematic.

This article will give you the information you need to get your mixes sounding as well balanced as your favourite mixes.

The Challenge & The Solution

Nailing the tonal balance in a mix is very tricky when you don’t have acoustic treatment and monitors with a great low-end response. This is why so many home-studio enthusiasts struggle.

To solve this problem and help more people get their music sounding as good as their favourite tracks, we created a plugin called REFERENCE.

REFERENCE shows you visually what you would be able to hear if you were in a world-class studio. This isn’t a simple ‘slowed down’ frequency analyzer like many other plugins out there that claim to do the same thing as REFERENCE. It’s a complex algorithm created to specifically identify how the human ear perceives certain frequencies relative to the balance of the whole mix. With this information, we can make informed decisions regarding the tonal balance of a mix.

REFERENCE has a unique (and incredibly effective) way of showing you the perceived level in various frequency bands. The white level-line will drift into the lower half if those frequencies have less perceived volume than in your selected reference track. It will drift into the upper half if those frequencies have more perceived volume than in your selected reference track.

Balancing the low-end

Fire up REFERENCE (click here to download the free trial) as the final insert on your master channel (unless you use any speaker/headphone correction software, in which case REFERENCE should go before that) and drop in your reference tracks.

Loop the chorus of each of your reference tracks, as well as the chorus of your own production, from within your DAW. Hit the ‘level-match all tracks’ button (top right corner of the waveform in REFERENCE) to make all the tracks play back at the same perceived volume. You’re now set up and ready to nail your low-end.
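REFERENCE’s perceived-loudness matching is more sophisticated than this, but the core idea of the level-match step can be sketched with a simple RMS match (the target level here is an arbitrary example value):

```python
import numpy as np

def level_match(tracks, target_rms=0.1):
    """Scale each track to the same RMS level -- a crude stand-in
    for matching perceived volume across reference tracks."""
    return [x * (target_rms / np.sqrt(np.mean(x ** 2))) for x in tracks]

sr = 44100
t = np.arange(sr) / sr
my_mix = 0.9 * np.sin(2 * np.pi * 220 * t)     # louder work-in-progress
reference = 0.2 * np.sin(2 * np.pi * 220 * t)  # quieter reference track

matched = level_match([my_mix, reference])
for track in matched:
    print(float(np.sqrt(np.mean(track ** 2))))  # same RMS for both
```

Matching levels first matters because louder audio almost always sounds “better”, which would bias every comparison you make against the reference.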

In the example above, the ‘low-end’ white level-line in REFERENCE has drifted into the upper half. This tells us we should reduce the low-frequencies for a tonal balance that better matches our reference track.

Take your favourite EQ and use a low shelf to match the tonal balance to your reference track.

Balancing the mid-frequencies

The mid-frequencies range from around 250Hz to 3kHz. There are various sonic characteristics in this range: mud, warmth, thickness and harmonics, to name a few.

If REFERENCE is encouraging you to boost frequencies in the mid-range, try to go for a broader ‘Q’ (the ‘Q’ is the bandwidth of the EQ). This is a more musical approach as a thin Q boost could give you a nasty ‘boxy’ resonance that would make the audio sound nasally.

If REFERENCE is encouraging you to cut frequencies in the mid-range, try to go for a thinner ‘Q’. This surgical approach can help you cut the frequencies you don’t want in your mix without dramatically altering the sound.

Balancing the high-frequencies

The high-end has the broadest frequency range, spanning about 3kHz to 20kHz. I like to approach the high end slightly differently when I’m fine-tuning the tonal balance.

Boosting the high-frequencies

Using an upward expander to boost high-frequencies is a musical approach that can increase the punch whilst minimising harshness.

Upwards expansion increases the volume of signals over the threshold, giving your audio more dynamic range in a transparent way.

In this example, I’ll be using our plugin ANIMATE to dial in the expansion. The Expand module in ANIMATE can bring out the glistening top end of any channel that needs a high-frequency boost.

Set the filter to only react to the highest frequencies of the sound. You’ll see the input audio glowing behind the filter; you can use this as a guide to select the relevant frequencies.

Set the ratio to around 1:1.5.

Change the threshold so the audio you want to affect surpasses the threshold.

Now increase the amount until the top end of the audio feels like it’s glistening.

This effect will increase the perceived volume of the channel, so you’ll want to reduce the output level. The output slider has an arrow to show you the perceived loudness of the audio before you added the effect. Click on the level match pointer and toggle the bypass button to make sure your mix decisions are improving your mix, not just making it louder.

To make a mix shine, you just need a few elements to occupy the highest frequencies. Too many channels with strong high frequencies will sound harsh and will give you and your audience ear fatigue.
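The Expand settings above boil down to a simple gain rule. Here’s a static sketch of an upward-expansion curve using the 1:1.5 ratio from the walkthrough (the -20dB threshold is an arbitrary example, and ANIMATE’s detector additionally has attack, release and filtering behaviour that this ignores):

```python
def upward_expand_gain_db(level_db, threshold_db=-20.0, ratio=1.5):
    """Static upward-expansion gain: each dB over the threshold at the
    input becomes `ratio` dB over it at the output, so the extra gain
    is (ratio - 1) x the overshoot. Below the threshold: no change."""
    overshoot = max(level_db - threshold_db, 0.0)
    return overshoot * (ratio - 1.0)

print(upward_expand_gain_db(-30.0))  # 0.0 -> below threshold, untouched
print(upward_expand_gain_db(-14.0))  # 3.0 -> 6dB overshoot x 0.5 extra
```

This is the mirror image of a compressor: instead of pulling loud moments down, it pushes them up, which is why it adds punch without touching the quieter, already-smooth material.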

Controlling the high-frequencies

Simply cutting the high frequencies will affect the tonal balance of your whole mix. That might be necessary, but more often than not you only want to reduce the louder high-frequencies. For this task, a dynamic EQ is perfect. You can set the band of the high frequencies you want to reduce, then set the threshold so the reduction only happens when those high-frequencies are too loud in the mix.

This is great as your cut doesn’t make the quieter parts of your track sound less bright. You can use REFERENCE again to help you set this EQ to the ideal setting.

Conclusion

Getting the perfect tonal balance is not easy and takes a lot of practice. With these approaches and using REFERENCE, you’ll speed up your progress and get closer than ever to the sound of your favourite tracks.

Decoding The Mix - Uptown Funk - Mark Ronson ft. Bruno Mars
By Tom Frampton | 6 February 2019
Not many songs can boast 14 consecutive weeks at number one in the US, as well as being certified diamond (selling at least 10 million copies). Uptown Funk won two Grammy Awards, including Record of the Year, and the Brit Award for British Single of the Year. It also has over 3.4 billion views on YouTube as of December 2018, making it the fourth most viewed YouTube video of all time.

So what helped this production connect with so many people? Can we define elements of the artistic brilliance and inject them into our own work? Let’s decode this mix!

Getting A ‘Live’ Feeling

Ronson isn’t a ‘fix it in the mix’ guy. When he’s recording audio, he’s trying to capture the best possible take and focusing on mic placement for tone. For example, he’s been known to use just one mic when recording drums (he did this when working with Amy Winehouse and the Dap-Kings). This gives a limited amount of post-production control so the recording has to be as perfect as possible. Some may see this as a limitation, but Ronson feels this helps in two ways. Firstly, it gives the performer much more ‘intention’ and forces them to be more exact with their delivery. Secondly, it helps move the project forward as he has to commit to the sounds.

With this approach, there’s less scope for quantization and more ‘feel’ is injected into the music. Try shooting for great raw performances in your own productions rather than relying on fixing things in post-production.

The Mark Ronson ‘Tough Compression’ sound

“The one plugin I use the most is probably the Waves CLA-3A compressor. That was something I picked up from [producer] Jeff Bhasker when we were working on “Uptown Funk.” You throw it on a vocal or a bass track, and it makes everything a little tougher and also makes the mix just a little more centered.”

Why does the CLA-3A add a ‘tough sound’? Because it introduces an emulated analog ‘Total Harmonic Distortion’ (THD) which changes signal shape and content by adding odd and even harmonics of the fundamental frequencies. So as you start to introduce the peak reduction and gain using the compressor, you begin to introduce distortion, which gives the unique character to the sound. If you’re looking for more grit and character for a channel in your production, an LA-3A emulation could be a good compressor to go for.
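The harmonic-generation mechanism is easy to demonstrate. This sketch runs a sine wave through a generic asymmetric soft-clipper (not the CLA-3A’s actual transfer curve, which is proprietary): symmetric saturation adds odd harmonics, and the asymmetry adds even ones on top.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
f0 = 110.0                         # fundamental frequency of the test tone
x = np.sin(2 * np.pi * f0 * t)

# Asymmetric soft clipping: the offset makes the curve treat the positive
# and negative halves differently, producing even AND odd harmonics.
driven = np.tanh(3 * x + 0.5) - np.tanh(0.5)

clean_spec = np.abs(np.fft.rfft(x))
driven_spec = np.abs(np.fft.rfft(driven))

# With exactly 1 second of audio, FFT bin k corresponds to k Hz.
h2, h3 = int(2 * f0), int(3 * f0)
print(clean_spec[h2], clean_spec[h3])    # essentially zero: pure sine
print(driven_spec[h2], driven_spec[h3])  # new 2nd and 3rd harmonics
```

Those added harmonics are the ‘grit’ being described: the distorted signal contains frequencies the clean signal never had, which the ear reads as thickness and character.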

Structure

Uptown Funk is a great example of a successful track that broke a few ‘conventional’ rules. For example, the chorus doesn’t drop until more than a minute into the track, and the entire length is around 4 minutes 30 seconds. With most pop songs dropping the chorus in the first 45 seconds and lasting around 3 minutes, this is certainly an anomaly.

Let’s look at the structure and figure out how Ronson kept the listener engaged for the duration of the track.

Verse One: begins very sparse; just drums and vocals easily hooks in the listener. After 8 bars, bass and synth/guitar ad-libs enter, adding more interest.

Pre One: drops down again to drums and vocals. Rising FX lets the audience know something is coming.

Chorus One: has a sustained synth which gives a very thick texture, a dramatic difference from the verse.

Verse Two: Sparse again. Push and pull of instrumentation engages the listener. Instrumentation grows in the second half to keep it novel.

Pre Two: drops down again to drums and vocals.

Chorus Two: Identical to Chorus One.

Bridge: Sparse and building. Novel funky guitar not heard elsewhere in the track.

Chorus Three: Double chorus. The first is identical to Chorus One & Two. The second has an offbeat ride cymbal which changes the groove and keeps the listener engaged.

The main ingredient of the structure and instrumentation here is that every 8 bars something gets added or taken away from the arrangement. This keeps feeding the listener’s desire for novelty and change. There’s never a moment where you feel sick of listening to the same passage or loop.

Listen to your own songs subjectively as a fan would, and try to ascertain if there are moments where you could introduce new sounds or drop a channel or two to add more interest and change to your arrangement.

Stereo Spread

Uptown Funk has incredible clarity whilst not being overly bright. When we break down the instrumentation during the chorus and show what frequencies are heard we can understand why.

We can see that there isn’t a lot of energy happening around the 150Hz-450Hz area. The kick and bass occupy the center of the mix in this range and the spoken ‘Doh’ bass is pushed wide so the frequencies aren’t conflicting. Notice also that the kick has a very small amount of stereo width, whereas the bass is completely mono. This allows for a super powerful and clear low end with a lot of punch and groove.

There are also no instruments outputting much energy into the very high-frequencies (20kHz and above). The big synth and the brass start to roll off around 15kHz. It gives the mix a very warm feel and also hints to the sound of the funk influences of the track (Earth Wind & Fire, The Gap Band, Sugarhill Gang, Zapp etc).

There are quite a number of channels around 500Hz to 5kHz, but the stereo separation is clear. The vocals are most central, the brass is a little wider, the big synth is a little wider still and the extra vocal ad libs are super wide.

The success of the stereo placement comes down to how the engineer positioned the conflicting frequencies to maximize separation, along with keeping the low end free for the kick and bass to rule.

Technical details

“The mastering engineer must have a musical as well as technical background, good ears, great equipment, and technical knowledge… He must understand what will happen to the recording when it hits the radio, the car, the internet, or the home stereo system." (Source)

The CD master is super punchy and dynamic. It wasn’t over-compressed or over-limited to get the track competing with the loudest tracks in the charts. This means the transients have retained their natural shape, which lets the track breathe. It has a true peak of -0.32dBTP (decibels true peak), meaning that it won’t clip when played back through earbuds or speakers, which gives an elevated listening experience. It also minimizes the clipping that can occur when the WAV file is transcoded to lossy formats for digital delivery through Spotify and iTunes etc.
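Inter-sample (true) peaks can be estimated by oversampling. This sketch builds the classic worst case, a quarter-sample-rate sine whose crests fall between samples, and shows the plain sample peak under-reading the true peak by about 3dB. Note that 4x oversampling with scipy is only a rough estimator, not a full ITU-R BS.1770 true-peak meter:

```python
import numpy as np
from scipy.signal import resample_poly

sr = 44100
n = np.arange(sr)
# Sine at sr/4 with a 45-degree phase offset: every sample lands at
# +/-0.707 even though the waveform reaches +/-1.0 between samples.
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)

def peak_db(sig):
    """Peak level in dB relative to full scale."""
    return 20 * np.log10(np.max(np.abs(sig)))

sample_peak = peak_db(x)                      # about -3.0 dBFS
true_peak = peak_db(resample_poly(x, 4, 1))   # about 0.0 dBTP

print(sample_peak, true_peak)
```

This is why mastering engineers aim for headroom in dBTP rather than plain sample peaks: a file whose samples never exceed 0dBFS can still clip a reconstruction filter or a lossy transcode.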

What Did We Learn:

Limiting ourselves during recording can help force us to commit to sounds and get the best possible takes.

Analog emulation compression plugins can introduce great sounding harmonic distortion that gives character and grit to the sound.

Switching up the instrumentation every 8 bars can help keep the listener engaged.

Keeping the low-end free for the kick and the bass helps achieve a solid mix. When instruments fight for the same frequencies in a mix, use stereo width to increase separation and clarity.

You can make a hit record without trying to make it as loud as possible.

Aiming for a true peak below 0dBTP enhances the listening experience.

Now It's Your Turn!

Deconstructing a mix like this is a great way to make real improvements in your music production. One of the six cheat-sheets in my eBook ‘Never Get Stuck Again’ is a cheat sheet to help you decode any mix in minutes.

Our audio quality control application EXPOSE just got a serious update! You can access the update by clicking the (?) icon in the top right corner of EXPOSE. You'll see:

This is EXPOSE version [1.0]

An update is available, click to download.

This is a free update for existing customers. Existing customers can access the update via their account area on Mastering The Mix or from within EXPOSE itself by clicking the (?) icon in the top right corner of the GUI.

How to Improve Your Mixdown with ANIMATE | 3 Sound Engineering Tips
By Tom Frampton | 29 November 2018
In this video BigJerr from Warp Academy will be giving you an overview of our mixing and sound design plugin, ANIMATE. He digs into a track of his to show you exactly how he’s using the modules:

- Punch to add some mono impact to the mid signal of his snare drum

- Ignite to add some frequency-specific distortion to his bass

- Grow to add some Haas-effect style stereo width to his piano to layer up with the main supersaw synth

This plugin will be a solid fit if you’re doing mixing or sound engineering and you want control over the stereo width of your sounds, distortion or upwards expansion only on certain frequency ranges or parts of the signal that exceed a threshold, and dynamic control over the stereo field.

Our metering plugin LEVELS just got a serious update! You can access the update by clicking the (?) icon in the top right corner of LEVELS. You'll see:

This is LEVELS version [1.2]

An update is available, click to download.

This is a free update for existing customers. Existing customers can access the update via their account area on Mastering The Mix or from within LEVELS itself by clicking the (?) icon in the top right corner of the GUI.

If you haven't tried LEVELS yet, you can download the free trial here.

Last year, I realized I was wasting precious time searching through audio files, emails, and texts.

Sending and receiving audio files every day was turning my downloads folder into a mess. Linking those files to the corresponding messages was also an extra effort. Keeping track of projects as they advanced from version 1 to version 5 was a nightmare.

I knew that having my audio files and messages in one place would minimize mistakes whilst saving me time and effort. It would also elevate the experience I could deliver to my clients. So I created Bounce Boss.

With Bounce Boss, sending audio files and completing projects is easier than ever.

How It Works

Send Audio Files

Even the most complicated projects become simple to manage with Bounce Boss. Add the files needed for each track, input the essential info and hit send! You can use Bounce Boss with anyone; they don't need a paid account.

Player Page

Deliver audio with style and professionalism. Hit play (or spacebar) then jump seamlessly between the reference tracks, mixes and versions to monitor the progress.

Stream the audio in 'Fast' mode for immediate playback as 320 kbps MP3. Switch to 'HQ' for lossless quality.

Level match all the tracks in one click to help you and your collaborators make more accurate and informed decisions about the audio.

Comments

Keeping track of feedback has never been so organised and efficient. No more jumping between long email threads and finding the corresponding files in various places.

Comments can be linked to a timestamp or loop, allowing all collaborators to immediately preview the specific part of the track that the comment relates to. This saves time and removes any possible confusion.

Collaborators

Manage all collaborators in one place. Keep track of who has and hasn't seen the latest comments or files.

Conclusion

Bounce Boss was shaped by its future users from the start. People transferring audio across the internet were asked to rank which features they would find most useful in an audio file sharing platform. This allowed the Bounce Boss team to focus on things that would genuinely make a positive difference to people’s workflow.

Prior to Bounce Boss’ official release, it had already successfully facilitated major-label projects, with label executives finding it so useful that they asked if they could invest in Bounce Boss.

We knew Bounce Boss was ready to start helping anyone and everyone involved in music collaboration.

REFERENCE version 1.1.0 released!
By Tom Frampton | 31 October 2018
Our killer referencing plugin just got a serious update! You can access the update by clicking the (?) icon in the top right corner of REFERENCE. You'll see:

This is REFERENCE version [1.02]

An update is available, click to download.

Click the link to get the latest version of REFERENCE. If you haven't tried REFERENCE yet, you can download the free trial here.

LIST OF UPDATES

- Automatic Track Alignment improvements

- New Feature! Manual Track Alignment

If the reference track is not an alternate version of the track you’re working on, the Track Align button will change to show a slider. Click to open the sample slider.

The slider direction works as if you were moving audio from within your DAW. Move the slider to the left to move your reference track forward, and move it to the right to delay it. Click the ‘Mix’ button to preview both your Original and Reference at the same time. This can make it easier to line up the two tracks manually.
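For the curious, the automatic side of track alignment is typically done with cross-correlation. This generic sketch (not REFERENCE’s actual algorithm, which isn’t published) estimates the sample offset between two versions of the same audio:

```python
import numpy as np

def alignment_offset(original, reference):
    """Estimate how many samples the reference lags the original,
    via full cross-correlation (positive = reference starts later)."""
    corr = np.correlate(reference, original, mode="full")
    return int(np.argmax(corr) - (len(original) - 1))

rng = np.random.default_rng(0)
track = rng.standard_normal(5000)                 # stand-in for a mix
shifted = np.concatenate([np.zeros(123), track])  # delayed by 123 samples

print(alignment_offset(track, shifted))  # 123
```

Once the offset is known, shifting one track by that many samples lines the two up, which is exactly what the manual slider lets you do by ear and eye.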

To re-open the sample slider, hover your mouse over the slider icon. Take your mouse off the wave transport to minimise the sample slider.

Decoding The Mix #7 - All-Star Collaboration
By Tom Frampton | 25 September 2018
When the biggest artists come together to create a song, the result is destined for success. Heavy-hitters Mark Ronson, Diplo (the duo are calling themselves ‘Silk City’) and Dua Lipa have released a collaboration that has gained over 10.5 million streams on YouTube in less than a week. In this post, I’ll be taking a close look at ‘Electricity’ by Silk City to see what we can learn from these international superstars. Hopefully, you’ll take away something new to improve your next production.

Structure & Arrangement

Electricity is a great example of talented producers taking influence from a more underground genre (90s house) and making it accessible to the commercial market. The infographic below details the structure and arrangement of the production. The song is driven by the piano and vocals, which are heard throughout. The bass and claps are also prominent, dropping out occasionally to add suspense and change. The kick, which is usually a driving element of a house-inspired track, is less prominent and is only heard in the choruses and second verse. The shaker, guitar and ‘vibe noise’ are extremely sporadic and are barely heard in the mix. These more background elements help add novelty to the mix and keep the listener engaged. This is further accentuated at the end of the song, where a live kit is heard for just the final 4 bars; these sounds were not introduced at any other point in the track.

House tracks are typically over 6 minutes long and are never in a hurry getting from section to section. Electricity has 16-bar verses, which are a little more drawn out than other ‘commercial house’ tracks (such as the 8-bar verses in ‘One Kiss’ by Calvin Harris). However, Electricity still hits the golden rule of getting to the chorus before the 1-minute mark. If you create commercial music, aim to reach the chorus within 60 seconds; it’s a tried and tested technique that helps audiences stay connected whilst listening to your track.

Big Entrance

I love it when a chorus comes in with an epic entrance. When I’m listening to music I want to have that moment where the track builds and climaxes. For that to happen the sections have to contrast. If both the ‘build-up’ and the ‘climax’ have very similar technical properties, then I won’t notice much of a difference. There are a number of ways you can make the chorus feel more epic than the build-up. The simplest is making the chorus louder. Another option is to make the chorus much wider than your build-up. In Electricity, they went for loudness and a change in dynamics. The build-up is quiet and dynamic, whereas the chorus is loud and very compressed. As we can see in the visual below, our plugin LEVELS is showing that the build-up is 12.4DR and the chorus is 7.5DR. This dramatic change gives a clear signal to the listener that the music has moved to the new section. It makes it easy to listen to and digest.
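LEVELS uses its own DR metric, but the underlying idea of “how dynamic is this section?” can be approximated with a peak-to-RMS crest factor. A simplified Python sketch (assuming numpy; this is not the LEVELS algorithm):

```python
import numpy as np

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB: higher = more dynamic,
    lower = more compressed."""
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(peak / (rms + 1e-12))
```

Measured this way, a heavily limited chorus will read several dB lower than a quiet, dynamic build-up, which is exactly the kind of contrast described above.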

Balance In The Mix

I was surprised to see Mark Ronson take on a ‘House’ project like this given that he is more known for his live pop projects. His track ‘Uptown Funk’ with Bruno Mars was a massive hit, so I wanted to see how the tonal balance compared between the two tracks. Both are commercial tracks, but they can be classified into different genres. With that in mind, we can have an idea of what is expected ‘tonally’ from a house track compared to a commercial Pop/Funk/Soul production. In the visual below our plugin REFERENCE is giving us an insight into the difference in tonal balance.

Separation In The Mix

One thing that stood out to me in the mix is just how mono the kick and bass are. In a lot of mixes, you’ll find the upper frequencies of the kick and bass edging out into the stereo field. If you listen to just the ‘sides’ of the mix you won’t hear even a hint of the kick and bass. Taking this approach, you can be absolutely sure that the kick and bass will translate perfectly when heard on a club sound system.

The kick and bass along with the vocals and piano make up the four main elements of the mix. All of which are positioned centrally in the mix. The piano has some width to give space to the vocals which occupy similar frequencies. The other elements which jump in and out of the arrangement are placed wider in the mix. This helps them add interest to the production without compromising the attention placed on the main elements. Using the stereo width in this way helps avoid conflict and battling frequencies.
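You can verify this kind of mono-ness yourself: split a stereo bounce into mid (L+R) and side (L−R) and compare their energies. A quick Python sketch, assuming numpy and two equal-length channel arrays:

```python
import numpy as np

def side_energy_db(left, right):
    """Energy of the side signal relative to the mid signal, in dB.

    Very negative values mean the content is essentially mono;
    values near 0dB mean left and right are largely uncorrelated.
    """
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    eps = 1e-12
    return 10.0 * np.log10((np.mean(side**2) + eps) /
                           (np.mean(mid**2) + eps))
```

Running this over just the low band (say, below 150Hz) is a quick way to check whether your own kick and bass will survive a mono club system.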

Technical Analysis

I have only positive things to say about the technical details of Electricity! Let’s start with the loudness. YouTube only turned the track down by 2dB (decibels), meaning the uploaded audio was sitting around -10 LUFS (loudness units full scale) integrated. This is a very conservative level and shows that they didn’t just take the uninformed approach that ‘louder is better’. Our quality control app EXPOSE is also showing no phase issues, and there are no areas where the left/right balance is noticeably unequal. EXPOSE is showing a loudness range of 7.9LU (loudness units), which shows that there is a considerable difference in loudness between the various sections of the track. More static loudness ranges can be found in hip-hop, where they generally sit around 3LU. Anything over 6LU can be considered dynamic.

The track streams on the louder side of the average for YouTube and Spotify. I’ve been doing some more testing and it looks like tracks with a more dynamic loudness range play back slightly louder. So if you do want your music to play back a touch louder, try increasing the difference between the loudness of your verses and choruses.

What Did We Learn?

- Getting to the chorus within the first 60 seconds helps to keep hold of your listener’s attention.

- You can use a difference in dynamics to add contrast between sections.

- House music often has a fuller low-end and more controlled high frequencies compared to commercial pop/funk/soul music.

- Keeping the kick and bass totally mono will ensure that they translate well on a club sound system.

- Having a more dynamic loudness range can make your tracks stream a little louder.

Now It's Your Turn!

Deconstructing a mix like this is a great way to make real improvements in your music production. One of the six cheat-sheets in my eBook ‘Never Get Stuck Again’ is a cheat sheet to help you decode any mix in minutes.

]]>
https://www.masteringthemix.com/blogs/learn/speed-stem-mastering2018-08-27T20:30:00+01:002018-08-28T18:07:45+01:00Speed Stem MasteringTom Frampton
Being more efficient in the studio can help you create more music in less time. In this video, you can watch me stem master an EDM track by ‘Masteria’ in under 30 minutes. This is an exercise in utilising systems to speed up your workflow.

Below is a list of four simple systems that I used in this video. Had I not had these systems in place, it would have taken me at least 20% longer. If music is your business (or you want it to be), you can’t afford to skip these simple steps.

1. Have templates ready to accommodate the majority of your projects.

2. Have plugins bypassed but ready in the project so they’re there if you need them. Ideally close to your favourite settings, or flat and ready for any adjustments.

3. Have presets saved in REFERENCE so you always have your favourite reference tracks at hand.

4. Always check the mastered file with EXPOSE to make sure you haven’t got any technical issues with your master before you upload it or send it to a client.

]]>
https://www.masteringthemix.com/blogs/learn/decoding-the-mix-6-sync-able-synth-pop2018-08-21T18:00:00+01:002018-08-21T18:00:00+01:00Decoding the Mix #6: Sync-able Synth-popTom Frampton
M83’s ‘Midnight City’ is one of those songs that is so catchy, interesting and cool that it’s constantly being used in adverts and TV programmes. What is it about this epic production that makes it so valuable to brands? And how can we use this information to make our music more attractive to the lucrative business of music sync?

What is 'Music Sync'?

‘Music Sync’ is short for ‘music synchronization license’, which is a license that allows the licensee to use a composition in conjunction with film, TV, adverts, video games, websites, movie trailers or any other visual media output. The license is granted by the holder of the copyright of the music that is to be licensed. The fee paid for this type of license can be anything from a few hundred dollars to tens of thousands of dollars. It can be the largest revenue stream for some artists.

Unique & Brand-able Sound

Brands want to stand out and be unique. They want to portray themselves as cool and desirable, and that needs to come across in the music they use.

The lead synth heard during the intro of ‘Midnight City’ (and throughout the track) is probably the most memorable and significant element of the song. It’s a simple but unique melody and the sound is like nothing you will have heard. This sound was created by Anthony Gonzalez (leader of M83) singing the melody then smashing it with some heavy distortion.

This creative approach adds the human touch that electronic music often lacks. During your next production see if you can create an instrument out of your own voice. Manipulate the sound so it’s not obvious that it’s a vocal recording. This will make that sound totally unique to you, whilst adding an extra human touch to your production.

Verse Vs Chorus Dynamic Range

The verse in ‘Midnight City’ is quite sparse and dynamic, featuring the punchy drums and bass parts. This contrasts heavily with the dense and rich texture of the chorus, where the sustained synths fill the speakers. As the LEVELS images below show, the difference in dynamic range is considerable.

To make an impact with music, your sections should have a contrast. In previous Decoding The Mix posts I’ve found that a lot of tracks go for a more mono verse and a wider chorus. But this track goes for a relaxed and dynamic verse and then hits the listener with a massive wall of high-energy sound for the chorus. The effect is that the chorus feels epic. Brands want to be seen as epic, so this is a good approach to make your music more sync-able. Remember that if both your verse AND your chorus are epic, then neither feels epic. One has to be more epic relative to the other.

Stereo Spread

This production has a lot of focus in the mids. The synths are massive, warm, bright and take up a lot of space. Finding a place for everything in the mix must have been a challenge. Somehow they managed to get the vocal to cut through whilst being immersed in a swirling swamp of reverb, modulation, and delay. The infographic below shows the rough placement of the different elements of the mix within the stereo spectrum. The main takeaway is that the synths that occupied the same frequencies had different widths. They also seem to have found a different synth to fill every possible space in the stereo field and frequency spectrum to get the fullest sound possible.

Structure and Arrangement

‘Midnight City’ has a lot going on. During the chorus, the instrumentation is densely packed. The final chorus and outro form a crescendo where almost all of the elements come together for the first time. The stripped-back verses have only 4 elements at times and often build the instrumentation to lead up to the chorus, so it’s not such a shock when it drops.

Often in music production ‘less is more’. ‘Midnight City’ is a great contradiction of the ‘less is more’ advice. The packed chorus sacrifices the punch of the drums, but it works.

What Did We Learn?

Create cool and desirable music for a better chance of earning from Music Sync opportunities.

Recording your own voice then warping it into something new can give you a unique sound with a human flavor.

You can use dynamic range as a way to differentiate your verse from your chorus.

If you have a densely populated mix, make sure each element is occupying a different frequency OR stereo space.

Now It's Your Turn!

Deconstructing a mix like this is a great way to make real improvements in your music production. One of the six cheat-sheets in my eBook ‘Never Get Stuck Again’ is a cheat sheet to help you decode any mix in minutes.

]]>
https://www.masteringthemix.com/blogs/learn/the-perfect-monitoring-levels-for-your-home-studio2018-08-12T00:30:00+01:002018-08-12T00:30:00+01:00The Perfect Monitoring Levels For Your Home StudioTom Frampton
We hear frequencies differently at various volume levels. We hear less bass when it's quieter, and more bass when it's louder. In this post, I’ll walk you through 5 steps you can take to calibrate your speakers to a set monitoring level suited to your studio space. This will help you get the most accurate response from your monitors in your studio every time you go to mix or master a track. Your ears will become used to the level you set and you will intuitively recognize when your tonal balance is off or if the track is too loud or too quiet.

1. You first need to decide what digital level you want to mix to. If you’re mastering audio for streaming platforms you might calibrate to around -14LUFS. If you’re making club music you might choose a figure closer to -9LUFS.

2. Now you’ll need a pink noise file for the calibration. The pink noise file should match the level you chose in step 1. Pink noise contains equal energy per octave, which makes it sound even across the frequency spectrum to our ears. If you have an untreated room, you can restrict the pink noise to 500Hz-2kHz to minimize low-frequency standing waves or reflections. Open up a test oscillator in your DAW and select the pink noise setting. You can use LEVELS to adjust the pink noise to the ideal LUFS value of your future music projects. In the example below, the test oscillator outputs audio at -9LUFS when the output is set to -3dB and -14LUFS when the output is set to -8dB.

3. An SPL meter is now needed to measure the acoustic sound-pressure level produced by your monitors. You’ll need an SPL meter with a C-weighted filter option, which is flatter than the A-weighted response which is commonly used for general measurements. The SPL meter will also need a ‘slow’ or ‘averaging’ mode. These can be picked up for around £15/$20 on Amazon. Some phone apps can also be surprisingly accurate.

4. Now you need to work out at what volume you want to listen to audio in your studio. 85dB SPL used to be a common suggestion for monitoring levels, but that figure was intended for larger spaces such as cinemas. That level sits close to the flattest portion of the equal loudness contours (a more accurate update of the Fletcher-Munson curves). It was later discovered that the method used for measuring the pink noise signal was slightly inaccurate, and the reference level for cinemas was changed to 83dB SPL. Either level would be super loud and overwhelming in most home studios. Most home studios are smaller than 142 cubic meters, so 73-76dB SPL (C) is a more appropriate target. Below is a table created by Sound On Sound with recommendations based on room size.

5. Now we bring everything together, play the pink noise file and adjust the monitors to match the ideal reference level for your studio. Let’s say you wanted to master audio to -14 LUFS and you had a small home studio between 42 and 142 m³. You would want the -14 LUFS pink noise file to sit at around 76dB SPL (C).
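Steps 2 and 4 can be sketched in Python. The pink-noise generator below shapes white noise with a 1/√f spectral envelope, and `target_spl` uses only the illustrative figures quoted above; it is a rough stand-in, not the full Sound On Sound table:

```python
import numpy as np

def pink_noise(n, seed=0):
    """Approximate pink noise (equal energy per octave) made by
    shaping white noise with a 1/sqrt(f) spectral envelope."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]              # avoid dividing by zero at DC
    spectrum /= np.sqrt(freqs)       # power falls 3dB per octave
    pink = np.fft.irfft(spectrum, n)
    return pink / np.max(np.abs(pink))   # normalise to full scale

def target_spl(room_volume_m3):
    """Rough C-weighted SPL target based on the figures quoted
    above -- illustrative thresholds only."""
    if room_volume_m3 < 42:
        return 73.0                  # very small rooms
    if room_volume_m3 <= 142:
        return 76.0                  # typical home studio
    return 83.0                      # large, cinema-style rooms
```

In practice you would still set the exact LUFS level of the noise file with a loudness meter such as LEVELS, as described in step 2.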

To take the measurement:

Place the SPL meter in the listening position. Most SPL meters are designed to have the microphone pointed to the ceiling rather than the source. Check the instructions to be sure.

Set your monitor volume to its lowest setting using your monitor controller, or the volume control on the monitors themselves if you don’t have a controller. Make sure your output and master fader in your DAW are set to 0dB (unity gain), play your pink noise audio, and slowly raise the monitor volume until the SPL meter reads your target level.

Make a note of the volume so you can quickly set the same monitor level in the future. It’s improbable that you will use this level 100% of the time. It’s always useful to hear your mix super quiet to make sure the right elements are poking through the mix. You also might want to play your music loud from time to time for fun or to impress your clients and friends.

You should now have monitors that are calibrated to a loudness that works with the size of your room and the loudness of the music you want to create. You’ll get a more balanced frequency response which will help you consistently dial in the right amount of bass to your mixes.

You can also use this new monitor level to help you find a similar level in your headphones. (It’s extremely difficult to accurately measure loudness on headphones at home.)

Whether you create rock music or not, you’ll certainly learn something new from these geniuses! Muse are without a doubt one of the most popular and respected rock bands of all time. One of their biggest hits was ‘Uprising’ from their Grammy-winning album ‘The Resistance’. In this blog, I’ll be decoding ‘Uprising’ to see what techniques we can take away and use in our next mix.

Stereo Placement

The stereo placement infographic below shows a lot of wider elements, but the majority of ‘Uprising’ is mixed very mono. The main components heard throughout the mix (bass, drums, and vocals) are very central. The elements that jump in and out of the mix (backing vocals, lead guitar, synths) are super wide. If you listen to this mix in mono you’ll hear that the fundamental sonics are almost identical.

The wider elements that we see (Synth 1, Synth 2 and Lead Guitar) are never playing at the same time (You’ll see this in the arrangement infographic later in this post). It could be a problem if they played simultaneously in the mix as they occupy similar frequencies and have similar timbres. This can cause masking and can be confusing for the listener.

Getting The Muse Sound

Uprising was mixed by mixing legend Mark ‘Spike’ Stent. Stent has 34 Grammy nominations with 5 wins, one of which he earned mixing this track. The mixing was done in Muse’s private studio, which has an SSL G-series console at its heart.

DRUMS

The recorded audio for this track was so good that Stent didn’t use any samples. He meticulously analyzed each drum and room track to check the timing and phase. When monitoring, he’s constantly flipping phase as he believes it’s essential for getting a big sounding mix through all playback systems.

Stent loves to fine-tune and sub-group. He’ll have 4 kick tracks, with effects on each channel, which are then sub-grouped and compressed to glue it all together.

Dominic Howard (the drummer of Muse) likes to make sure his awesome fills punch through the mix. So Stent ensured the toms worked tonally with the track and automated the hits to bring them into focus.

The drums are heard almost constantly through the track. Stent’s attention to detail ensured that the drums sounded huge, punchy and tight from start to finish. Here are the tools he used to get the drum sound: Waves SSL Channel, SSL G-Series desk EQ & dynamics, Metric Halo Channel Strip, Chandler EMI TG12413 (plug‑in) & TG1 (hardware).

BASS

There were 5 bass tracks recorded in total for Uprising. A bass synth, a DI, a bass sub, and two tracks of bass effects. It takes a genius like Stent to make this many bass tracks work together. Learning how to deal with phase really pays off in these situations.

The bass was EQed and compressed with an SSL plugin and was occasionally sent to a distortion plugin (Sansamp) to help differentiate different sections sonically. The top bass was gently limited using the Purple MC77 limiter to control the dynamics. The other 4 bass elements were sub-grouped and occasionally sent to the Sound Toys Filter Freak plug‑in for sonic variation between sections.

VOCALS

Matt Bellamy’s vocals are super upfront and clear without being harsh. How was that achieved? A lot of dynamic control and an interesting approach to de-essing. Stent used a Waves de-esser to heavily scoop out the harsh sibilances, then followed it with a Dbx 902 de-esser to just ‘tickle the signal’.

To control the dynamics Stent used a Teletronix LA2A and a Universal Audio 1176, followed by a Standard Audio Level-or. This succession of compressors allows Stent to sculpt the dynamics, tone and attitude of the vocal with great precision.

Stent went through each backing vocal track (there were many) to ensure the timing was on point. He also individually tweaked a de-esser to suit each recording perfectly. Stent is certainly a pro who doesn’t cut corners… The result is an incredible mix and another Grammy sitting in his studio.

Structure & Arrangement

Muse is known for being influenced by the compositional traits of classical music. You can hear it in the harmonies and melodies used in their epic and cinematic songs. The instrumentation in Uprising is very rock based with a few synth sounds thrown in to give it a modern touch.

The bass relentlessly drives the track from start to finish without taking even a bar’s rest. Most records I’ve analyzed intentionally cut the bass during the build-up or a verse to add variation, so this stuck out to me as a unique approach. Similarly, the drums are almost completely constant, resting for just 3 bars. This rhythm section, along with the vocals, makes up the three main elements that are heard throughout the track. The other instruments come in and out sporadically to add sonic variation to the different sections. I particularly liked how the synth sound changed from the first half of the track to the second half. Something I also noticed when I analyzed Calvin Harris - One Kiss.

Just by looking at this visual you can see how Verse 2 has more instrumentation than Verse 1. Chorus 2 and 3 are also fuller than Chorus 1. This makes each successive section more interesting than the last which keeps the listener gripped to the song.

(The infographic shows the structure of the radio edit purchased from iTunes which was 3mins : 35secs long. The YouTube edit was 4mins : 9secs and the Spotify version was an epic 5mins : 3secs).

Verse vs Chorus Width

This is something I’ve seen in almost every track I’ve analyzed. The chorus is mixed wider than the verse. This makes the chorus feel larger and more encapsulating than the verse. This can only be achieved if the verse is mixed fairly centrally to create the contrast.

Technical Analysis

Mark ‘Spike’ Stent’s thorough work correcting phase issues is displayed in the EXPOSE screen grab below. The correlation heat map is very focused towards the right-hand side ‘+1’ label. This mix would translate to mono very well!

The constant bass throughout the arrangement gives this track a loudness range of around 3 to 4LU. This means that the different sections have a very similar loudness.

iTunes

When I dropped ‘Uprising’ (purchased from iTunes) into EXPOSE, I was expecting some horrific peaks, considering it was released in 2009 when MFiT wasn’t a well-known initiative and streaming normalization wasn’t a factor that many engineers considered. However, as you can see, the track only peaks above 0dBTP (decibels True Peak) on three isolated occasions. This is better than many tracks in the iTunes Top 100 today! I suspect that the track was mastered to 0dBTP using a high-quality true peak meter and the peaks were introduced when converting from a high-quality file to AAC (Advanced Audio Coding) for iTunes.
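You can approximate a true-peak reading at home by oversampling before taking the peak. A minimal Python sketch, assuming numpy and scipy (a full ITU-R BS.1770 meter also specifies a particular interpolation filter, which this simplification skips):

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak_dbtp(samples, oversample=4):
    """Approximate the true peak level in dBTP by upsampling, which
    reveals inter-sample peaks a plain sample-peak meter misses."""
    upsampled = resample_poly(samples, oversample, 1)
    peak = np.max(np.abs(upsampled))
    return 20.0 * np.log10(peak + 1e-12)
```

Lossy encoders like AAC can push inter-sample peaks higher still, which is consistent with the isolated overs seen above.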

Spotify

This track streams a little lower than the -14 LUFS average. This could be because of the considerably un-dynamic loudness range. Though this is just speculation based on my research on normalization recommendations described in EBU R-128.

Muse’s music has a particularly thick and compressed sound, proven by the fact that Spotify turned Uprising down by 5dB. Let’s see with their next album if they go for a more dynamic approach to optimize their music for streaming platforms. Perhaps they’ll stick to the sound that has worked for them for over a decade.

UPDATE: Muse just released an official music video for their song 'Something Human' (new album coming Nov 2018)... YouTube plays the audio 5.1dB quieter than the original volume, so it looks like Muse are sticking to their loud and compressed sound.

YouTube

YouTube’s normalization process is super consistent. Yet another track coming in bang on -13 LUFS. For 2009, -8.6 LUFS integrated was fairly conservative, so ‘Uprising’ can be seen as a relatively forward-thinking production. Mastering to 0dBTP at a conservative loudness future-proofs your music.

What Did We Learn?

Automating distortion to different sections of a track can help differentiate the sections sonically.

Checking timing and phase on recorded drum tracks is absolutely essential for punchy drums.

The G-Series SSL console was used extensively to get the ‘Muse’ sound.

Switching synth sounds after the first chorus can help keep the progression of the track interesting.

Two de-essers can sound better than one.

Getting the mix to sound very similar when heard in both mono and stereo can help get a super solid mix that translates well in many playback scenarios.

Now It's Your Turn!

Deconstructing a mix like this is a great way to make real improvements in your music production. One of the six cheat-sheets in my eBook ‘Never Get Stuck Again’ is a cheatsheet to help you decode any mix in minutes.

]]>
https://www.masteringthemix.com/blogs/learn/a-tip-for-huge-and-clean-sounding-mixes2018-07-10T01:00:00+01:002018-08-04T21:40:30+01:00A Tip For Huge AND Clean Sounding MixesTom Frampton
I mix and master a lot of tracks, so I get to see first hand what the most common mixing mistakes are. I want to share with you a solution to an issue that many producers struggle with… If you’ve asked yourself questions like:

What bass sound will work best with this kick?

How can I get my guitar to sound great with my vocal?

Why does my mix sound messy?

Then read on for some answers.

Good Production Decisions = Good Final Mix

It seems that one of the biggest challenges that music creators face is that their channels are competing for the same ‘space’ in a mix, and they’re uncertain of how to fix this. Mixing is essentially just getting all the different elements of your song to work as well as possible with each other. In a great sounding mix, the channels will complement each other, not compete.

So how do you create a song with sounds that complement each other?

The different elements within your mix will have different characteristics. For example, your kick has very different sonic characteristics to your vocal, so they complement each other very well. However, a kick and a bass can have very similar characteristics so they might compete and negatively affect the sound of your mix.

If we break down the characteristics of an audio channel, we arrive at 6 fundamental attributes: frequency, rhythm, timbre, energy, stereo width, and volume. If you have multiple channels in your mix that are very similar in all 6 categories, your music will sound cluttered. Some overlap is fine if it’s musical, but you’ll want to add elements that fill in the gaps in your production to give your track a great balance.

Keep this in mind for your next production. When you’re building your track, be purposeful with the new channels that you’re adding to your arrangement. Ask yourself:

“Is this new channel complementing or competing with what I’ve already got in my session?”

If it’s competing, then think back to the table above and tweak one or more of the attributes of the sound to get it working better in your mix.
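One way to make that question concrete is to treat each channel as a set of the six attributes and count overlaps. This is a toy model in Python; the labels and threshold are made up purely for illustration:

```python
ATTRIBUTES = ("frequency", "rhythm", "timbre", "energy", "width", "volume")

def competing(channel_a, channel_b, threshold=5):
    """Flag two channels as competing when they match on too many
    of the six fundamental attributes."""
    shared = sum(channel_a[attr] == channel_b[attr] for attr in ATTRIBUTES)
    return shared >= threshold

kick = {"frequency": "low", "rhythm": "on-beat", "timbre": "punchy",
        "energy": "high", "width": "mono", "volume": "loud"}
bass = dict(kick, timbre="smooth")  # differs in timbre only
# competing(kick, bass) -> True: tweak one more attribute of the bass
```

A kick and a lead vocal, by contrast, would differ in most attributes and come back as complementary.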

Do you want ALL of my best songwriting, mixing and mastering techniques? Click the green button below. Everything I know that keeps my clients coming back is in that book.

]]>
https://www.masteringthemix.com/blogs/learn/decoding-the-mix-4-the-king-of-hip-hop2018-06-26T21:10:00+01:002018-06-26T21:10:52+01:00Decoding The Mix #4 - The King Of Hip HopTom Frampton

Dr. Dre has had a colossal influence over the sound and development of Hip-Hop. His golden touch boosted the careers of many household names such as Kendrick Lamar, 50 Cent, Snoop Dogg, Eminem… the list goes on. In this blog post, I’ll be decoding his iconic song ‘Still Dre’ to uncover his approach to music production. Hopefully we’ll come away with inspiration and ideas we can utilise in future productions.

Stereo Spread

In the infographic below, we can see that Dre has mixed the main elements very centrally. The kick, bass/cello, piano riff, vocals and snare are all mixed almost completely mono. The chorus vocals and synth pluck are the only elements that are mixed considerably wide.

There is a logical explanation for this. Dre started his music career DJing in a club called Eve After Dark. When he began making music, he would play his track in the club, see how the crowd reacted, then make tweaks in the studio. The club would have output the audio in mono, so his mixes had to sum to mono well.

If you listen to ‘Still Dre’ and toggle between stereo and mono, you’ll hear almost no sonic difference.

Dre’s mixing approach of favouring mono is further backed up by one of Dre’s protégés who discussed a technique he picked up from working with Dre.

Derik Ali:

“Dre always told me that if I could get something to sound amazing on crappy speakers, it’ll sound brilliant on normal speakers. I mix on just one Auratone, because I like specific elements of the mix to pop out, and listening in mono on that speaker really helps me define that. It’s difficult to assess your balance [in stereo], whereas when you listen in mono, you can gauge the true value of how everything sits in the mix.” Source.

This is a killer technique to help you get solid mixes. In addition to monitoring through a limited range speaker in mono, try turning the volume right down so you can barely hear the audio. If your main elements still feel balanced and you can still decipher the lyrics you’re on the right path to a great mix.

Structure and Arrangement

‘Still Dre’ repeats a very simple but infectious 2-bar piano riff throughout the song. The drum loop and bass/cello are also relentlessly driving the track without pause. Keeping it simple with these three unchanging elements allows the lyrics to become the focal point and grab the listener’s attention. The high strings jump in and out of the arrangement to give a subtle change every 8 bars, though they aren’t unique to either the verse or chorus. Contrastingly, the synth pluck only comes in during the chorus, solidifying the structure and progression of the song.

Thick Transients

‘Still Dre’ has a fairly sparse arrangement, but the sounds are so full-bodied that they fill the speakers and hit the listener in the chest. So how does Dre get that thick transient sound? When talking to Studio Sound in September 2001 (2 years after releasing ‘Still Dre’), Dre said: “I like the compressors on the SSL. I usually have the ratio up to about eight or 10 on a lot of things.” This approach to compression can get your clicky transients sounding thicker. I’ve run a snare through these settings to give you a visual of how the audio can change.
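The effect of such a high ratio is easy to see in a static gain computer, the level-mapping at the heart of any compressor. A hard-knee sketch in Python; the threshold is a hypothetical value, since only the ratio is documented in the quote above:

```python
def compressor_gain_db(input_db, threshold_db=-20.0, ratio=10.0):
    """Gain change (in dB) applied by a hard-knee compressor.

    At 10:1, a signal 10dB over the threshold is pulled down 9dB,
    leaving only 1dB of the overshoot -- sharp transients get
    squashed into a dense, thick body of sound.
    """
    overshoot = input_db - threshold_db
    if overshoot <= 0:
        return 0.0
    return -overshoot * (1.0 - 1.0 / ratio)
```

With make-up gain restoring the lost level afterwards, the quiet body of the snare comes up while the transient stays put, which is what “thicker” means here.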

Loudness Range

There is very little dynamic variation between the different sections of this track. It comes in at 1.8LU, which is about as low as it gets.

A low loudness range is common for Hip-hop, but analyzing a few others shows that 'Still Dre' has a more static loudness than many of the other Hip Hop hits… All except the one produced by Dre.

How Do New Tracks Compare?

When comparing ‘Still Dre’ to ‘God’s Plan’ by Drake, it was clear that there were a number of differences. Most notably, the tonal balance was almost incomparable. The screen grab of REFERENCE below shows that ‘Still Dre’ has almost 6dB less perceived volume in the low frequencies than ‘God’s Plan’. The low frequencies are also much punchier in ‘Still Dre’, and it has more prominent and more compressed mid frequencies compared to ‘God’s Plan’. Not surprising considering we learned that Dre likes to turn the ratio up to 10:1 on his SSL. The high frequencies between the two tracks are fairly consistent.

It’s good to know and understand how trends have changed. You can then find the perfect balance of being influenced by iconic tracks and infusing the mixing trends of current chart-topping hits.

Technical Analysis

'Still Dre' plays back pretty quietly on streaming platforms. YouTube plays it back at -17.3 LUFS, and the Stats For Nerds panel shows this is 4.1dB below their target level. However, YouTube doesn’t turn quieter tracks up.

It’s a similar story on Spotify, with a playback level of -15 LUFS int: on the quieter side of the spectrum and slightly below the average of -14 LUFS int.

What Did We Learn?

• Mixing in mono through one limited range speaker can help build a super solid mix.

• Simple arrangements allow for rap vocals to take center stage.

• Subtle but frequent changes in the instrumentation can keep the listener engaged.

• Using a high ratio on a compressor can thicken up your transients.

• Hip Hop often has a low Loudness Range.

• Modern Hip Hop tracks have a considerably different tonal balance to this iconic song.

• YouTube doesn’t turn quiet music up, so aim for about -13 LUFS int, or you could end up sounding too quiet.

Now It's Your Turn!

Deconstructing a mix like this is a great way to make real improvements in your music production. One of the six cheat-sheets in my eBook ‘Never Get Stuck Again’ helps you decode any mix in minutes.

How To Mix Guitar and Vocals For YouTube
Tom Frampton, 2018-06-12
https://www.masteringthemix.com/blogs/learn/how-to-mix-guitar-and-vocals-for-youtube

How can you make a simple guitar and vocal arrangement stand out from the crowd and connect with your listeners? This blog post will give you some killer mixing techniques to get the best possible sound when you’re mixing guitar and vocals for YouTube.

Recording

Capturing the audio

Even with low budget equipment, you can maximize the results by making a few good decisions during recording.

Choose your recording location thoughtfully. Avoid recording in a room or space that has a lot of natural reverb. It’s better to record in a more controlled space and add the reverb within your Digital Audio Workstation.

Do a couple of rough test takes. Record some audio in a few different locations and see which one sounds the best with no production. This will give you the best starting point. You want your audio to sound as natural as possible as if the performance was happening right in front of you. This is quite difficult to achieve as different microphones have different characters and the room you record in also affects the sound. We’ll get into this in depth during the ‘Mixing’ part of this blog.

Wooden or stone floors below the microphone will make a recording sound naturally brighter. Carpet floors below the microphone will make a recording sound less bright.

Getting perfect takes

You might prefer to do the whole take in one go to give a live feel to your performance. Alternatively, you might do a number of takes and choose the best bits for the arrangement. Whatever your approach, try to get the best possible source audio and don’t rely too much on fixing things after recording. Choosing takes that are in tune with great timing will sound much better than chopping up a vocal and adding auto-tune.

Mixing

Now that you’re totally happy with the takes and performance it’s time to get the audio sounding amazing.

Cleaning Up The Mix

To start with, make sure you remove any unwanted audio like keyboard clicks before and after your takes. We can use a noise gate to only allow the audio above a set volume to be heard in our mix. We can set this up to make the channel silent when we don’t want to hear it, cutting out any background noise that might be present in the audio. This step sets the foundation for a clean and professional mix.

1. Set the threshold so all the audio you want to hear passes through the gate. The gate should close to remove unwanted noise.

2. Adjust the attack, hold, release and lookahead to work with your audio to make sure it still sounds musical.
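The two steps above can be sketched in code. This is a minimal illustration with numpy, not any particular plugin: the parameter names and default values are made up, and a real gate would also offer hold and lookahead controls.

```python
import numpy as np

def noise_gate(signal, sr, threshold=0.02, attack_ms=5.0, release_ms=80.0):
    """Return the gated signal; `threshold` is linear amplitude (0..1)."""
    attack = np.exp(-1.0 / (sr * attack_ms / 1000.0))    # smoothing when opening
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))  # smoothing when closing
    env = 0.0    # envelope follower state
    gain = 0.0   # smoothed gate gain (0 = closed, 1 = open)
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        env = max(abs(x), env * release)              # fast rise, slow fall
        target = 1.0 if env > threshold else 0.0      # step 1: threshold decision
        coeff = attack if target > gain else release  # step 2: musical transitions
        gain = coeff * gain + (1.0 - coeff) * target
        out[i] = x * gain
    return out

# Quiet background noise (first half) is silenced; the louder material passes.
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
sig = np.where(t < 0.5, 0.005, 0.5) * np.sin(2 * np.pi * 220 * t)
gated = noise_gate(sig, sr)
```

In a DAW you would set these values by ear; the point is that the threshold decides when the gate opens, while the attack and release decide how abruptly it moves.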

EQ

Equalisers give us total control over the frequencies of our mix. They allow us to sculpt and shape our sounds to our desire. When you’re EQing, you’re doing one of two things: cutting or boosting frequencies. I almost always go for EQ first, as the audio that’s fed into a compressor will change how the compressor reacts. I might tweak the settings of the EQ and compressor simultaneously, but the EQ comes first in the chain.

The first thing I’ll do with my EQ is attend to any problems that I need to fix on that channel. By default, I’ll check the low end of channels that I don’t want to have low end. I’ll use a parametric EQ to create a steep low-cut filter (35dB to 90dB per octave slope) to totally remove those humming low frequencies.

Once the mix is feeling decongested, I’ll look at any other problematic frequencies that have crept into the mix. I’m looking for disproportionate frequencies that make a channel sound unnatural when soloed. It doesn’t matter what the audio is (vocal, keys, guitar etc) my approach is the same. I want to make it sound like I’m hearing it being played live in front of me. You can use a tube or console emulation EQ to sweeten up the sound. It’s a good idea to use a reference track to help you get your track sounding like a professional release.

If possible, switch between 2 or more sets of monitors, speakers, earbuds or headphones to get a broad perspective of how you're shaping the sound. If your channels are beginning to sound ‘real’ through all playback systems you’re on the right track.

Compression

In a nutshell, compressors reduce the difference between the loudest and quietest parts of the audio they’re processing. They allow you to control, color and manipulate the dynamics of audio. They’re powerful tools, but using the wrong settings can suck the punch out of your music.

For this mix, I automated a gain plugin on the vocal to try and keep the volume fairly constant throughout the performance. This means that when I use a compressor on the channel, it doesn’t have to work too hard, keeping the audio nice and dynamic.

The guitar compression thickens up the sound enhancing the body and weight. I’ve set the threshold to only catch the peaks of the audio, giving the audio a nice open and realistic feel whilst adding some control to the dynamics. For acoustic guitar and vocals, I would recommend using a ratio of 4:1 or less to keep things sounding dynamic. Keep the attack above 10 milliseconds and the release above 20 milliseconds to let your transients punch through the mix.

Stereo Spread

The stereo width can help bring clarity to your mix as well as adding interesting variation between sections. For this mix, I added a lot of width to the backing vocals during the chorus and mixed the vocals totally mono for the verse. This gives a lift to the chorus. The best way to get things sounding wide is to record two different recordings of the same part and pan them left and right. I did this for the acoustic guitar, the panning gets wider as the track progresses.

Here’s a trick you can use if you only have one take but want to make it sound wide, and it’s the same technique I used to make the backing vocals sound wide.

1. Duplicate the channel you want to sound wide. Pan one left and one right, by roughly the same amount. You’ll notice that the channels seem to sound like they’re coming from the middle still.

2. Now take a sample delay plugin and shift one of the channels back until the audio is sounding nice and wide.
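The two steps can be sketched with numpy. This is a hypothetical illustration of the trick, not a plugin: the delay time is an assumption (widening delays typically sit somewhere in the 5–30ms range).

```python
import numpy as np

def widen(mono, sr, delay_ms=12.0):
    """Duplicate a mono take into stereo, delaying one side for width."""
    delay = int(sr * delay_ms / 1000.0)
    # Step 1: duplicate the channel (the left copy stays dry).
    left = mono
    # Step 2: shift the duplicate back with a sample delay.
    right = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    return np.stack([left, right], axis=1)  # (samples, 2) stereo array

sr = 44100
mono = np.sin(2 * np.pi * 440 * np.linspace(0, 1, sr, endpoint=False))
stereo = widen(mono, sr)
```

Check the result in mono: summing the two sides causes comb filtering, so tweak the delay until the widened part still survives a mono fold-down.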

FX

Reverb gives the listener a very obvious sensation of experiencing a sound in a defined space. You can make a channel feel very close and personal to the listener by using a short and subtle room reverb. Conversely, you can make a channel sound like it’s far off in the distance by adding a longer and more dramatic chamber reverb.

Too much reverb can make the mix feel washed out, so subtlety is key. A great approach is to use a ‘send’ channel rather than inserting the reverb on the source audio itself. This way you keep the dry signal intact and you can dial in the perfect amount of wet reverb.

Producers tend to add more reverb when listening through monitors, and they use less reverb when listening through headphones. Jump between monitors and headphones to get a second perspective on the spaces you’re creating and tweak the settings to work in both listening environments.

Final Quality Control

For digital platforms, it’s recommended to leave some headroom in your audio so the track doesn’t clip when transcoded to lossy formats for digital delivery. Our plugin LEVELS has a precise true peak meter that can help you hit your YouTube target of -1.0 dBTP.

YouTube streams audio at around -13 LUFS (Loudness units full scale). If a track is uploaded louder than this, then YouTube turns it down. This is worth keeping in mind as loudness is achieved with compression, which reduces the dynamic range and punch of your music. So I recommend aiming to get your track’s loudness around -13 LUFS to maximize dynamic range whilst being at a comparable volume to other tracks on the platform. You can use LEVELS to monitor your loudness in real time within your session. Once you’re happy with your track you can use EXPOSE as a final quality control measure to ensure your track is ready for YouTube.
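YouTube’s “turn down but never up” behaviour boils down to one line. This is just a sketch; the function name is made up and the exact -13 LUFS target is an assumption based on the figures in this post.

```python
def youtube_playback_gain(track_lufs, target_lufs=-13.0):
    """Gain (dB) applied at playback: loud tracks get turned down, quiet ones are left alone."""
    return min(0.0, target_lufs - track_lufs)

print(youtube_playback_gain(-11.0))  # a hot master is turned down 2dB -> -2.0
print(youtube_playback_gain(-16.0))  # a quiet master is left untouched -> 0.0
```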

If you would like more details on getting an exact LUFS or decibels true peak measurement you can check out this post.

I hope this blog has given you some useful tips to use in your next mix. You can grab the free trials of some of the tools I used here. Thanks for reading.

Decoding The Mix #3 - Superstar DJ
Tom Frampton, 2018-05-29
https://www.masteringthemix.com/blogs/learn/decoding-the-mix-3-superstar-dj
Calvin Harris has been releasing smash hits since 2007. It’s been reported that his DJ sets fetch him over $400k, making him the highest-earning DJ on the planet. Harris is an incredibly well-rounded musician. He writes, produces and mixes his records as well as singing and playing many instruments. In this post, I’ll take a close look at his chart-topping song ‘One Kiss’ to see what we can learn from his approach to making music. We can then use these techniques to help guide our decisions to get great results in the studio.

Structure & Arrangement

‘One Kiss’ has a fairly relentless chord progression and driving energy throughout the track, as is typical with house-influenced music. Many of the parts are repetitive from one section to the next. Tension is added using filtering, such as a low-pass filter on the strings and piano building up to the final chorus.

The repetitive nature of the composition allows for a fairly complicated and fast-moving structure, introducing new musical ideas every 8 bars. Notice how the first verse chords are played by the main synth and the second verse chords are played by the piano. This fairly uncommon approach adds an interesting change of timbre whilst helping verse 2 flow effortlessly from the drop.

One Kiss can be broken down into just 10 main parts. When we’re producing and mixing, it’s often better to use fewer sounds rather than stuffing the mix with a load of layers. Too many layers can make a mix sound congested and end up confusing the listener.

Tonal Balance & Punch Analysis

I’ve compared the drop of ‘One Kiss’ to the drop of 3 other tracks in the same genre of ‘Commercial House’ to see how the tonal balance and punch compares.

Track 1: Cola - Camelphat (House)

Cola is a more typical house track, whereas ‘One Kiss’ crosses over into Pop-House. The Trinity Display in REFERENCE tells us:

‘One Kiss’ has 1.4dB less perceived loudness in the low frequencies and has the same punch in that range as ‘Cola’.

From 200Hz to 2kHz, ‘One Kiss’ has more perceived loudness. This extra presence could be because ‘Cola’ is destined for club play and ‘One Kiss’ is aimed at both club play and radio/streaming play.

The high frequencies are slightly more prominent in the mix of ‘One Kiss’.

From the low mids to the high frequencies, ‘One Kiss’ is slightly less punchy.

Track 2: Lullaby - Sigala ft. Paloma Faith (Pop-House)

Sigala’s Lullaby has had less commercial success than ‘One Kiss’ despite both artists having plenty of number ones in the past.

The low frequencies in ‘One Kiss’ are more prominent than in Lullaby. Sigala’s tracks are always extremely loud (reading -5.3 LUFS during the chorus here). To get a track this loud, you often have to reduce the low frequencies.

We can see in the middle band that ‘One Kiss’ measures -4.1SW. This shows that the stereo width of ‘One Kiss’ is much narrower than that of ‘Lullaby’. In fact, it’s narrower across the whole frequency spectrum.

The high end of both tracks is almost identical, with a difference of 0.1dB in perceived loudness. If you want a track with great clarity that isn’t too harsh, try both Lullaby and One Kiss as reference tracks.

Track 3: Solo - Clean Bandit (Pop)

Clean Bandit are more on the pop side of the genre spectrum. This comparison is interesting as even when I really zoom into the mix, adding 6 bands in REFERENCE, the tonal balance is extremely similar. The main difference is that ‘One Kiss’ is punchier in the mids. This is the more prominent kick poking through the mix of ‘One Kiss’.

Separation In The Mix

Harris has gone for a solid and centrally focussed mix for most of the elements in his mix. The intermittent introduction of wide strings, piano and brass open up the stereo spectrum to the listener.

Although Harris is known for his records getting a lot of radio play, they also get a lot of club play. Most clubs play audio in mono through their sound systems, so it’s important for club mixes to translate very well when summed to mono. This could be why Harris went for a centrally focussed mix for the majority of the song.

The infographic shows some overlapping frequencies, such as the main synth and the vocals. Harris has minimized the conflict between these parts by adding more stereo width to the main synth and ducking it out of the way of the mono vocal. You can hear it momentarily gets a few dB (decibels) quieter when the vocal comes in. This helps keep the vocal as the focal point. My preferred way of doing this is using a multi-band compressor and ducking the specific frequencies to reduce masking. An interesting way to do this could be to duck the frequencies just in the mid-channel and leave the stereo channels untouched. Below are instructions on how to set that up.
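The mid/side half of that idea can be sketched in a few lines of numpy. This is a hypothetical, broadband illustration: a real setup would duck only the masking frequencies (via a crossover or multi-band compressor) and derive the trigger from a sidechain, whereas here a boolean mask stands in for the vocal detector.

```python
import numpy as np

def duck_mid(left, right, vocal_active, duck_db=-3.0):
    """Attenuate only the mid (centre) channel while the vocal is present."""
    mid = (left + right) / 2.0          # encode L/R to mid/side
    side = (left - right) / 2.0
    gain = np.where(vocal_active, 10 ** (duck_db / 20.0), 1.0)
    mid = mid * gain                    # duck the centre only
    return mid + side, mid - side      # decode back to L/R

# Example: the "vocal" enters halfway through; the centre dips, the sides stay.
n = 1000
left, right = np.ones(n), np.full(n, 0.5)
vocal = np.zeros(n, dtype=bool)
vocal[500:] = True
new_left, new_right = duck_mid(left, right, vocal)
```

With a gain of 1 the encode/decode round trip returns the original left and right exactly, so the processing is transparent whenever the vocal is absent.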

Low Frequencies Analysis

LEVELS shows that there is a little stereo width below 300Hz. Enough to create some nice separation between the kick and the bass, but not so much that phase cancelation occurs when the track is summed to mono. Keeping your low end in the green like ‘One Kiss’ will help your track sound solid both on radio and in a club.

Verse vs Drop Width

As we saw in the mix separation infographic, Harris has gone for a very central mix. However, during the drop, he introduces the piano and brass positioned wider in the mix. This adds a lift to the drop and differentiates the sections.

Technical Analysis

YouTube

With over 40 million views in the first few weeks, it was important to make sure the YouTube release sounded as good as possible. From the screen grab below I can see that YouTube turned down the original file by 0.7dB to match its streaming target of roughly -13 LUFS (Loudness units full scale). This leads me to believe the original uploaded file was around -12.3 LUFS int and peaking at around -0.9dBTP.

Compared to a lot of other tracks submitted to YouTube, this isn’t a large reduction at all. For example ‘Lullaby - Sigala’ was reduced by 4.9dB. This leads me to believe that Harris and his team decided to submit ‘YouTube optimized’ audio with the music video.

As a result, the dynamic range is 2.7DR punchier than the promo release for DJs to play in clubs.

Spotify

The results here are a little disappointing. EXPOSE reveals that Spotify had to turn the track down by roughly 5dB to normalize it to its streaming target of around -14 LUFS int. That 5dB of headroom could have been used to introduce more punch into the record, like they did for the YouTube version.

DJ City

DJ City provides audio to DJs for promotional club play, so it’s a very relevant delivery method to measure for this club track. This version is blisteringly loud, hitting a maximum of -5.1 short-term LUFS, when audible distortion can start to creep into a mix at around -9 short-term LUFS. This version is also peaking at +1.88dBTP (decibels true peak). This might be passable on a club sound system with an excellent digital to analog converter, but it sounds quite crackly through my laptop speakers.

What Did We Learn:

The tonal balance of all 4 tracks used in the comparison were extremely similar. We can use these as reference tracks when we want to be sure our songs have a great tonal balance for commercial release.

How To Master Music To Get An Exact True Peak and LUFS Reading
Tom Frampton, 2018-05-14
https://www.masteringthemix.com/blogs/learn/how-to-master-music-to-get-an-exact-true-peak-and-lufs-reading

This blog post will show you how to master your audio to an exact true peak and LUFS measurement. LUFS and True Peak affect each other and therefore should be addressed simultaneously.

There is a relationship between LU (loudness units) and dB (decibels) that gives an easy formula to help you hit your target levels with precision. To put it simply, 1 LU = 1 dB. So if your master has a reading of -12.3 LUFS int (integrated), and your target is -14 LUFS int, then you need to reduce the gain of the master by the difference: 1.7dB (-12.3 + -1.7 = -14). I would recommend turning down a gain stage on your master chain by this amount (1.7dB in this case), such as the gain on your limiter. If your master was too quiet with a reading of -20.1 LUFS int, you would need to increase the gain by 6.1dB to hit -14 LUFS int.

Note: The LUFS:dB relationship becomes less consistent as the loudness increases. A 1dB gain increase of a track measuring -7 LUFS int might give you an increase of around 0.5 LUFS Int. At this loudness, the limiter reacts less transparently to the audio.

Your true peak target can be achieved with a similar approach. Though rather than adjusting the gain, we’re going to adjust the output on the limiter. If your target is -1.0dBTP (as is the recommendation for streaming services) but your track is peaking at -0.23dBTP, then you would need to reduce your output by 0.77dB. This will give you your -1.0dBTP target but will also reduce your integrated LUFS by 0.77, so you will need to increase the gain to compensate.
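Putting the two steps together, here is that arithmetic as a small helper. This is just a sketch of the 1 LU = 1 dB approximation described above (the function name is made up, and as noted, the relationship gets less reliable on very loud masters):

```python
def master_adjustments(measured_lufs, measured_dbtp,
                       target_lufs=-14.0, target_dbtp=-1.0):
    """Return (gain_change_db, output_change_db) for the limiter.

    The output change fixes the true peak; the gain change then compensates
    so the integrated loudness still lands on target (1 LU = 1 dB assumed).
    """
    output_change = target_dbtp - measured_dbtp
    gain_change = (target_lufs - measured_lufs) - output_change
    return gain_change, output_change

# The example from this post: -12.3 LUFS int, peaking at -0.23dBTP.
gain, output = master_adjustments(-12.3, -0.23)
print(round(output, 2))  # -0.77: reduce the limiter output by 0.77dB
print(round(gain, 2))    # -0.93: then trim the gain to land on -14 LUFS int
```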

I recommend using a metering plugin such as LEVELS to help you get your readings as close as possible to your targets, then run your bounce through EXPOSE to get the technical summary in seconds. From here you can make any adjustments you feel are needed.

It’s not always necessary to hit your targets with this kind of precision, but knowing the formula gives you a greater mastery over your music and may help you be more efficient in the studio.

Decoding The Mix #2: The ‘Hit Maker’ Engineers
Tom Frampton, 2018-04-24
https://www.masteringthemix.com/blogs/learn/hit-maker-mixing-and-mastering-engineers

In this blog, I’ll be analysing the track My My My by Troye Sivan. I chose this track as it was mixed by Serban Ghenea, who is currently without a doubt the most sought-after mixing engineer in the world. The track was mastered by Randy Merrill, one of the senior mastering engineers at the highly acclaimed ‘Sterling Sound’ studios. Serban and Randy are credited with some of the most successful releases in recent years.

By taking a really close look at how these ‘hit maker’ engineers approached the track we might uncover some ways we can improve our future productions.

Structure and Arrangement

This track is led by the vocals, bass, and kick, which are heard almost constantly throughout the track. These three elements are embellished by the secondary channels that come in and out of the track to add contrast between the sections and to keep the listener engaged. This simplicity helps keep the musical ideas simple to digest which adds to how memorable the track is.

The track’s arrangement can be broken down into 10 main elements (counting the backing vocals as one). Next time you’re sitting in front of a mix with 90 channels, ask yourself if simplifying the arrangement would give you a better result. Perfecting a single sound will often give a much better result than adding layer upon layer. Too many layers will eventually give you a bloated sound and your mix will suffer.

Separation In The Mix

During the busiest sections of the track, there are about 9 elements playing simultaneously. Let’s look at how Ghenea has placed each element in the frequency spectrum and in the stereo field.

This track is mixed super wide! There’s a build-up of instruments from 200Hz to around 5kHz, so to give each element its own space, the stereo width has been utilised.

Low Frequencies Analysis

The kick and bass drive throughout the whole track at a pretty constant level. The only times they’re not in the arrangement is during the 4 bar intro and the 8 bar bridge. Both the kick and the bass have a short and punchy character and they almost glue together as one sound. The kick is slightly louder than the bass in the balance of the mix and has a slightly higher frequency range. The kick is pure mono whereas the bass is mixed slightly wider which gives some separation to the two channels. The kick and bass have no obvious volume automation from verse to chorus.

Mid Frequencies Analysis

The majority of the channels have energy in the mids, so the mixing engineer has had to use the whole stereo spectrum to give each element its own space. As you can see from the image below, LEVELS is showing that the correlation during the chorus is right on the brink of phase issues becoming a serious problem.

Listening in mono certainly changes the mix considerably. The widest element in the mix is the Vocal Synth which is what has the most obvious phase issue. It’s soloed during the intro so I was able to zone in and uncover how much phasing was going on. The image below shows what happens when you push sounds super wide. It sounds great in the context of the whole mix, but some frequencies disappear when heard in mono.

High Frequencies Analysis

The verse has quite tame high frequencies with the focus being the vocal, kick and bass. The hi-hats in the chorus open up the sound and give a dramatic lift to the high frequencies. The vocal synth has a slowly opening low-pass filter during the last 8 bars of each verse. It then opens out during the chorus giving a massive lift to the energy of the track in the high frequencies.

Verse Width vs Chorus Width

The verse seems to be deliberately mostly mono with a hint of wide elements here and there. This puts the listener in a frame of mind and sets a reference for the width of the song. The chorus is mixed extremely wide, giving an almost shocking juxtaposition to the verse. This contrast differentiates the sections and keeps the listener gripped to the song.

Effects and Depth

I find using headphones the best way to unpick the use of reverb and spatial effects in a mix.

The musical elements in this track have an amazing sense of space yet still sound really punchy, and if you listen to the reverb in the first verse you can see why. A gated reverb has been used on the rhythmic elements. A gated reverb opens up the space then cuts out before the next transient. This lets you increase the reverb while keeping the overall mix clean and transparent.

The vocal reverb during the verse is quite subtle, giving a close sensation to the listener. This then opens up to a more prominent and longer reverb during the chorus which gives the impression of a growth in space.

Technical Analysis

Transients

Just by looking at the waveform we can see this isn’t your standard ‘loudness war’ master. The transients are very clear and the sounds haven’t been squashed by compression or limiting. The sections are very clear and distinguishable. The verse looks more sparse and slightly quieter than the chorus and the middle 8 shows a considerable change in loudness and instrumentation.

This kind of master is only possible when a lot of thought goes into carefully controlling the dynamics during the mix. You can get the same result from a balance of great tracking, purposeful automation, and transparent compression.

MFiT

My My My has the ‘Mastered for iTunes’ badge on the iTunes store, so I was surprised to find that my download was clipping. The idea with MFiT is that the mastering engineer leaves enough headroom to ensure no clipping will happen when the file is transcoded to AAC for delivery to the customer.

That being said, the master is a lot more dynamic than what you would expect to find from a loudness wars master. During the loudness wars, it was common to find tracks mastered to -6 LUFS integrated. At -10.5 LUFS integrated this track certainly hasn’t had the life compressed out of it.

The loudness range is 6.9 LU, which shows there is a considerable dynamic difference between the verses and choruses.

YouTube

This is where it gets interesting… In the ‘Stats For Nerds’ section of YouTube, you can see that the normalisation has only brought the track down by 0.1dB. If they had uploaded the same file as the MFiT bounce, YouTube would have reduced the loudness by about 2dB. That leads me to believe that they created a ‘YouTube optimised’ bounce.

We can see that the YouTube bounce has more punch (+0.7DR) and a lower loudness relative to the peak compared to the MFiT bounce. So they reduced the compression to create a more dynamic master whilst keeping the peaks much lower. Bottom line is that this is a great example of mastering for YouTube done right.

(The red at the beginning is EXPOSE catching the phase issues from the Vocal Synth).

Spotify

My My My streams at -14.1 LUFS integrated, which is almost bang on the average of -14 LUFS. The peak is a little lower on Spotify at -2.34dBTP (decibels true peak), which suggests that they may have used the same bounce for both YouTube and Spotify. Considering the track is already very dynamic, I would consider making just one ‘streaming’ bounce perfectly reasonable.

What Did We Learn:

When there is build up in the mids, use the stereo spectrum to achieve separation.

Now It's Your Turn!

Deconstructing a mix like this is a great way to make real improvements in your music production. One of the six cheat-sheets in my eBook ‘Never Get Stuck Again’ is a cheatsheet to help you decode any mix in minutes. I’ve filled in the cheat-sheet for My My My below.

Precise Audio Engineering
Tom Frampton, 2018-04-10
https://www.masteringthemix.com/blogs/learn/precise-audio-engineering
Being deliberate and precise when working with plugins will increase your value as a music producer and audio engineer.

I’m totally against any ‘set and forget’ approaches when it comes to working with audio. Seemingly insignificant or minor mixing decisions can accumulate to result in a considerable improvement to the sound.

What Do I Mean By Precise Audio Engineering?

Plugins that change the sound of your audio are versatile and powerful tools. Many plugins have a monumental number of possible settings, so it’s easy to settle for a ‘ballpark’ sound. This ‘ballpark’ approach is holding your mixes back.

If sounding as good as possible is important to you, then fine tune every plugin parameter to work as well as possible with your audio. This is precise audio engineering.

How To Set The Perfect Plugin Parameters

A great approach to mixing is to work on a soloed channel, and then make further adjustments whilst listening to the channel in the context of the whole mix. Jumping back and forth like this gives you great perspective on the changes you are making to your audio.

We can get a clearer understanding of how we’re changing our mix by zooming in even closer.

A lot of plugins allow you to solo the parameter you’re working on. I find myself reaching for these plugins more than others as I can get precise results faster. When you’re able to zoom in like this you get a better understanding of exactly how many dB the EQ boost should be, the perfect ratio for your de-esser or the perfect crossover for your multi-band compressor.

Example 1: Precise EQ

Sonnox Dynamic EQ lets you solo the band you’re working on so you can fine-tune the cut/boost, Q setting and dynamic processing.

Example 2: Precise De-Esser

UAD - Precision De-Esser allows you to solo the frequency you’re reducing so you can hear where the ‘esses’ are piercing through the mix.

Example 3: Precise Distortion

With Vertigo VSM-3, the amazing monitoring section lets you solo either the mid or side channel, as well as the 2nd, 3rd or both distortion modules.

Example 4: Precise Limiting

Fabfilter’s Pro L2 solo feature lets you listen to the peaks affected by the limiter. This second perspective will let you adjust the limiter settings to work perfectly with your audio.

Conclusion

Don’t settle for a ballpark sound. Zoom in closer to your mix and tweak the settings to perfection to help your productions reach their fullest sonic potential.

How To Produce A Powerful Drop For Your Song
Tom Frampton, 2018-03-26
https://www.masteringthemix.com/blogs/learn/how-to-produce-a-powerful-drop-for-your-song
Why doesn’t my chorus sound big enough? And how can I make my drop have more of an impact? If you’ve asked yourself these questions, then this post is for you. I’ve used this information to help artists get their drops sounding bigger than ever. Whether I’m working with stadium fillers or bedroom producers releasing their first track, the formula WORKS, and now I’m sharing it with you!

The Most Common Reason Drops Sound Weak

Over-compressing the drop can make it sound comparatively ‘smaller’ than your verse or build-up. You can often see this issue in the waveform of an audio file before you even hear it.

There are two things happening here. Firstly, a high ratio and fast attack on a compressor will suck the life out of the transients. Secondly, the compression is reducing the gain, which can literally make it sound quieter than your verse.

The Remedy: Make sure your chorus comes in noticeably louder than your verse or build up. This will give it a dramatic entrance and make it sound powerful. If you use compression on the elements within your drop, keep the ratio at or below 4:1 and make sure the attack time is long enough to keep your transients punchy.

You can use LEVELS to make sure your drop has a louder short term LUFS than your verse. 1-2LU difference is moderate, 3-6LU difference is significant, and a 7+LU difference is very obvious.
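That LU guideline can be written down as a quick check. This is just a sketch; the function name and labels are made up, and the thresholds are the ones quoted above.

```python
def drop_contrast(verse_lufs_short, drop_lufs_short):
    """Classify how obvious the verse-to-drop loudness jump will feel."""
    diff = drop_lufs_short - verse_lufs_short
    if diff >= 7:
        return "very obvious"
    if diff >= 3:
        return "significant"
    if diff >= 1:
        return "moderate"
    return "too subtle"

print(drop_contrast(-14.0, -10.0))  # a 4LU jump -> "significant"
```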

Fill The Speakers For A Huge Sound

In the majority of compositions, the chorus or drop is the most exciting, most memorable and most engaging part of the song. The reason choruses are so satisfying is because they resolve the tension from the build up and present the main melody in an energetic way.

When I’m producing or mixing a drop, I’ll try to ‘fill the speakers’ whilst keeping the mix decongested. I’ll sculpt full bodied and powerful sounds to create an encapsulating tonal balance. I’ll also utilise the whole stereo spectrum to make the drop feel as large as possible.

The art of filling the speakers and knowing where to stop is what makes the best mixing engineers so valuable. Make the sounds too full-bodied and you’ll get a muddy mix. Place too many sounds wide and you’ll get phase issues and a lopsided result. When you’re working on your mix, be super precise with your adjustments. When you’re boosting or cutting with an EQ, be purposeful with the gain. Don’t just aimlessly EQ your audio, find the PERFECT amount to cut or boost. For example, if it’s a relatively small boost, is it +2dB or +2.5dB? Which setting pushes the track towards a better final sound?

Contrast

For your drop or chorus to sound huge, your build-up or pre-chorus has to sound smaller in comparison. I would recommend making your build-up sound quieter in volume with less width to give your drop the maximum impact when it comes in. If both your build-up and your drop are loud and wide, then the drop will have no distinguishable impact.

You can also use the frequency range to create contrast. Keep your verse and build-up free of instruments with a powerful low end, then introduce full range kick and bass in the drop. This can help you create an impact even if it’s just a bar of no low end before the drop. If you strongly feel that your arrangement needs constant bass throughout the track, try automating an EQ to reduce the bass elements by just a few decibels during the verse and build-up, then automate it back to 0dB for the drop. This subtle lift will add to the impact of your chorus.

Focus

We have a frequency range of roughly 20Hz to 20kHz to play with when we’re making music, and it runs out pretty quickly! There’s a finite amount of space for you to work with before your sounds start overlapping. Try to minimise overlapping to get your audio sounding powerful. If frequencies do overlap, use stereo width to add separation.

Too many contrasting melodic ideas can also detract from the power of your drop. Make the melodies of your chorus digestible to make it as memorable as possible for your listener.

Summary

The drop should be louder than the pre-chorus and verse.

The drop should fill the speakers utilising the full frequency and stereo spectrum.

The build-up should sound quieter and less wide compared to your drop.

Minimise overlapping during the drop to get your audio sounding powerful.

]]>
https://www.masteringthemix.com/blogs/learn/19-timesavers-to-streamline-your-music-production2018-03-11T23:00:00+00:002018-03-12T13:32:17+00:0019 Timesavers to Streamline Your Music ProductionTom Frampton
If you want to make faster progress when you’re in the studio, then this post is for you. The music production business operates at an incredibly fast pace. You’ll need to implement as many of these strategies as possible to keep up and maintain high standards.

1. Start with a Template

You can hit the ground running with every session and get fast results by using a template. The last thing you want is to be spending the first 15 minutes flicking through synth sound banks or choosing a drum loop. Why not have a ‘Starting Point’ template with some great samples already loaded, maybe even a drum groove and a few of your favorite synths. This way you can open a project and get straight to writing music. You can change the sounds later to suit the notes you’ve already written.

2. Learn / Create DAW Shortcuts

Reading your DAW manual might sound like a super boring way to spend an afternoon. But trust me, this time investment will pay dividends. You’ll learn awesome new tricks and uncover how powerful your DAW really is. You’ll impress your producer buddies with all the cool new tricks you’ve learned and get FAST results when you’re making music.

3. Save Your Defaults for Your Favourite Plugins

When you open a plugin you want to get to work as quickly as possible. Overwrite the factory default with a starting point that’s right for YOU.

4. Save Channel Strips

There will be times when you spend hours creating an incredible channel strip. Get the most out of your efforts by saving the channel strip and reusing it in the future. You might need to tweak it to work with your new material, but you won’t be starting from scratch.

5. Work Backwards

Many producers work ‘chronologically’ and forget the bigger picture of the mix. Picture this scenario… Let’s say you work on the lead vocal. You get it sounding perfect by itself. You move on to the backing vocals and get them sounding perfect when soloed. Now you want to get all the vocals sitting in the mix, but the Lead and BV’s don’t fuse together well and feel foreign within the context of the mix. You now have to rework the vocals and spend more time to get a good result.

The timesaving remedy to this scenario… Start by listening to the whole mix and focus on how the vocals sit in it. You can now EQ, compress and make any other changes to get the Lead sounding great in the context of the whole mix. Once that’s done you can work on the BVs to complement how the Lead sounds in that context. This way you only have to do things once.

6. Commit to Sounds

Endless tweaking will lead to hundreds of unfinished tracks piling up. When you’re confident you’ve got a specific channel sounding great, print it. If you’ve ‘worked backwards’ as stated in the previous tip, then you know how the channel works in the context of your whole mix anyway, so you shouldn’t have any nasty surprises later. Just make sure the source material is awesome enough to commit to keeping it.

7. Reference Often

Keep your ears calibrated to how a great mix sounds. On your next mix or master, try listening to at least 6 seconds of a reference track every minute and watch how fast you achieve your results. Check out the 15-day free trial of our referencing plugin to save even more time.

8. Switch Monitors Often

Change the playback source to get another perspective on how your mix is sounding, even if you only jump between your monitors and headphones or laptop speakers. I’m constantly reaching for my monitor controller, where I can immediately switch between 3 sets of monitors with different ranges. I’ll often uncover problems with my mix when I switch the playback source – it’s an incredible eye-opener.

9. Automate with a Midi Controller

Writing in automation with a mouse or trackpad is a slow and inefficient process. I prefer to do one pass using the channel fader on my midi keyboard then use my trackpad to make adjustments. This is about 80% faster than if I simply used my trackpad.

10. Set a Timer (Pomodoro Technique)

The Pomodoro Technique is a time management hack where you break tasks down into 23-25 minute intervals with a 3-5 minute break after each session. After 4 sessions, take at least a 15-minute break.

This keeps you totally focused on the task at hand and gives you a target time in which to finish it. We humans tend to make a task take as long as the time we have to complete it. If we say we have 2 days to finish the vocals, it’ll take 2 days. If we say 25 minutes, well it might just take that long!
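The schedule above can be sketched in a few lines. `pomodoro_schedule` is a hypothetical helper for illustration, not part of any app:

```python
def pomodoro_schedule(sessions: int = 4, work: int = 25,
                      short_break: int = 5, long_break: int = 15):
    """Build the interval plan: work blocks with short breaks between,
    and one long break after the final session (durations in minutes)."""
    plan = []
    for i in range(1, sessions + 1):
        plan.append(("work", work))
        if i < sessions:
            plan.append(("short break", short_break))
    plan.append(("long break", long_break))
    return plan

for label, minutes in pomodoro_schedule():
    print(f"{label}: {minutes} min")
```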

11. Autosave / Save Often

Every second of effort counts towards your final goal. We all know the stomach-churning feeling of losing hours of work when a project crashes. I haven’t lost a project since Logic X launched its autosave feature. Make sure you’ve enabled autosave on your own DAW.

12. Use Clear File Names

13. Back Up Your Backups

Your files, projects, samples and everything you need to make music are essential to your music career. Ask yourself: ”If I spilled a coffee on my computer today, how long would it take me to be back up and running?” Accidents happen, be prepared so you can bounce back quickly when they happen.

14. Set Limits on Number of Channels

Endlessly adding layers and channels will result in a cluttered and congested mix. AND it will take more time to work on those extra channels. Keep your productions lean and focus on what your track really needs. Some of the best mixes I’ve worked on have had fewer than 30 channels in total; the most difficult have had over 80. With that in mind, consolidate channels when you can. For example, if you have 5 different claps all panned differently, bounce them down into one manageable stem.

15. Shut Out Distractions

If you’re using the Pomodoro Technique as described earlier, you can switch off your wifi and phone for 23-25 minutes, then use the break to check any social media updates or emails if you feel it’s necessary. You can also ask your partner or housemates to not disturb you for a specific amount of time, allowing you to solely focus on making music efficiently. Better to spend quality time with your partner and focused time in the studio than only being semi-present in both scenarios.

16. Re-Organise

If you find a new and improved workflow, integrate it. If your studio is getting messy, clean it. If your file management is becoming cluttered, move what you don’t need onto a hard drive. Even the best systems need to be managed and reorganized from time to time.

17. Learn From Past Time-Consuming Mistakes

Don’t get stung by the same time-consuming mistake more than once. When you run into something that’s held you back, put in a new system to resolve the issue. Invest minutes now to save hours in the long run.

18. Double Check Your Files Before You Send

How many times have you shared the wrong file? It’s so embarrassing when the person you’re collaborating with has pointed out an obvious mistake in the file. This wastes everyone’s time. My final quality control step before I send a track is to run it through EXPOSE. With EXPOSE I can instantly see if the track is too loud, not loud enough, over-compressed, clipping, phasing or unbalanced between the left and right speaker.

19. Know The Next Step To Take

When you have a roadmap in your mind of the steps needed to complete your track, you can keep moving forward towards that goal. Without a plan, you’re aimlessly shooting for an unspecified target. If you want a resource to help you with this, I put together my most valuable information into a 124 page eBook called ‘Never Get Stuck Again’. It will help you turn your musical idea into a quality finished track.

]]>
https://www.masteringthemix.com/blogs/learn/decoding-the-mix-of-the-most-streamed-song-of-20172018-01-09T04:00:00+00:002018-01-09T20:17:25+00:00Decoding The Mix Of The Most Streamed Song Of 2017Tom Frampton
In this blog I’ll be analysing the mix of the most streamed song of 2017: Shape Of You by Ed Sheeran. This track boasts over 3 billion plays on YouTube and holds the record for most weeks in the Billboard Top 10.

By closely analysing this massive hit hopefully we can come away with some useful ideas to infuse into our own productions.

Keeping The Listener Engaged

As you can see from the arrangement and structure infographic below, there are three main elements playing throughout the track: Vocals, Guitar Percussion Loop and Plucked Chords. Other elements come in and out to add contrast between the sections, but they are secondary. This focus on simplicity keeps the listener engaged with the music without too much effort on their part.

What can we learn from this? When we’re producing and mixing, it’s often better to use fewer sounds rather than stuffing the mix with a load of layers. Too many layers can make a mix sound congested and end up confusing the listener.

Separation in The Mix

During the busiest sections of the track, there are about 11 elements playing simultaneously. Let’s look at how the mixer has placed each element in the frequency spectrum and in the stereo field.

As in any mix, the frequencies of the different elements overlap, but everything more or less has its own space in the mix. Let’s unpack it further.

Low Frequencies analysis

The kick and bass only come in simultaneously during the drops and the final chorus. The kick dominates the low end and punches through at a slightly higher frequency than the bass. The bass has a rounded low end and soft harmonics. It sits subtly behind the kick in the mix whilst providing a solid foundation for the key.

The kick is short and punchy whereas the bass has a longer tail. These contrasting characteristics mean they aren’t competing for space and attention, so they glue together well.

I can hear that the kick is mono whereas the bass is a little wider. The low pass filter in the stereo spread section of LEVELS helps me confirm this…

If the low frequencies were pure mono they would be dead centre but there is a little bit of width in the bass. Not enough to cause any problems, just enough to add some separation between the kick and the bass.

Middle Frequencies analysis

This is where the bulk of the action is happening. The three driving elements of the track (Lead Vocals, Pluck Chords and Guitar Percussion) all sit in the mids. Notice the Pluck Chords and the Guitar Percussion don’t compete for the same frequencies and can therefore both be mixed centrally in the stereo field. The vocal sits on top of the mid frequencies in terms of volume.

The mixing engineer has used the stereo spectrum to push overlapping frequencies wider so they don’t compete for space. The strummed guitar, BVs and Choir are pushed wide to give space and attention to the Lead Vocal and Plucked Chords (which drive the track).

As you can see below, the instruments and vocal in the verse are kept almost entirely mono.

Whereas during the chorus the mixing engineer has really opened up the stereo field. This contrast between the sections gives a really interesting lift to the chorus.

High Frequencies analysis

Many pop tracks have a glistening top end with hi hats and FX. Shape Of You doesn’t! Above 10kHz you’ll only find natural frequencies and harmonics giving the track a warm and organic vibe.

Effects and Depth

If you listen to the track with headphones you’ll get the most obvious picture of the reverb used.

The reverb used throughout the track is quite subtle which gives a really close feel to the music. I can hear that the instrumental elements have a short reverb (around 1 second) and the vocal reverb is slightly longer (2-3 seconds). The vocal reverb is mixed in slightly quieter than the reverb of the Plucked Chords making it seem closer and larger. A great approach to get an upfront vocal.

Some producers like to use the same reverb settings for all their tracks and change the wet/dry amount for variation and separation. In this track I believe the mixer used slightly different settings for each element to create different depths. For example the Synth Atmosphere during the chorus has the most reverb and sits quite far back in the mix.

The tone of each reverb is very consistent across all channels. There aren’t any particularly bright or resonant reverbs that stick out of the mix. This gives the track a very cohesive sound.

Technical Analysis

As is so often the case with major label releases, I was disappointed with my findings regarding the technical details of this track.

MFiT

I bought this song from the MFiT section of iTunes, only to find it definitely wasn’t mastered for iTunes. The peak level was +0.39dB and +1.21dBTP (decibels true peak). So when it plays through laptop speakers or earbuds it’s distorting… MFiT is supposed to account for the fact that the uploaded WAV will get transcoded to AAC (Advanced Audio Coding), but whoever mastered this track must not have checked this correctly. I always check my masters with EXPOSE and select the MFiT preset to ensure I don’t run into these problems.
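EXPOSE does this check for you, but the idea can be sketched in code. The snippet below (an illustration, not EXPOSE’s algorithm) checks the sample peak of normalised float samples against a -1 dB ceiling. Note that a real true-peak (dBTP) measurement requires oversampling per ITU-R BS.1770; a plain sample-peak check like this can under-read inter-sample peaks:

```python
import math

def sample_peak_dbfs(samples) -> float:
    """Sample peak in dBFS (true peak would need ~4x oversampling, per ITU-R BS.1770)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def passes_ceiling(samples, ceiling_db: float = -1.0) -> bool:
    """A -1 dBTP ceiling is a common safety margin before AAC transcoding."""
    return sample_peak_dbfs(samples) <= ceiling_db

print(passes_ceiling([0.2, -0.5, 0.89]))  # peak ~= -1.01 dBFS -> True
```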

Spotify

Shape Of You is one of the louder tracks on Spotify! Usually tracks stream between -12 and -16 LUFS integrated, but Sheeran’s track streams at a slightly louder -10.7 LUFS. (It’s just speculation, but Spotify might favourably increase the volume of some major label releases. A conclusive answer would require further investigation.)

Spotify’s normalisation algorithm turned the track down by 2.13dB. They certainly could have used that headroom to create a punchier sounding master for Spotify.
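For reference, a dB change maps to a linear amplitude factor via the standard 20·log10 relationship; a quick sketch:

```python
def db_to_gain(db: float) -> float:
    """Convert a dB change to a linear amplitude factor: gain = 10^(dB/20)."""
    return 10 ** (db / 20)

# Spotify's reported -2.13 dB turndown, expressed as an amplitude factor:
print(db_to_gain(-2.13))
# And a familiar anchor point: -6.02 dB is half amplitude.
print(db_to_gain(-6.0206))
```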

YouTube

YouTube’s normalisation algorithm reduced the volume of Shape Of You by 4.28dB, making the track stream at -13.5 LUFS. Considering this track was heard over 3 billion times through YouTube, I think it would have been beneficial to use that 4dB of headroom to create a punchier and more open sound.

Bottom Line

The songwriting and arrangement was good enough to make it the most streamed song of 2017, but they could have delivered a better quality and more dynamic final master to their listeners.

What Made This Track Stand Out?

Shape Of You is a prime example of the fundamentals done right. Great songwriting, great structure and arrangement and a super solid mix.

This song wasn’t originally going to be released by Ed. Check out the video below where the songwriters discuss their approach to it. The lesson I took from the video: Be open to working with new styles and ideas. Breaking out of your production mould can make hits. Never stop exploring and learning new techniques.

]]>
https://www.masteringthemix.com/blogs/learn/colors-dimension-and-the-dynamic-of-your-mix2017-12-13T07:00:00+00:002017-12-13T07:00:00+00:00Colors, Dimension and the Dynamic of your MixTom FramptonExcerpt from one of my favourite mixing books: YOUR MIX SUCKS by Mixed by Marc Mozart.

Let’s look back on what we have tried to achieve so far in this book:

• Improving room acoustics and the listening experience to allow for an “as objective as possible” judgement of our mix

• Creating a well organized mix session in our DAW that assures we can apply our sonic ideas quickly

• Building a strong foundation for our mix by balancing the low end, and safeguarding our creativity by always retaining plenty of headroom

• Giving special attention to the lead vocals, assuring they have a round tone, cut through the mix, and have plenty of attitude

• Creating continuity throughout the mix

• Using parallel compression to add weight and impact to our most important elements in the mix

We now have a very solid foundation, but the mix still sounds one dimensional and static at this point - which is what we’re working on in this chapter to finalize our mix!

EQ-ing

Here’s something you need to get your head around. We’re mixing MUSIC. Try to see the following in every element of your mix:

• Fundamental note/tone

• Harmonics on top of that (often in form of a triad)

• A noise component added to that, mainly in the high frequencies.

The frequencies in your mix should form a smooth texture where the different instruments add up to a rich spectrum of colours. Don’t take this too literally, but think in musical terms when EQing, and keep that in mind while we go through the tools available to create this. Boost frequencies that build a triad, spread wider in the low register, and go more narrow in the higher mids. EQ-ing treble is a matter of asking: do you want highs on this instrument or not? If yes, how much until they hurt your ears?

[Piano keys to frequency chart]

The chart above can help, but always keep in mind: don’t take it too literally. Before we look at the important types of EQs and where to use them, know that you will need a combination of these to cover all your EQ needs in a mix. If you are not yet familiar with these classifications, take some time to explore their possibilities on different sources.

In the context of setting final levels in the mix, which we get to at the end of this chapter, EQs can be used in a very basic way. When you level, for example, a piano or guitar in the mix, it’s as simple as getting the upper range of the instrument, including the noise component (piano hammer noises, guitar picking noises), to sit right in the mix first, and then using a broad EQ between 200 - 400Hz to adjust the lower range of the instrument by boosting or cutting.
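If you don’t have a chart to hand, equal-temperament note frequencies are easy to compute; a quick sketch using the standard MIDI note convention (A4 = note 69 = 440 Hz):

```python
def note_freq(midi_note: int) -> float:
    """Equal-temperament frequency: each semitone is a factor of 2^(1/12)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

print(round(note_freq(69)))  # A4 -> 440
print(round(note_freq(45)))  # A2 -> 110
print(round(note_freq(60)))  # middle C -> 262
```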

Classic EQ-types

„Pultec“-type EQs

Probably the EQs I personally use the most in my mixes. The hardware versions of these are tube-based passive EQs that come in two basic types:

1. The EQP-1 “Program Equalizer” can boost and reduce bass at the same time (in steps from 20 to 100Hz), and has similar controls plus a bandwidth parameter for treble (switchable boost at 3, 4, 5, 8, 10, 12, 16kHz; attenuation/reduction at 5, 10, 20kHz).

2. The MEQ-1 “Midrange Equalizer” is an EQ for mids. It has a boost (here called “Peak”) for selectable lower mids (200, 300, 500, 700, 1000Hz), a selectable “DIP” (reduction) at 200, 300, 500, 700Hz, 1, 1.5, 2, 3, 4, 5, 7kHz, and another boost for high mids (switchable between 1.5, 2, 3, 4, 5kHz).

Pultec-type EQs are great for shaping tones in a very natural way. It is difficult to get a bad-sounding result out of them. Even when frequencies are fully boosted, the boost still has a smooth and natural character about it. The reason for this is the Pultec’s broad EQ curves: even when you boost below 100Hz, the boost reaches up to 700Hz. While these EQs have their own character, you can learn from them even if you don’t own one – simply try out broader curves (smaller Q-factors) with the EQ you have at hand.

The EQ part of the Pultec consists of “passive” electronics that reduce gain internally, and the tubes are used in a 2-stage line amplifier to make up for the gain lost in the passive EQ circuit. There are a couple of known variations of these from known manufacturers, and an almost endless number of plug-in versions. While the original hardware versions are amongst the most expensive EQs money can buy, plug-ins are of course a way to use these on pretty much every source in your mix. Note that Pultecs add a very desirable and subtle tube saturation to your signal even when the EQ is set flat.

PULTEC EQs – APPLICATION EXAMPLES

Pultec MEQ 5 – WARMTH ON A VOCAL

The Pultec MEQ 5 is usually my first EQ in the vocal chain, using a broad boost between 200 – 500 Hz, but you can simulate these (broad) curves with many stock EQs that come with your DAW. I don’t ever go lower than 200 Hz, and occasionally go up to 700Hz. The effect we want here is that the vocal gets more weight and warmth in the mix. If the vocal is well tracked, it comes with a lot of that quality in the recording and you may not need to do anything here. This is why people use Neve 1073s and various tube-based equipment (from tube mics and tube mic-pres to tube compressors) during tracking. However, a lot of modern vocal recordings sound rather thin, and a nice boost in the low mids can fix that. If you like the character you’re adding with the boost, you can even do a little too much of it and counterbalance it later in the chain, for example with a gentle compressor like a Fairchild or Summit TLA-100A. In case the vocal already sounds overly “muddy” or “boomy”, add a linear phase EQ at the beginning of the plug-in chain, then locate and remove the frequencies that cause this effect. Watch the interdependence here – once you’ve removed resonances, you have more leeway to use that broad Pultec boost again.

Pultec EQP-1a – FINAL EQ ON A VOCAL CHAIN

A Pultec EQP-1a as a final EQ can round off treble and bass. I like to boost at 20Hz and attenuate at 20kHz. Your vocals will sound more analogue when you roll off the top end – I always do that at least slightly, sometimes a lot. Both the boost at 20Hz and the attenuation at 20kHz should not affect the essence of the tone you have created. The boost adds a little weight, and the cut removes top-end energy that only hurts at loud volumes.

Pultec MEQ 5 – WARMTH ON THE STEREO BUS

Similar to the vocal chain, the MEQ 5 is boosting when “warmth” is needed. Test if your mix bus lacks anything between 200Hz and 700Hz by switching through the frequencies. Don’t boost anything just because you can. Try the same with the upper band as well – a little boost between 1.5k and 7k can add some energy. On all accounts, I’m talking about a +2 or +3dB boost here at best – which is still a very subtle amount on the Pultec.

Pultec EQP-1a – FINAL TONE CONTROL ON THE STEREO BUS

Again the Pultec EQP-1a (never confuse it with the MEQ used above) does a subtle boost here, usually 2dB at 20Hz, and I attenuate treble by 2dB at 20kHz. Another very “esoteric” setting – the Pultecs on my mix bus are mainly used to add an analogue vibe.

Classic console EQs (SSL, Neve, API)

These are the EQs found on the most popular large format recording consoles from the 1970s until today. You don’t need to own a recording console, as all of these are available as hardware from the original manufacturers, for example in the popular 500-series format. Console EQs were designed to shape all kinds of signals. They usually have a shelf EQ for the lows and highs, and 1 or 2 semi- or fully parametric bands for the mids, which can often reach as high or low as the shelf EQs. Today, most people learn about and use them in the form of plug-ins, some of them developed with SSL, Neve or API. I might be simplifying a bit here, but you mainly use these when you want to boost or shape a sound more narrowly, or more aggressively, than can be achieved with the Pultec-type EQs.

CLASSIC CONSOLE EQs – APPLICATION EXAMPLES

SSL EQ FOR PRESENCE IN A LEAD VOCAL

In a dense mix, we need to create frequencies that make the vocal cut through the rest of the instruments. We can go to extremes here, but before you start playing with the mid boost, set up a compressor that follows it right away – it’s needed to tame the mid boosts, as they can get very harsh. Often, little or no boosting in the mids is needed in less populated parts of the song, but when the vocals are up against a wall of sound, you will need a musically composed texture of “cut-through” frequencies there. This is more complicated to get right than creating warmth.

Start with an SSL-type EQ and boost the high shelf at 8k by +10dB, then pull back to 0dB and find a great setting somewhere in the middle. Try switching between BELL and SHELF characteristics (BELL will just boost around the set frequency, while SHELF also includes all frequencies above). If 8k is boosting sibilance too much, go a tiny bit lower. Continue by using the HMF band to boost at 4k, and the LMF to boost at 2k. Move these around until you find a good balance – but keep in mind, not boosting anything is always an option. The goal is to create a cluster of mid boosts that appears as one colourful and musical texture between 1 and 8k.

Chris Lord-Alge is a master of this technique, and you can learn a lot from him by studying the Chris Lord-Alge presets for the Waves SSL Channel – check specifically the “Rock Vocals” preset. You will probably be able to achieve good results with the stock EQs of your DAW, but there is a reason why SSL EQs are famous for their musicality in the mids. API and Neve work as well. Not a job for a Pultec.

Linear Phase EQs

These are digital EQs, first introduced as super expensive digital outboard boxes for mastering engineers, who have used them for many years. Like everything expensive and digital, they are now available in plug-in form – Logic Pro, for example, ships with a very good linear phase EQ. There is a ton of technical info on linear phase EQs on the web, none of which will help you improve your mix. One thing they all have in common is adding significant latency to your signal that needs to be compensated for by your DAW. This is a problem when using them on live instruments, but not in mixing. As shown in the chapter on parallel compression, just make sure your plug-in delay compensation is switched on across all types of audio tracks, and you’ll be fine using linear phase EQs. The reason for the added latency is that instead of the “post-ringing” we see in traditional EQs, a linear phase EQ adds “pre-ringing”, which in turn keeps the phase response linear. All you need to know is that the linear phase behaviour makes these sound more neutral and less drastic. They don’t add harmonics or resonances – their effect is totally isolated to the frequency range you have selected. They can be used for boosting and attenuating, both broad and narrow.

LINEAR PHASE EQs – APPLICATION EXAMPLES

LINEAR PHASE EQs – USING NOTCH FILTERS TO SURGICALLY REMOVE “STUFF” & BOOSTING SPECIFIC FREQUENCIES WITHOUT AFFECTING ANYTHING ELSE

Linear phase EQs are unbeaten for “surgical” operations on your audio material. If you need to remove a specific frequency, for example a room resonance in a live recording, you can set a very high Q-factor and create a notch filter at that frequency that will not affect anything else. All other EQs listed here operate much broader, even when set to a high Q (note: high Q-factor = narrow EQ = notch filter; low Q-factor = broader EQ curves). This can also be used to discreetly boost specific frequencies – very useful when targeting the exact fundamental root note or 1st harmonic of a kick drum. Again, set a very high Q, and a linear phase EQ will boost just that narrow band. Analogue EQs are known to create harmonics above the boosted frequency, and will also start to self-resonate at high gain – which is what we sometimes want, but not always.

LINEAR PHASE EQs – FINAL TONE CONTROL ON THE MIX BUS

I also use a linear phase EQ as my final control for the overall frequency curve of the mix. I usually add a very broad and subtle boost on the bass, add or remove mids broadly by a maximum of +/- 1dB, and check if there’s room for a bit more “sparkle” around 12k. Again, I’ve done all the musical EQing before this, so here I want an EQ that does not create any harmonics. After all, it’s the final stage of my stereo bus.

Filters

To list filters here is somewhat redundant, as filters come packaged with most EQs, except the Pultec type. You use them to remove frequencies below a certain point (HPF = High Pass Filter = high frequencies are “allowed” to pass) or above it (LPF = Low Pass Filter = low frequencies are passing). The most popular application is an HPF on vocals, to remove low “rumbling”, commonly frequencies below 60 - 120 Hz.
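To make the filter idea concrete, here is a minimal first-order RC high-pass in code – a 6 dB/octave sketch for illustration only; the filters in your EQ will typically be steeper (12-24 dB/octave):

```python
import math

def high_pass(samples, cutoff_hz: float, sample_rate: float = 44100.0):
    """First-order RC high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1])."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        prev_y = a * (prev_y + x - prev_x)
        prev_x = x
        out.append(prev_y)
    return out

# DC (0 Hz) is removed entirely: a constant input decays towards zero.
filtered = high_pass([1.0] * 1000, cutoff_hz=70.0)
print(abs(filtered[-1]) < 0.01)  # True
```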

FILTERS – APPLICATION EXAMPLES

FILTERS – REMOVING LOW-END RUMBLE ON A VOCAL RECORDING

This is of course a widely known technique – many people use a high pass filter (HPF) set at around 70Hz in their vocal tracking chain, to remove low-end rumble that is caused by the environment but doesn’t contain any frequencies from the recorded source.

FILTERS – FOCUSSING A SOUND

You can use a combination of HPF and LPF to focus any sound on a specific frequency range. Sounds tend to “sit” better in the mix when you limit their range. Just as an example, try reducing anything above 10k on a synth bass. When solo’d it might sound like you’re taking something away from the sound, but in the context of an entire mix there are other instruments that need that space above 10k. The same goes for guitars, pianos, synth stabs, etc. – by limiting its frequency range, an instrument is easier to identify in the mix, which in turn contributes to the overall three-dimensionality of your mix. On many, if not most, of my DAW channels, I’m using ALL of these EQs at different stages of the plug-in chain.

A different look at compressors

We all have a basic idea of what a compressor does and how to use it, right? I’ve googled “what does a compressor do?” and the “top” results are pretty much all similar, but still wrong. Something along these lines: “Compression controls the dynamics of a sound; it raises low volumes and lowers high volumes.”

This sub-chapter is about compressors as a tool to shape the tone of a signal by adding harmonics. There might still be some level correction involved, but as pointed out in Chapter 7, correcting drops or peaks in level at the source is preferable to using a compressor for that. Personally, I think of different models of compressors in terms of “how they feel”. The choice becomes intuitive, as a compressor imparts a distinct characteristic on a sound and pushes it in a sonic direction. It took me years of practise to develop that feel for certain types of gear. It’s more difficult to develop when you use only plug-ins, but you can still get similar results with both hardware and plug-ins. The original hardware counterparts differ in that they show a lot more colour and distortion when you drive them to extremes, and those extremes helped me learn their characteristics. Since that won’t help you – unless you have access to a studio with a large analogue outboard collection – this post takes an analytic look at popular compressor plug-ins and their characteristics.

TEST SETUP & PROCEDURE

Let's run some popular compressor plug-ins through a test setup and procedure, then look at what the results tell us! The Test Oscillator in Logic Pro X feeds a compressor with a test tone:

• the test tone is a sine wave (as you know, a pure sine wave has no added harmonics)
• we cycle through 55 Hz, 110 Hz, 220 Hz and 440 Hz tones
• then a sweep from 20 Hz to 20,000 Hz
• ending the cycle with a 100 Hz tone

We cycle through this 3 times, with rising levels:

1st Cycle
• Oscillator hits the compressor with −18 dB of level
• Compressor threshold is set JUST BEFORE compression, so the compressor does NOT compress (unity gain)

2nd Cycle
• Oscillator hits the compressor with −12 dB of level (6 dB more than on the previous cycle)
• Compressor settings stay the same, but of course compression now kicks in!

3rd Cycle
• Oscillator hits the compressor with −2 dB of level (another 10 dB on top of the previous cycle)
• Compressor settings remain the same, but now we're hitting compression quite hard!

The upper track you can see in the videos is the automation curve for the Test Oscillator's frequencies and levels; the lower track is a huge analyzer after the output of the compressor (using Logic Pro X's Channel EQ), which shows the frequency spectrum in realtime. BTW: −18 dB in your software is a GREAT average level for your recordings and signals in ALL situations. It assures clean and pristine sound and compatibility with all plug-ins.

Summary: on the first cycle, the compressor doesn't actually change the level of the signal; on the 2nd cycle there is some compression; and on the 3rd cycle, a lot. Every cycle ends with a 100 Hz tone – that makes it easy to read the added harmonics on the analyzer:

2nd harmonic = 200 Hz
3rd harmonic = 300 Hz
4th harmonic = 400 Hz
5th harmonic = 500 Hz
nth harmonic = 100 Hz × n

Just for reference, here's what the test procedure looks like with NO COMPRESSOR inserted in the signal path.
As you can see, the analyzer just shows the basic sine tones, with no added harmonics. Music theory and physics call this the 1st "harmonic" – but don't be confused: that is simply the term for the original frequency of the sine tone.
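You can reproduce this harmonic read-out numerically. In the sketch below, `tanh` is a generic stand-in for saturation (not a model of any compressor named in this chapter), and the single-bin DFT plays the role of the analyzer – it reads the level at 100 Hz × n, exactly as described above:

```python
import math

def dft_bin_mag(signal, freq_hz, sample_rate):
    # magnitude at one frequency (single-bin DFT), normalized so a
    # full-scale sine at freq_hz reads as 1.0
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / sample_rate) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / sample_rate) for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

sr = 8000
tone = [math.sin(2 * math.pi * 100 * i / sr) for i in range(sr)]  # 1 s of 100 Hz sine
driven = [math.tanh(3.0 * s) for s in tone]  # generic symmetric saturation, hard driven

# the clean sine has energy only at the 1st harmonic (100 Hz); the driven copy
# grows extra energy at 300 Hz, 500 Hz, ... (a symmetric clipper like tanh
# adds only odd harmonics - tube circuits typically add even ones as well)
for h in (1, 2, 3, 5):
    print(h, round(dft_bin_mag(tone, 100 * h, sr), 4), round(dft_bin_mag(driven, 100 * h, sr), 4))
```

Run it and you'll see the "no compressor" column stay clean while the driven column sprouts harmonics – the numeric version of what the analyzer screenshots show.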

FAIRCHILD 670 COMPRESSOR (1959) – THE ROYAL HARMONICS ORCHESTRA

To give you a proper contrast – here's what this looks like with a plug-in clone of the legendary Fairchild 670, which, as some of you might know, is the most expensive and sought-after vintage tube compressor on the market. I bet the designers of this plug-in stared at a spectrum like this forever, and did endless coding and testing until the plug-in closely matched the original hardware. You can already see some harmonics even when the compressor isn't compressing, but they really kick in the more you compress. Note: the lower the sine tone, the more harmonics show up – I can count 13 added harmonics in the 3rd cycle on top of the 55 Hz sine tone. By the time the oscillator reaches 2000 Hz, the Fairchild doesn't add harmonics on top anymore, at least none visible in the spectrum. If you look at the rich harmonics added by the Fairchild, you start to understand how it gives a dull bass sound or a subsonic 808 kick a richer frequency spectrum. This is very useful, as it helps low-end and subsonic sounds translate better on smaller systems (think laptop, tablet, smartphone, kitchen radio). At the same time, a Fairchild might not be the ideal compressor for purely controlling volume, because the more you compress, the more it changes the sound of the source. That is not typically what you need if your goal is to level something that is dynamically uneven. On the contrary, you want to make sure that your source is already under control dynamically BEFORE you even hit the Fairchild. There are other ways to achieve a consistent loudness in a performance: look at the waveforms of your recording and simply bring the quieter parts up in level, reduce loud sections, automate – and a lot of the time, a consistent level is what a great performer brings to the table! If your signal is well-levelled and even, you can alter its tone by how hard you drive it into compression.
I'm almost ready to go into detail on parallel compression at this point – imagine a setup where you bring the same low 808 kick into two mixer channels. The first channel is kept unprocessed; the second channel is pushed hard into a tube compressor like the Fairchild. The added tube channel will make the 808 come through on smaller systems and adds a nice texture to a sound that's pretty close to a sine tone. On the parallel channel you can even cut off the low end and just keep the added harmonics (cut off POST compression, of course) – more about that in Part 2 of this article. Essentially, what the added harmonics do is add frequencies to the original sound that weren't there before.
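The two-channel idea above can be sketched in a few lines. This is only a sketch of the routing math – `tanh` stands in for the hard-driven tube channel (it is NOT a Fairchild model), and the function names and levels are my own assumptions:

```python
import math

def saturate(signal, drive=4.0):
    # stand-in for the hard-driven tube channel: a symmetric waveshaper
    # that adds harmonics, normalized back to a peak of 1.0
    return [math.tanh(drive * s) / math.tanh(drive) for s in signal]

def parallel_blend(dry, wet, wet_level=0.25):
    # channel 1: the untouched 808; channel 2: the driven copy,
    # tucked in underneath the dry signal
    return [d + wet_level * w for d, w in zip(dry, wet)]

sr = 8000
kick = [math.sin(2 * math.pi * 55 * i / sr) for i in range(sr)]  # 55 Hz "808"-style sine
blend = parallel_blend(kick, saturate(kick))
```

The blend keeps the clean sub intact while the saturated copy supplies the upper harmonics that small speakers can actually reproduce; in a real session you would also high-pass the wet channel post-saturation, as described above.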

PRE COMPRESSION:

Surgical EQ removes unwanted frequencies

It's very important to put an EQ BEFORE the compressor. Use this EQ to remove unwanted frequencies. A typical example would be a high pass filter that removes rumbling impact noise on vocal recordings – imagine how the compressor would add harmonics to a rumbling noise at 30 Hz and really bring it out. You don't want that. The same goes for unpleasant room resonances – find them using a narrow EQ boost, then set a small notch to remove them. This so-called "surgical EQing" works best with linear phase EQs – many plug-in manufacturers make them, and they don't add colouring resonances. I like the one that comes with Logic Pro X a lot.
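The narrow notch described above can be sketched with the well-known RBJ "Audio EQ Cookbook" notch biquad. Note this is a minimum-phase filter, not the linear-phase EQ the text recommends, and the helper name and Q value are my own – it's just enough to show the idea:

```python
import math

def notch_biquad(signal, freq_hz, sample_rate, q=30.0):
    # RBJ-cookbook notch: removes a narrow band around freq_hz
    # (high q = narrow notch, i.e. "surgical")
    w0 = 2.0 * math.pi * freq_hz / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = 1.0, -2.0 * math.cos(w0), 1.0
    a0, a1, a2 = 1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in signal:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out
```

Placed before the compressor, a notch like this stops a 440 Hz room resonance from being enriched with harmonics, while content away from the notch passes through essentially untouched.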

POST COMPRESSION:

EQ for color and tonal balance

Going back to the example of a low 808 sub-kick, which is a sound very similar to the sine tone used in our test: there would be no point in EQing a pure sine tone, right? You can't adjust frequencies that are not there – in contrast to a tube compressor, an EQ does NOT generate any frequencies; it can only adjust the tonal balance of the given frequency content. With that, we have once again turned common audio knowledge upside down: compressors can add colour to any frequency, while EQs are static – all they do is adjust the tonal balance. That rule of course doesn't totally hold up once we look at a few more types of compressors. What we're interpreting in this article focusses on frequencies and harmonics, which is just one aspect of compressors. The other is a compressor's actual ability to level, limit or "grab" a signal – attributes that all refer to the volume of the sound.

UNIVERSAL AUDIO/UREI LA-3A (1969)

The Waves CLA-3A is a plug-in clone of the original Universal Audio LA-3A Compressor/Limiter. In contrast to the Fairchild, it's a lot better suited to levelling a signal – the LA-3A adds only one harmonic (the 3rd). The Fairchild and the LA-3A can co-exist in a signal chain: use the LA-3A to even out levels, then hit the Fairchild. The LA-3A is typically used as a leveller for bass, guitars and even vocals. It's less suitable for percussive sounds – it doesn't react fast enough to control a drum sound.

TELETRONIX LA-2A (1965)

The Waves CLA-2A is a clone of the Teletronix LA-2A. The design is a few years older than that of the LA-3A. It adds more harmonics than the LA-3A, but still a lot less than the Fairchild. Typically used to control bass, backing vocals or laid-back lead vocals. A fairly slow and laid-back tube compressor.

UNIVERSAL AUDIO 1176 REV. A BLUE STRIPE (1967)

The Waves CLA-76 is a clone of the Universal Audio/UREI 1176. Various revisions were made, the "blue stripe" version being the very first. Nearly every plug-in manufacturer offers a clone of the 1176. I like the ones made by Waves; they got their name because Waves developed them with Mr. CLA aka Chris Lord-Alge. The 1176 displays extremely rich harmonics. In comparison to the Fairchild 670, it sounds a lot more aggressive and levels super fast. That makes the 1176 very flexible – it can be used on almost any source. Like the Fairchild, the 1176 is a true studio classic, and it would be worth writing a dedicated chapter about it. If you have a bunch of those in your rack (or a great plug-in clone in your collection), you could mix an entire project exclusively with them. One thing it works very well for is making vocals aggressive and bringing them upfront. You can drive it hard into a lot of compression, and as a result you will see a lot of harmonics. It would not be the only compressor in the chain – I'm usually running another compressor for levelling before the 1176.

SOLID STATE LOGIC SSL E/G-SERIES BUS COMPRESSOR (1977)

This is of course one of the most famous compressors ever built. Waves teamed up with SSL to create one of the first true emulations of original hardware, and this plug-in (part of the Waves SSL bundle) is now a classic, just like the original SSL 4000 E- and G-Series consoles. It does – of course – a great job levelling a signal, and adds more harmonics the harder you hit it. The trick with the SSL Bus Compressor is to hit compression with the peaks of your finished mix, e.g. the kick drum. What happens is that the SSL "grabs" and reduces those peaks in a very clever way, while adding harmonics to them. The SSL Bus Compressor controls the dynamics and makes the bits it compresses punchier by enriching them with harmonics (almost as if compensating for the lost level). This effect has widely been described as mix-bus "glue", and it's the reason why everybody loves SSL Bus Compressors.

SOLID STATE LOGIC SSL E-SERIES CHANNEL COMPRESSOR (1977)

The SSL Channel Compressor, also part of the Waves SSL bundle, adds a healthy portion of bright and aggressive harmonics, and is capable of controlling and "grabbing" percussive signals like no other compressor. Widely used by famous mix engineers on kick, snare and any type of percussion – the SSL gives drum sounds a prominent place in the mix, making drums punchy and able to cut through.

SUMMIT AUDIO TLA-100A (1984)

The Summit TLA-100A is a very subtle tube compressor. The original analogue hardware has been used by engineer Al Schmitt as a tracking compressor on many of his Grammy-winning projects – for example on Diana Krall's vocals – to catch some peaks with light compression during vocal recording. The Summit adds some harmonics when you drive it, and works well on a wide selection of sources, with three easy settings each for attack and release time. A very subtle leveller for tracking and mixing.

LOGIC PRO X COMPRESSOR (PLATINUM MODE, 1996)

This is the original compressor plug-in that came with the first version of Logic, so the design goes back to the mid-90s. It's actually the ONLY plug-in in our test that does not colour the signal AT ALL. Contrary to all the other compressors tested, this is a compressor suitable for applications where you want to iron out the dynamics of a track without adding harmonics. Later versions of this plug-in (like the current one in Logic Pro X) added a few more selectable modes, and when you switch from "Platinum" mode to any of the others (like "VCA"), the plug-in starts adding harmonics, mimicking some of the compressors I just introduced.

u-he PRESSWERK (2015)

If you don't have a huge collection of outboard and/or plug-in compressors, you can start with just one that covers a broad range of applications. I personally very much like u-he PRESSWERK, which is currently the only compressor plug-in on the market where the amount, dynamics and shape of the added harmonics can be set independently from the amount of compression. If you look at the block diagram, you will see that PRESSWERK unites the features and topologies found in the classic compressors of the 1950s to 1970s, while giving the user full control over these features, with clear labels to access them in detail. I can see PRESSWERK becoming popular in audio schools, as there's no other plug-in that makes it this easy to teach someone the details of ALL the vintage compressors in one plug-in. All of the emulations I've tested earlier in this chapter, except one, generated harmonic distortion, and it would usually increase in level the more gain reduction (= compression) you apply. With PRESSWERK, you have total control over the amount of harmonics that are added, and the DYNAMICS control can seamlessly adjust between "depending on the volume of the source material" and "harmonics always added".

The basic harmonics added look somewhat similar to those of a classic UREI 1176 blue stripe, but the various controls of PRESSWERK's saturation section let you customize them extensively – for example, apply them either PRE or POST compression, statically or following the dynamics of the material – and you can also tilt the spectrum of the harmonics slightly using the COLOR control, which is essentially a "tilt"-type filter/EQ. When you watch the clip, pay attention to me playing with the saturation section of PRESSWERK starting at around 1:11. You can download a demo of PRESSWERK HERE. Saturation is of course something that occurs – more or less – in all analogue signal processors, not only compressors. It's worth mentioning a few more "stand-alone" boxes/plug-ins that can be used to generate harmonics via saturation.

Tape Saturation

Reel-to-reel analogue tape machines – remember those? A magnetic record head magnetizes a magnetic tape that can run at different speeds. Analogue tape, when pushed with extreme levels, doesn't distort the way a digital converter does. The further you push into the "red zone" of a tape machine, the more you get an effect called "tape saturation". The sound of tape saturation varies, of course, with the type of tape recorder, the width of the tape, and the tape speed. The classic analogue studio standards by STUDER or AMPEX are still loved for their sound, but are rarely used these days. Plug-in emulations have come a long way and do a really good job of getting this type of sound in your DAW.

Mic Pres

Microphone preamplifiers, or "mic pres" for short, come in flavours similar to compressors (minus the gain reduction circuit) – they can be built with tubes, transformers, transistors or modern op-amps, or a mixture of all of these technologies. Some of them can also be used at line level, and they make a great colour in your palette – especially if you do vocal recording as well. The various Neve models come to mind; they are amongst the most legendary studio classics. While the original versions are extremely costly, both AMS Neve and various other companies build stripped-down alternatives such as 500-series modules. The classic Neve 1073 has also been widely modelled as a plug-in.

Magic Chains

As you expand your knowledge about processing and plug-in chains, I recommend that whenever you have found a combination of settings that works in a mix, you save the entire channel strip as a plug-in chain in your DAW. Make sure the name you use reminds you of what the application for this preset could be. Over time you will develop a library you can always go back to and further refine.

Reverbs and Delays

Most producers have a very basic set of reverb sends and returns in their DAW sessions: a standard reverb (mostly plate or concert hall), one delay for throws, and maybe a small room simulation with early reflections. All the tracks in the arrangement access the same reverb and delay, if needed. That's totally fine for arranging and producing, but forget that concept for your mixing session. If you want to create the impression of a three-dimensional mix, with lots of space and separation, here are some things you need to think about:

• you need a huge palette of different reverbs and delays, plus some modulation FX
• the reverb you end up with will be a mixture of several reverbs that differ in many ways, from colour and density to reverb time
• the most important tracks in your mix access completely different reverb and delay sends and returns
• you need banks of different reverbs ready in your DAW template, specialized for different sources like drum room, snare, toms, lead vocals, pads, lead synth, orchestral instruments
• group the reverb returns together with the instruments they belong to – if you have a drum subgroup or VCA group, the reverb returns for the drums belong to that, and exclusively to that
• you don't need to create a new send for every reverb – you can send, for example, to 20 different snare reverbs from "Send 5", and then just unmute and mute them at the aux return, one at a time, until you find what you're after; this is a very fast and intuitive process, and gives you the option to use one or several reverbs at the same time and balance their levels at the reverb returns
• for subgroups or VCA groups to really work, FX belong mostly exclusively to their sources – if you mute the drum group, you mute all drum reverbs with it, but of course NOT the vocal reverb
• it's essential to stay very organized in that respect

All of the above probably sounds like my mixes are drowning in reverb.
Don't be fooled – some of these are very subtle. Also, don't be afraid of long reverb times, but keep them super low in level. Many of these you only hear when you switch them off. Here are some examples of the types of reverbs that are useful to have at hand, all at the same time:

• most reverbs deliver sounds in categories like plate, chamber, ambience, room, concert hall, church, etc.
• most classic reverbs (e.g. Lexicon 224, 480, EMT, AMS RMX) are great at less defined, "cloudy" reverbs, regardless of reverb time
• modern reverb plug-ins and hardware reverbs (the Bricasti M7 is my favourite) are great at super-realistic room simulation
• plug-ins using IRs (impulse responses) can do both – IRs are basically just samples or "fingerprints" of the original reverbs or even real rooms

What I personally have done over the years is simply try a lot of different reverbs and especially IRs, and every time I heard something I liked, I saved the channel strip settings – reverb, EQ and some compression – to a channel strip preset in Logic. Doing this will allow you to keep developing a go-to library; after a while there will be certain reverb chains you can't live without, and these make it into your DAW mix template.

Delays also come in many different flavours:

• the typical 1/2, 1/4 or 1/8-note delay "throw", often used on the last word or syllable of a vocal line
• short slap-back delay
• complex delays with polyrhythmic patterns, or even unpredictable "weird" stuff happening in the feedback loop

Reverbs and delays are to your mix what a shadow is in a photograph: you don't need them as sharp, shiny and dynamic as your main subject – their purpose is to give you a sense of subliminal depth. Most of my delay returns are sending to reverbs, as I don't like them to be a totally dry "sampled" copy of my original signal. Compressing them helps that a lot.
I also EQ them in contrast to the direct signal for that reason – if you've worked hard to make your lead vocal upfront, direct and "in your face", you don't want your reverb to share that same character. BTW, this is one reason why older "vintage" reverbs/delays are still popular – they have a low-res, "grainy" feel about them (for example the Lexicon 224 or AMS DMX1580). Compressing reverbs also makes them more controllable – you can get away with less reverb in the mix if you compress the reverbs you are using.
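As a side note on the note-value delay "throws" mentioned above: the delay time is simple arithmetic from the song tempo. A quick sketch (the helper name is my own):

```python
def delay_ms(bpm, note_fraction):
    # one beat (a 1/4 note) lasts 60000 / bpm milliseconds;
    # scale that to the chosen note value (1/8 = half a beat, 1/2 = two beats)
    beat_ms = 60000.0 / bpm
    return beat_ms * 4.0 * note_fraction

# at 120 BPM: a 1/4-note throw is 500 ms, a 1/8-note 250 ms, a 1/2-note 1000 ms
print(delay_ms(120, 1 / 4), delay_ms(120, 1 / 8), delay_ms(120, 1 / 2))
```

Most DAW delays can sync to the host tempo anyway, but knowing the math helps when you dial in hardware or deliberately nudge a throw off the grid.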

Creating a three-dimensional sound

There are obviously a lot of small details involved, but for a start, get your head around the following concept: important stuff goes to the center – kick, bass, lead vocals. Try panning everything else hard left or right. Sounds crazy, I know. Try panning a guitar hard left, send it to a small room, and pan the room hard right. Now bring the reverb a tiny bit more toward the center until it feels well balanced. For the second guitar you might have: pan the dry one hard right, send it to another small room (or delay or chorus), and pan the effect hard left. You get the idea… this works for guitars, keyboards, percussion, noise FX, etc. – I use small rooms/chambers/spaces from the Bricasti M7 reverb for that. There are impulse responses of the Bricasti out there – for free, and officially authorized by the people who designed that reverb.
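The hard-left/hard-right trick above relies on your panner keeping loudness constant across the stereo field. Here's a minimal sketch of the standard equal-power (sine/cosine) pan law – the function name is my own, and real DAW panners offer several selectable laws:

```python
import math

def pan(sample, position):
    # position: -1.0 = hard left, 0.0 = center, +1.0 = hard right
    # equal-power law: left^2 + right^2 stays constant, so perceived
    # loudness doesn't dip as you move the source across the field
    angle = (position + 1.0) * math.pi / 4.0  # maps -1..+1 to 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

# the trick from above: dry guitar hard left, its room reverb mostly right
dry_l, dry_r = pan(1.0, -1.0)   # all level in the left channel
rev_l, rev_r = pan(1.0, 0.8)    # mostly right, nudged toward center
```

Because the squared channel gains always sum to one, you can slide the reverb toward the center "until it feels well balanced" without the overall level pumping up or down.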

Modulation FX

Speaking of rhythmic gating: the Tremolo plug-in in Logic Pro (it modulates only volume) is a secret weapon. I use it all the time on lifeless or over-compressed tracks – it brings some subtle movement into them, from pads to static synth basses to heavy guitar chords to backing vocals. You can even make reverbs groove subtly with the beat.
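A tremolo is conceptually tiny: multiply the signal by a slow LFO. This sketch is not Logic's plug-in, just the underlying idea with hypothetical parameter names:

```python
import math

def tremolo(signal, rate_hz, depth, sample_rate=44100):
    # gain moves between (1 - depth) and 1.0, following a sine LFO;
    # depth 0.0 = bypass, 1.0 = full chop - subtle values bring static
    # pads or over-compressed tracks back to life
    out = []
    for i, s in enumerate(signal):
        lfo = 0.5 * (1.0 + math.sin(2.0 * math.pi * rate_hz * i / sample_rate))
        out.append(s * (1.0 - depth + depth * lfo))
    return out
```

Sync `rate_hz` to a note value of the song tempo (e.g. an 1/8-note at 120 BPM is 4 Hz) and even a reverb return will start grooving with the beat.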

Subgroups, FX Busses and Routing

As the number of different reverbs (delays, etc.) you are using grows, you need to keep the routing strictly organized. Using the same reverb on several sources quickly becomes messy: you might need to EQ or compress your main reverb in a certain way to improve its sound on the vocals, but as you have used that same reverb on other instruments, you are suddenly affecting the balance and sound of your entire mix. I know it sounds counter-intuitive, but you're better off separating the reverbs you use for the different instrument groups – including the routing and grouping. On my console, I have a pair of channels dedicated to vocal reverbs and another pair for drum reverbs, and they are assigned to their respective VCA groups. On the drums, use pre-fader sends, use the drum FX exclusively for drums, and route their returns to the drum bus as well, and/or put them on the same VCA group. The same goes for parallel compression busses: the sends to them are pre-fader, not following automation, but the returns follow the automation of the source – I do that via the drum VCA group – and you get the added bonus that you can treat the FX returns for each group of instruments separately. We want total control in mixing.

Stereo Bus Magic?

A very popular question: "What's on your mix bus? Are you using PRODUCT X or PRODUCT Z like I do?" "All-in-one" mix bus processors have promised to be the one-stop solution ever since the first TC Finalizer came out in the 90s. You all know the various modern plug-in equivalents, and to a certain degree, they work really well – a quick preset can make a song demo more presentable. These processors typically involve multiband compression, stereo processing for more width, psychoacoustic loudness treatment, and of course EQs, compressors, limiters, etc. The disappointing truth is that you really need to make your mix happen before the signal gets to the stereo bus. If the mix is rotten at the core – and I refer you to everything we've looked into from Chapter 3 to this point – multiband compression can create more density and loudness, but it can never solve problems that were ignored in the first place. With that, let's go through my common mix bus signal chain, and allow me to add that every single processor used here is doing only extremely subtle things.

1. Loudness Meter

From the very start, embedded in my template, the first plug-in on my mix bus is a loudness meter – it monitors the level of the incoming signal. We talked about gain staging in Chapter 5, so you know what to look for. Also, at this point let me remind you of best A/Bing practice as discussed in Chapter 3: we can A/B between a treated and untreated version of our mix bus, at matched levels. This is essential when working with plug-ins on the mix bus – you might not need ANYTHING here.
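Matched-level A/Bing comes down to a tiny bit of arithmetic: scale the treated version so its measured level equals the untreated one. This sketch uses plain RMS as a stand-in for a proper loudness reading, and the helper names are my own:

```python
import math

def rms(signal):
    # root-mean-square level: a simple stand-in for a loudness reading
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def match_level(processed, reference):
    # scale the treated version so it A/Bs at the same RMS as the
    # untreated mix - otherwise "louder" reliably wins the comparison
    gain = rms(reference) / rms(processed)
    return [s * gain for s in processed]
```

If the processed bus only sounds better because it's louder, level-matching exposes that instantly – which is exactly why you might find you don't need ANYTHING on the mix bus.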

2. blank

I’m not joking. It’s a blank plug-in slot with no plug-in inserted. Leave it like that because if anything comes up, you can insert whatever you might need later on.

3. Midrange

The combination of plug-ins I use here is EXACTLY the same as on my vocal chains:

A. Pultec MEQ-5
Remember, on the vocal chain this is where a Pultec boosts when "warmth" is needed. Test whether your mix bus lacks anything between 200 Hz and 700 Hz by switching through the frequencies. Don't boost anything just because you can. Try the same with the upper band as well – sometimes a subtle boost between 1.5k and 7k adds some energy. In any case, I'm talking about a +2 or +3 dB boost here at best – which is still a very subtle amount on the Pultec.

B. Tube compressor for tone
I use a plug-in version of the classic Fairchild here, and the settings could be called "esoteric". The gain reduction needle is hardly ever moving, but when I A/B, it ALWAYS sounds better with the Fairchild in the chain. The category here is midrange, as it adds harmonics.

4. Final Tone and Dynamics Control

A. EQ
Again the Pultec, this time the EQP-1A (don't confuse it with the MEQ-5 used in step 3!), does a subtle boost here, usually 2 dB at 20 Hz, and I attenuate treble by 2 dB at 20 kHz. Another very "esoteric" setting – the Pultecs on my mix bus are mainly there to add an analogue vibe.

B. Final Dynamics
I happen to use Slate FG-X as a final bus compressor, and also use the Transient, ITP and Dynamic Perception controls to fine-tune. I am NOT using FG-X as a brickwall limiter, and my signal leaves this plug-in with the same amount of headroom it entered with. The "constant gain monitoring" button is always pressed for that reason.

C. Linear Phase EQ
This is my final control for the overall frequency curve of the mix. I usually add a very broad and subtle boost on the bass. And one more time: keep going back and forth between these building blocks for fine-tuning. Watch the gain staging!

Automation

Let's go back to a concept that was introduced in Chapter 7 – correcting and automating levels PRE and POST plug-in chain. PRE is to create a consistent, natural-sounding performance, which we have already dealt with. The automation we are now working on is the POST-plug-in-chain automation that creates the dynamics of the song. Of course, even once you start activating the fader automation in your DAW, there will be situations where you need to change the overall relative volume of a track. A simple gain plug-in before the automation fader gives you easy access to the relative level of the entire track – which is a lot easier than correcting the entire automation every time. The way this is handled differs slightly between DAWs, and console automation might have other ways to deal with relative levels. Make sure, though, to find out how to deal with relative levels and automation independently, both POST plug-in chain of course.
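The gain-trim-before-the-fader idea is just a dB-to-linear conversion applied to the whole track, leaving the automation curve untouched. A minimal sketch with hypothetical helper names:

```python
def db_to_gain(db):
    # standard amplitude convention: gain = 10^(dB / 20);
    # +6 dB roughly doubles the signal, -6 dB roughly halves it
    return 10.0 ** (db / 20.0)

def trim(track, db):
    # relative level change for the entire track, applied BEFORE the
    # automation fader - the automation data itself stays untouched
    gain = db_to_gain(db)
    return [s * gain for s in track]
```

This is exactly what a gain plug-in in the first insert slot does: one number shifts the whole track relative to the mix, instead of redrawing every automation point.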
Another issue we have already dealt with in Chapter 7, but I'll repeat it here: if one instrument has completely different levels or settings in different song parts, instead of creating automation for that, just copy the channel settings and use an extra channel for the different song parts. And… another reference to an earlier chapter: automation is best performed or programmed while listening at low levels on your small portable speaker. Automation is a great place to emphasize the natural dynamics of the song. Let's face it – on a rock song, the drummer will hit the drums harder on the last chorus. Keep in mind that this is not entirely about levels, as more intense drumming is also reflected in brighter drum tones. But still, a subtle push in volume on the final chorus can create extra excitement. This is where the final balance and automation of the mix can be compared to car racing, and you will face the same dilemma as the race car driver: you want to drive as fast and risky as possible without destroying the car. Luckily, with a few safety features in place, automating a mix is still a very enjoyable ride (and much less dangerous than car racing). These are the safety features:

• if you have "nulled" your faders, as described in Chapter 7, and maintained solid gain staging while treating the individual channels, you have a "unity gain" default position for your faders, which means there is zero chance of destroying the solid balance you have already created up to this point – you can always go back to the start. When you work your automation around the 0 dB position of your fader, you have a much broader range for subtle fader movements. 3 dB is still a fair bit of movement around the 0 dB point – try to perform a 3 dB increase when your level sits at −30 dB default! Impossible.
That goes for both physical faders and visual automation data in your DAW.
• oh yeah, you have of course saved your project under a new version number on a regular basis – so that you can always go back in case your mix gets worse
• the stereo bus treatment we've set up in this chapter will make sure that your dynamics stay within a certain range. If your bus compressor has the right setting, it will subtly keep your dynamics within the frame of good gain staging – in other words, if you push signals into the stereo bus with more level, the increased level at the channels will partly be compensated for by the ratio set on the bus compressor
• and one last time: your portable speakers at low levels will assure that you're staying in the right frame with your automation

With that – go and create excitement in your automation! Go for bold moves! Definitely start learning to automate with faders. Drawing automation with a mouse will never replace your hand on a fader while closing your eyes and listening to the song at low volume. As a final note on automation, know that you will also have to go back and forth between setting levels and EQing. Whenever there is a situation where you feel the track sits great in the mix, but the bottom is too thick or thin, you know where to find the handle for that. The levels of a track are right when you can hear even very subtle level changes. As long as you can still move the level of an instrument up and down a few dB and it makes no difference to the mix, you need to go back to the start of this chapter and use tools other than level to make the signal sit right in the mix.

Reverb Culture – Using Reverbs and Delays in the Mix

I've called this sub-chapter "Reverb Culture" because there are a million things to explore when it comes to reverbs and delays. The more you explore the subtleties of different types and colours of reverb, the more you'll want to use them in your mix. You can use reverb to improve the front-to-back dimension of your mix, place certain instruments more distinctly on a virtual stage, create contrast between the upfront/direct and the more unobtrusive elements in the mix, and much more – reverbs offer a microcosm of endless possibilities.

A little bit of reverb history

Hundreds of years ago, classical composers thought about how to set up the musicians on stage, and the traditional orchestra is arranged so that certain instruments get more room ambience because they sit further back. A good example would be brass and timpani, which are seated at the back of the orchestra, behind woodwind and strings – you wouldn't want them blasting their sound into the first row of the audience! Concert halls were designed and built for orchestral performances, and the first recording studios were modelled after concert halls. To this day, the "concert hall" setting is the most standard reverb you would add to a dry signal in a mix – it adds a natural-sounding dimension to an instrument recorded with a close mic, or to an electronic instrument that doesn't come with any natural ambience. Not every recording studio had the size of a concert hall – just look at Motown's legendary "Hitsville U.S.A." studio – so in the 1950s studios everywhere were looking for ways to implement artificial reverb and echo. The first solutions were dedicated "reverb chambers" using the send/return approach we know to this day: you send the signal that needs reverb from the mixing console to a speaker in a reverb chamber, and one or several microphones pick up the "wet" signal, which can be added to the dry signal via the (effect) return of the console. Bill Putnam first used this in 1947, and his United Recording studios had – and to this day still have – several great-sounding reverb chambers. The Hit Factory/Power Station/Avatar studio in New York (yes, it kept getting new names!) had a famous five-storey staircase which was used for reverb – which shows you that top engineers were always finding creative ways to implement room sounds in their mixes.

Got tape delay?

Analogue tape recorders were the first devices used to create echo, aka delay: the physical distance between the record and playback heads of a tape recorder lets you hear a signal from tape a fraction of a second after it was recorded. By varying the tape speed and the distance between the record and playback heads, almost any delay length could be created (sometimes two tape recorders were placed some distance apart for very long delays). By modulating the speed of the tape recorder you could modulate the delay, and a variation of this technique even created a double-tracking effect for vocals, with delays that could arrive BEFORE the dry signal. So not every tape recorder in an old recording studio was a master recorder - Abbey Road in the 60s and 70s, for example, had batteries of recorders JUST for tape delays. The short delay produced by a tape recorder's playback head at high speed was also ideal for creating a pre-delay for reverbs. A little pre-delay helps to separate the dry and reverb signals from each other, and is of course one of the important parameters to look at when setting up a reverb in your mix.
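The relationship described above is simple enough to compute: the delay time is just the head spacing divided by the tape speed. A quick sketch - the head gap and speeds below are illustrative figures, not the specs of any particular machine:

```python
def tape_delay_ms(head_gap_cm: float, tape_speed_ips: float) -> float:
    """Delay in milliseconds between the record and playback heads,
    given their spacing (cm) and the tape speed (inches per second)."""
    gap_inches = head_gap_cm / 2.54
    return gap_inches / tape_speed_ips * 1000.0

# A ~5 cm head gap at the two common studio tape speeds:
print(round(tape_delay_ms(5.0, 15.0)))  # 15 ips  -> ~131 ms
print(round(tape_delay_ms(5.0, 7.5)))   # 7.5 ips -> ~262 ms
```

Halving the tape speed doubles the delay time - exactly the varispeed behaviour those engineers exploited.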

Classic reverbs

A lot of creativity went into developing devices that would create reverb, and while some of them failed at creating a natural-sounding reverb (spring reverb!), they all left their imprint on recording history, and sometimes we are after exactly these sounds:

• spring reverb: capturing the vibrations of a metal spring via a transducer and a pickup; invented by Laurens Hammond in the 1930s and used in his famous Hammond organ
• chamber reverb: using a soundproofed room; first used by Bill Putnam in 1947, as discussed above
• plate reverb: capturing the vibrations of a large sheet of metal; made famous by German engineer Wilhelm Franz of "Elektro-Mess-Technik", aka EMT, in the 1950s

In the 1970s, the first generation of digital reverbs was introduced, starting with the EMT 250 (1976), the Lexicon 224 (1978) and the AMS RMX-16 (1981), followed by many more, and brought to the mass market with reverbs like the Yamaha REV7. This first generation was refined in later units like the Lexicon 224XL, 480L, 300, PCM-60 and PCM-70, which are now considered classic reverb units. If you're thinking about buying one of those: all of these units keep their resale value, but there are fewer than a handful of people in the world who can fix them if anything goes wrong. By the end of the 1990s, plug-in reverbs and DAWs were making a real impact on the market. Digital reverb algorithms were now implemented as plug-ins, and impulse response technology makes it possible to record a "fingerprint" of a room sound, or of any hardware reverb, and replicate it via an impulse response reverb plug-in. The most famous one is Altiverb, but these days almost every DAW comes with one. While it is fun to work with original hardware, and there are also great algorithmic reverbs, you can cover most of your needs with impulse response reverbs - but you will of course have to build a library of great impulse responses, and learn how to use them.
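An impulse response reverb is, at its core, just convolution: the dry signal is convolved with the recorded "fingerprint" of the room. A toy sketch in plain Python (real IRs are tens of thousands of samples long, and real convolution reverbs use FFTs for speed):

```python
def convolve(dry, ir):
    """Direct convolution of a dry signal with an impulse response."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, r in enumerate(ir):
            out[i + j] += d * r
    return out

dry = [1.0, 0.0, 0.0, 0.0]      # a single click
ir = [1.0, 0.5, 0.25, 0.125]    # a tiny decaying "room"
print(convolve(dry, ir))        # the click takes on the room's decay
```

Feed a click through it and the output is the impulse response itself - which is exactly why playing a sweep or a clap in a real room and deconvolving the recording gives you that room's fingerprint.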
Most sought-after reverb classics:

• AMS Neve: RMX-16
• EMT: 140 Plate, 250, 251
• Eventide: SP-2016, H8000
• Klark Teknik: DN-780
• Lexicon: 480L, 224 (224X, 224XL), 300 (also as 300L with the "LARC" remote control known from the 480L), PCM-70, PCM-60
• Quantec: QRS Room Simulator
• Roland: R-880
• Sony: DRE-2000, DRE-S777 (hardware impulse response reverb, 1999), DPS-V77
• Yamaha: REV1, REV5

Bricasti M7 - a 21st century classic

When nobody thought a "new classic reverb" was possible, two former Lexicon engineers created the Bricasti M7 reverb. Up to this point, the Lexicon 480L was the de facto industry standard for professional reverb. Its successor, the Lexicon 960L, is very powerful, but disappointed many people who were used to the 480L. Along came the Bricasti M7, and most people were sold on it pretty much immediately. If you really want a hardware unit, this is the one to get.

Your setup for a 3-dimensional mix

The sound of any reverb is highly dependent on the sound source you feed it with. Don't rely on the same "instant settings" for every mix. What you can do, though, is create a lot of options that can be accessed quickly. The CPU power of modern computer DAWs makes it easier than ever to access batteries of different reverbs that are set up in a template and can be called up instantly. If you want to create the impression of a 3-dimensional mix, with lots of space and separation, here are some things to consider:

• you need a huge palette of different reverbs and delays, plus some modulation FX
• the reverb you end up using in the mix will be a mixture of several reverbs that differ in many ways, from colour and density to reverb time
• the most important tracks in your mix access completely different reverb and delay sends and returns
• you need banks of different reverbs ready in your DAW template, specialized for different sources like drum room, snare, toms, lead vocals, pads, lead synth, orchestral instruments
• group the reverb returns together with the instruments they belong to - if you have a drum subgroup or VCA group, the reverb returns for the drums belong to that, and exclusively to that
• also, you don't need to create a new send for every reverb - you can send, for example, to 20 different snare reverbs from "Send 5", and then just unmute and mute them at the aux returns, one at a time, until you find what you're after; this is a very fast and intuitive process, and gives you the option to use one or several reverbs at the same time and balance their levels at the different reverb returns
• for subgroups or VCA groups to really work, FX belong mostly exclusively to their sources - if you mute the drum group, you mute all the drum reverbs with it, but of course NOT the vocal reverb
• it's essential to stay very organized in that respect

All of the above probably sounds like mixes that are drowning in reverb.
Don't be fooled - some of these can be very subtle. Also, don't be afraid of long reverb times, but keep them super low in level. Many of these you only hear when you switch them off. Here are some examples of types of reverbs that are useful to have at hand, all at the same time:

• most reverbs deliver sounds in traditional categories like plate, chamber, ambience, room, concert hall, church, etc.
• most classic reverbs (e.g. Lexicon 224, 480, EMT, AMS RMX) are great at less defined, "cloudy" reverbs, regardless of the reverb time
• modern reverb plug-ins and hardware reverbs (the Bricasti M7 is my favourite) are great at super-realistic room simulation
• plug-ins using IRs (impulse responses) can do both - IRs are basically just samples or "fingerprints" of the original reverbs or even of real rooms

What I have personally done over the years is simply try a lot of different reverbs, and especially IRs, and every time I heard something I liked, I saved the channel strip settings - reverb, EQ and some compression - as a channel strip preset in Logic. Doing this allows you to keep developing a go-to library; after a while there will be certain reverb chains you cannot live without, and these make it into your DAW mix template.

Flavours of Delays

Delays also come in many different flavours:

• the typical 1/2-, 1/4- or 1/8-note delay "throw", often used on the last word or syllable of a vocal line
• short slap-back delay
• super-short micro-delays to widen mono signals
• complex delays with polyrhythmic patterns, or even unpredictable "weird" stuff happening in the feedback loop

Philosophical Considerations

Reverbs and delays are to your mix what a shadow is in a photograph: you don't want them as sharp and shiny as your main subject. They don't need to be 100% upfront and dynamic - their purpose is to give you a sense of subliminal depth. Most of my delay returns are sent on to reverbs, as I don't like them to be a totally dry, "sampled" copy of the original signal. Compressing reverbs and delays helps push them into the background, and I also EQ them to contrast with the direct signal for the same reason: if you've worked hard to make your lead vocal upfront, direct and "in your face", you don't want the reverb to share that same character. Compressing reverbs also makes them more controllable - you can get away with less reverb in the mix if you compress the reverbs you are using. By the way, this is one reason why older "vintage" reverbs and delays are still popular - they have a low-res, "grainy" feel about them (for example the Lexicon 224 or the AMS DMX 15-80). The same applies to the character of tape delays: because each feedback loop feeds the signal back through the tape heads, the saturation effect of analogue tape is very obvious here.

Creating a three-dimensional sound

There are obviously a lot of small details involved, but for a start, get your head around the following concept: the important stuff goes in the center - kick, bass, lead vocals - and you try panning everything else hard left or right. Sounds crazy, I know. Try panning a guitar hard left, send it to a small room and pan the room hard right. Now bring the reverb a tiny bit back towards the center until it feels well balanced. For the second guitar you might have: pan the dry one hard right, send it to another small room (or a delay or chorus) and pan the effect hard left. You get the idea… this works for guitars, keyboards, percussion, noise FX, etc. - I use small rooms/chambers/spaces from the Bricasti M7 reverb for that. There are impulse responses of the Bricasti out there - for free, and officially authorized by the people who designed that reverb. To spread things out further, you can use a super-short delay on some instruments. Logic Pro X has a very useful plug-in called "Sample Delay" that is perfect for this. Leave one side at 0 ms (the original signal passes through unchanged), and for the other side try settings under 1000 samples (that's at a 44.1k sample rate) - values from 300 to 800 work great. Check back and forth in mono to make sure it doesn't sound too weird.
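The sample counts mentioned above translate to milliseconds as follows - a quick sketch showing that 300-800 samples at 44.1 kHz stay well inside the short "widening" range rather than reading as a distinct echo:

```python
def samples_to_ms(samples: int, sample_rate: int = 44100) -> float:
    """Convert a delay in samples to milliseconds."""
    return samples / sample_rate * 1000.0

for n in (300, 500, 800):
    print(f"{n} samples = {samples_to_ms(n):.1f} ms")  # 6.8 / 11.3 / 18.1 ms
```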

Subgroups, FX Busses and Routing

As the number of different reverbs (delays, etc.) you are using grows, you need to keep the routing strictly organized. Using the same reverb on several sources quickly becomes messy: you might need to EQ or compress your main reverb in a certain way to improve its sound on the vocals, but if you have used that same reverb on other instruments, you are suddenly affecting the balance and sound of your entire mix. I know it sounds counter-intuitive, but you're better off separating the reverbs you use for the different instrument groups, and that includes the routing and grouping. On my console, I have a pair of channels dedicated to vocal reverbs and another pair for drum reverbs, and they are assigned to their respective VCA groups. On the drums, use pre-fader sends, use the drum FX exclusively for drums, and route their returns to the drum bus as well, and/or put them on the same VCA group. The same goes for parallel compression busses: the sends to them are pre-fader and don't follow automation, but the returns follow the automation of the source - I do that via the drum VCA group - and you get the added bonus that you can treat the FX returns for each group of instruments separately. We want total control in mixing.

VCA Faders – Balancing the balance.

Balancing the instruments of your song is the essence of mixing. You have a bunch of faders – one for each instrument – and you move 'em all to a position where your mix sounds great. Easy, right? Well, in theory that's correct – but there are a bunch of issues in the way, such as the following, all of which can be resolved by the use of VCA faders and groups. Different setups, different problems, same solution. Just a few examples of where VCA faders and groups can help:

• "I don't even have a mixing console. I do this all in the computer with a mouse."
• "I have a DAW controller with faders, but there are always more tracks than faders."
• "I have the largest SSL console available. But fader 1 and fader 96 are too far apart. And I only have two hands."
• "The further I progress with my mix, the louder it gets, often to the point of distortion. At that point I need to pull all the faders down by the same amount. As I try this, the balance ends up different and I have to start over again."

Introducing VCA Faders – structure for your mix.

A quick example: you can start a mix by balancing only your drums, then assign all of these drum instruments to the same VCA fader. Now you are able to control all of the drums with just one fader. The balance you have set stays intact, and should you need to turn an individual drum instrument up or down, you can still do that at any time. As a next step, you might add various synthesiser sounds – pads, a lead riff, some melodies – until you've got a good balance. The synths communicate well with each other, but once you add vocals you find that the synths are too loud and the drums are collectively too low. At this point you control each group – drums, synths and vocals – with just ONE VCA fader each. It definitely makes the mixing process a lot easier. Essentially, a VCA FADER is a fader that controls a group of individual faders, and this VCA GROUP has its own balance within itself. And while the VCA fader changes the level of each fader in the group proportionally, you can still access the individual faders within the group and make changes to the balance within the VCA GROUP. Here's a quick video of VCA faders in ProTools 12. If you work with Avid ProTools, Apple Logic Pro X or Steinberg Cubase 8, your DAW has VCA faders or VCA groups. All of these DAW programs have added VCA faders to their feature lists in recent updates. The confusing thing is that earlier versions of Cubase, Logic and ProTools had features that looked similar, but didn't provide the exact same feature set as VCA faders. In other words – the software makers didn't get it quite right the first time around, and they're now fixing an old mistake.
Unfortunately this shows that the makers of these programs didn't really get input from professional mix engineers in the beginning. Ask any pro who has worked on a large analogue console – VCA faders and groups were one of THE BIG advantages that always put those consoles ahead of mixing "in the box". Like many features we enjoy in today's digital DAW world, the functionality of VCA faders goes back to an invention from the analogue world which was essential to the development of early synthesizers like the Minimoog. To fully understand what is going on, let's get into the details of the underlying analogue technology.

What is a VCA? History + Terminology

VCA = Voltage Controlled Amplifier. A VCA is an analogue circuit – an amplifier whose volume is controlled by an incoming control voltage.

VCA as used in analogue synthesizers

In a classic analogue synthesizer, for example a Minimoog, the VCA is controlled by an envelope generator – I’m confident you’re familiar with the classic “ADSR”-envelope generator. ADSR stands for “Attack, Decay, Sustain, Release” and describes the volume of a note once triggered by a key on the musical keyboard of an analogue synthesizer (or for that matter, by an incoming MIDI note).

VCA circuit as used in analogue consoles

While the volume of a channel in a mixing console can easily be controlled by running the audio directly through a fader acting as a variable resistor, the manufacturers of professional mixing consoles wanted a more flexible way to control the volume of each channel. That's why they added a VCA circuit to each mixer channel (and also to the master bus). The VCA circuit can be controlled by any device that generates the necessary control voltage - for example, a range from 0 V to 5 V controlling the volume from silence to full level. The voltage can be provided by your traditional channel fader, or alternatively by a mix computer that records fader movements and plays them back. The classic SSL mix computer of the E/G-Series consoles is based on this technology: both the mix computer and the fader of each channel can send a control voltage to the VCA (a circuit board located on each mixer channel). The heart of the VCA circuit is a VCA chip. The ones used in the classic SSL consoles were originally manufactured by dbx, and later replaced by VCA chips from THAT Corporation. If you're a DIY nerd, you can purchase these integrated circuits in online electronics shops like Mouser. Different generations of these VCA chips are said to have different audio characteristics, although in theory we want them to sound as neutral as possible.

VCA Faders

On a classic E/G-Series SSL console, EVERY large fader is a VCA fader. Audio NEVER passes through the faders – the faders just send a control voltage to the VCA, which can also be recorded by the mix computer in real time. The mix computer essentially has an A/D and D/A converter which translates and stores the changing voltages for each SMPTE frame of the music's timeline. Similar to what a DAW does today – only that it records control voltages instead of 24-bit audio – not bad for a 1977 computer, right?

VCA Groups

In addition to a fader for each channel, the classic large SSL consoles always had 8 extra faders controlling 8 VCA groups. Each large channel fader could be assigned to one of these 8 groups, which would (via the VCA group fader) act as a master control for all the channels assigned to it. Again, no audio signal ever passes through these faders – all they do is provide a control voltage to the channels assigned to each of the 8 available VCA groups. But there's a super-tricky thing the SSL console developers invented for the VCA group faders: when the group fader is set to its default (= 0 dB), it adds 0 V to the VCA circuits of the assigned channels. If you push the VCA group fader higher, it adds voltage (= volume) to the channels; if you pull it lower than 0 dB, it sends a negative voltage, subtracting from the control voltage so that all assigned VCAs reduce their relative volume by the same perceived amount. Very simple but effective!

VCA Trim

The use of VCAs in analogue consoles also solved another very common problem every user of analogue consoles would frequently run across. Remember how you sometimes end up pushing all the faders higher and higher, and at one point overload your master bus? This is where the "VCA trim" comes into play: it's basically just one knob that globally adds or subtracts voltage to ALL VCAs in the system.
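The arithmetic behind that "super-tricky" trick is easy to sketch: because the control-voltage offsets (and hence dB offsets) simply sum, the group fader and the VCA trim shift every assigned channel by the same amount, leaving the internal balance untouched. A minimal illustration - the fader values below are made up:

```python
def effective_levels(channel_faders_db, group_db=0.0, trim_db=0.0):
    """Channel fader + VCA group offset + VCA trim, all summed in dB."""
    return [ch + group_db + trim_db for ch in channel_faders_db]

drums = [-3.0, -6.0, -10.0]           # kick, snare, hat fader positions
print(effective_levels(drums))         # group at 0 dB: levels unchanged
print(effective_levels(drums, -4.0))   # group pulled down 4 dB: balance intact
```

Every channel moves by the same 4 dB, so the relative balance inside the group is exactly preserved - which is the whole point of a VCA group.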

When to use what.

Audio Subgroups

The most basic audio subgroup every mixer and DAW has is the stereo bus. More commonly, though, we associate audio subgroups with a routing where a number of tracks are submixed before these different groups get summed together on the stereo bus. Audio subgroups are great for situations where you want to "glue" various elements together with processors like compressors and saturators, or you want to apply similar reverbs to all of them at the same time. A drum subgroup is a great example of the "glue" situation, whereas a subgroup for backing vocals is very convenient for adding the same kind of reverb FX to all of them at once. If all you want is one fader for ALL the drums, or ALL the vocals, but you're not planning on any group processing, you're better off using VCA faders.

Fader Groups

When several faders in your mixer are "linked", many DAWs call this a "fader group" or "fader link". The individual faders can have different levels, but they are all linked: when you bring one fader up, the others are brought up as well, proportionally to their original settings. Many early DAWs had this feature, but as soon as you wanted to change one of the faders within the group without affecting the others, you had to temporarily remove that fader from the group, change the level, then add it back again. Which is of course doable, but not convenient, and it definitely prevents intuitive changes on the fly.

VCA Faders

VCA Faders are very similar to “Fader Groups” with the only difference being that you can change the individual level of every fader at any time while they still remain linked to the “fader group”.

How to create a VCA-Fader stack.

Logic Pro X: Select the tracks you want to group to a VCA fader, then use "Create Track Stack" and pick "Folder Stack" for the VCA function ("Summing Stack" will create an extra bus for summing instead).

ProTools 12: First, create a new track and select "VCA Master" to create a new VCA fader. Then add audio tracks to your session, and finally group them to the VCA fader you already set up.

Cubase 8: Open the mixer window and select the tracks you want to group to a VCA fader. Then right-click on them and pick "Add VCA Fader to Selected Channels". The new VCA fader will now control the selected tracks.

If you found this post helpful, check out the full book which will guide you through the entire mix methodology from DAW preparation to mix delivery, the eBook YOUR MIX SUCKS.]]>
https://www.masteringthemix.com/blogs/learn/how-to-never-stop-improving-your-music-production2017-12-02T16:50:00+00:002018-01-09T23:23:06+00:00How to never stop improving your music productionTom Frampton

You might not be improving much in your music production even though you’re working hard at it. Why is that? And what can we do to overcome that stagnation?

A while back, I felt like I had stagnated in the quality of my mixing and mastering. I had been improving rapidly for years, but then the rate of improvement just seemed to slow down.

A brutal reality is that you too might not be improving much in your music production even though you’re working hard at it.

Why is that? And what can we do to overcome that stagnation?

When we’re making music, whether we’re producing, mixing or mastering, we’re trying to get things done as best as we can. We’re often in a high-stakes situation where we’re working on getting the best sound for a client, or finalising a track we’ve poured our heart and soul into.

This is known as the performance zone:

• the goal is to do the best we can
• we focus on perfectly executing our known skills
• we aim to minimise mistakes by taking fewer risks

To become better in the performance zone, we should spend more time in the learning zone:

• doing activities specifically with the goal of improvement
• focusing on improving our known skills and learning new ones
• expecting to make mistakes

No matter how good you are today, you CAN still improve. Get into a mindset where you believe you can and truly want to improve.

I was confident that my mixing and mastering was of a great standard… BUT I wasn’t ready to succumb to complacency.

What worked for me?

I decided to spend time studying why my favourite mixes sounded so great, and I would try to identify how my mixes were different. When I first began this process I was using a chain of plugins to get certain readings to help me analyse the tracks. This process became part of the inspiration for our plugin REFERENCE, so I'll explain my new, streamlined version of how I attempt to improve my sound.

I load up a fresh project (so I don't have any other distractions) and drop in my 3 most recent FINISHED stereo masters.

I then load up REFERENCE on the output channel and drop in a selection of my favourite mixes. At this point I’m not necessarily trying to match the genres. When actually mixing or mastering, it can be more useful to use a reference track in the same genre as the track you’re working on, but for this exercise I’m opening my mind and loosening the rules.

I'll start with that first master and compare it to each of the tracks I've loaded in. I'll hit level match all so I can jump between all the tracks and keep the perceived volume equal across the board. The trinity display will tell me how my master differs from the references. I then try to conclude whether my mastering decisions worked for this production, or whether it could have been positively influenced by aspects present in the reference track.

I'll start with the overall balance. If the white level lines go above the middle line, those frequencies are more prominent in my master; if they go below, those frequencies are less prominent. I take the readings with a pinch of salt if the references aren't a similar genre, but either way it gives me great insight. I'll play around with the number of frequency bands: fewer bands for a broader perspective, more bands for a more precise one.

From this information I’ll learn if I pushed the bass too much or made the top end just a little bit too crispy for the production. I’ll get an idea of if my master sounded boxy or muddy compared to mixes that I love. This reflection helps me identify ways in which I might take a different approach in the future.

The next step is to look at how my track compares in terms of punch and compression. If the purple dots in REFERENCE move towards the white level line, those frequencies are more compressed in my master than in my reference. If they move away from the white level line, they're punchier. Again, I can use fewer bands for a broader perspective and add more bands for a more precise one.

If the purple dots are moving towards the white level lines in the mids and the highs when flicking between all the references, then I can determine that I might have over-compressed those frequencies in my master. Self-assessment isn't about becoming a perfectionist or being overly critical, but it can be useful to look back on your work with fresh ears and decide whether you would do things differently today. You can then use that insight in your future productions.

You can repeat this process and it will always be effective…

There are always new techniques being explored and a HUGE back catalogue of all the great productions ever released. I love listening to great music, and with this technique I’m improving my ability to make better mixing decisions.

Conclusion

The way to better music production is to switch between the learning zone and the performance zone. We should aim to deliberately improve our skills in the learning zone and then apply what we've learnt in the performance zone. This will help you continuously grow and improve your skills.

Experiment with new plugins and sounds at a time when you're not actively mixing or producing a song. Use plugins and synths in ways they weren't necessarily intended.

Your ability to innovate and be creative in your productions is like a muscle. The more you formulate ideas, the easier it will be in the future to quickly find interesting and more effective approaches to making music.

Solicit feedback from other producers you trust and respect.

Reflect on the feedback you’ve received and self assess areas in which you can improve.

]]>
https://www.masteringthemix.com/blogs/learn/producing-music-for-apple-music2017-11-16T13:39:00+00:002018-01-09T21:05:23+00:00Producing Music for Apple MusicTom Frampton
With over 27 million subscribers, Apple Music is a major player in the music streaming industry. In this post I'll discuss how to master your music to give listeners the best possible listening experience. I'll go into some depth on the production side of things for the artists who really care about their music and the engineers who care about their clients. In my research, I've discovered a worrying truth about the way a lot of music is submitted to iTunes. It's bad. But I'll show you how to avoid it, with an example of how I successfully avoided it on a track I mastered for a major label.

How to master your music for Apple Music

Apple transcodes the lossless file given to them to AAC (advanced audio codec) at 256kbps. They then stream this file through Apple Music.

During this transcoding process, the peak of the audio will almost always increase. If you’ve mastered to 0dB using the peak programme meter found on the master channel in your DAW, then your music will be digitally distorting when it’s streamed by Apple Music.

You need to leave around 1dB of true-peak headroom (i.e. master to -1dBTP, decibels True Peak) to anticipate this transcoding process. This SUPER SIMPLE step means that your music isn't distorting when it reaches listeners. Check out the 15-day free trial of our plugin LEVELS, which has a highly accurate true peak meter… 16x oversampling, for you nerds. Hit the 'MFiT' (Mastered For iTunes) preset and check your last master to see if it had this issue.
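The arithmetic behind the headroom advice is straightforward. The 0.5 dB transcode overshoot below is an illustrative figure, not an Apple specification - the actual increase varies with the material - but it shows why a 0dBFS master clips after encoding while a -1dBTP master survives:

```python
def peak_after_transcode(master_peak_dbtp: float, overshoot_db: float = 0.5) -> float:
    """True peak after lossy encoding, assuming a fixed overshoot."""
    return master_peak_dbtp + overshoot_db

for master_peak in (0.0, -1.0):
    after = peak_after_transcode(master_peak)
    status = "CLIPS" if after > 0.0 else "ok"
    print(f"mastered to {master_peak} dBTP -> {after:+.1f} dBTP after AAC ({status})")
```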

Does Apple Music Normalise Playback?

It’s widely known in the audio community that Spotify normalises tracks to around -14LUFS integrated as a default setting. You can turn normalisation off, but most people probably don’t.

Conversely, Apple Music doesn't normalise music by default. Normalisation is a good thing and should be encouraged. It might not be long before Apple Music follows the trend of other streaming platforms and enables Soundcheck as the default.

When Soundcheck IS enabled, Apple Music streams audio at an average of -14 LUFS. Some individual tracks can be as loud as -12 LUFS and some as quiet as -16 LUFS.

SHOCKING DISCOVERY

The loudness wars might be over as far as digital delivery of music is concerned… but a lot of producers, engineers and labels seem to be a bit slow to accept the transition. When I streamed a long stretch of chart music without Soundcheck enabled, the music played back at an average of -8LUFS, and some tracks had a true peak as high as +2.2dBTP (grim).

So, most major labels are giving their listeners an over-compressed and distorted listening experience…? Still…? There’s nothing wrong with the way Apple Music delivers audio, it’s the fault of the mastering engineers and labels who submit these masters.

Here’s an example of how it can go horribly wrong…

Rockstar by Post Malone peaks at +0.14dBTP, so it's clipping when it reaches the listener's ears whether Soundcheck is enabled or not. The damage is done before the normalisation, so the clipping is irreversible. The non-normalised track at its loudest streams at a very un-dynamic -7.6 LUFS (the 4.1LU loudness range confirms this).

So what happens when Soundcheck is enabled?

The track is reduced to -13.5 LUFS, which brings the peak down to -4.15dBTP. The short-term and long-term dynamic range are unaffected by normalisation. So here we have around 4dB of potential headroom that could have been used to create a more dynamic and exciting listening experience for Post Malone's fans. Instead they get an over-compressed mix that distorts.
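Soundcheck-style normalisation boils down to one static gain offset: the difference between the target loudness and the track's measured loudness, applied equally to the loudness and the true peak. A sketch with illustrative readings, not the exact figures Soundcheck computes - and note that turning a clipped master down does not undo the clipping:

```python
def soundcheck_gain_db(integrated_lufs: float, target_lufs: float = -14.0) -> float:
    """Static gain a normaliser would apply; negative means turn down."""
    return target_lufs - integrated_lufs

# A crushed master vs a more dynamic one (illustrative measurements):
for lufs, peak_dbtp in ((-8.0, 0.2), (-10.5, -0.5)):
    g = soundcheck_gain_db(lufs)
    print(f"gain {g:+.1f} dB -> {lufs + g} LUFS, peak {peak_dbtp + g:+.1f} dBTP")
```

The crushed master ends up with its peak far below full scale - headroom that could have carried real dynamics instead of being burned on limiting before upload.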

It doesn't have to be like this!

I mastered a track called 'This Town' by Niall Horan (One Direction), remixed by Tiesto. This was a good test, as I had accurate technical details of the lossless master file. I mastered 'This Town' to -0.5dBTP and -10.5 LUFS. Once Apple transcoded the master, it was -0.4dBTP and still -10.5 LUFS.

When Soundcheck is enabled, Apple Music streams this track at -2dBTP and -12.50 LUFS. A much less drastic change than ‘Rockstar’, but still a bit of room for improved dynamics. At the time I was just happy to get this (very dynamic for electronic music) master past the decision makers involved.

So my master of ‘This Town’ plays back a whole decibel louder on Apple Music than ‘Rockstar’. It’s also got a much broader dynamic range and doesn’t distort. Winner!

Conclusion

As music producers and audio engineers, we have a duty to learn our craft as best we can to deliver the best listening experience. A lot of music is still being delivered with easily avoidable technical issues. In an effort to disrupt this trend, we have created a standalone application called EXPOSE that can be used by producers, labels, A&R, or anyone with a computer. EXPOSE assesses the technical data of audio before you commit to releasing the music. Even people with no music production knowledge will be able to identify technical issues. This will empower decision makers to demand specific changes to ensure technically excellent masters are streamed to audiences.

How To Write Better Lyrics
Tom Frampton, 2017-11-04
https://www.masteringthemix.com/blogs/learn/how-to-write-better-lyrics
I understand the difficulty of summoning creative lyric writing in the studio. Sometimes it can take you hours to write just one line that makes you cringe the following day.

I’ve worked with some brilliant lyricists over the years so I thought I’d share some of their top tips with you!

Get exponentially better in the long term

A quick win from a few simple tips isn’t going to secure you a long-term career as a great lyricist. So what’s the long-term plan to separate you from the artists who weren’t willing to put in the effort?

READING!

Poetry, literature, lyrics…anything that speaks to you on an emotional level. Find writers whose words inspire you, and try to understand why their words had such an impact. Just the act of reading and taking in the information will give your brain a creative boost and you’ll subliminally learn techniques that you can pull from when writing lyrics.

There’s a great book called Steal Like An Artist by Austin Kleon, which goes deeper into the topic of getting inspiration from other artists. Here’s a great quote from the book:

"If you steal from one author, it’s plagiarism; if you steal from many, it’s research."

Wilson Mizner (1876–1933)

Training Your Lyric Muscle

Great lyricists often keep a journal as a way to materialise and immortalise their ideas. They write down their dreams or how they’re feeling just to channel the thoughts.

Try an exercise where you simply scribble down your thoughts and try not to stop. This can be difficult to begin with, and a blank canvas can be intimidating. Write anything that comes into your head without judgement and just keep going. If you flex your 'ideas muscle' often, you’ll find your flow much quicker in the studio.

Stay In Your Flow

Lyrics can be the embodiment of matters that are very important to you. So it’s understandable that an element of perfectionism worms its way into the process. This is why lyricists can get stuck on one line for hours at a time. A pro tip is to just write the gist of what you want to say on that line and move on. Stay in your flow.

The first draft might end up being the final lyrics. A moment of brilliance where everything just clicked. But don’t be afraid to redraft and see if you can sculpt a better version of what you’ve got. With that in mind, keep hold of all of your works, complete and incomplete. You may find a way to turn an unfinished work in progress into a fully formed piece of art.

Tools and Techniques

A rhyming dictionary and thesaurus can be a great way to give your lyrics some added flavour. You might be endlessly searching for that perfect word, and these tools can save you time by dropping it right in your lap. Don’t get dragged into using over-complicated words or phrases. This can be confusing for the listener and distract from the intended message. Simplicity is usually much more effective.

Your audience probably won't read your lyrics, they'll hear them. So don't just write them down during the creative process, you have to say them or sing them to hear how they sound.

Making Money From Music On Spotify
Tom Frampton, 2017-10-19
https://www.masteringthemix.com/blogs/learn/making-money-from-music-on-spotify
Spotify pays an average of $0.00437 per stream, and it represents 69.57% of streaming market revenue share, so it's the most relevant platform for analysing streaming revenues for your music. This post will discuss the following questions:

How many Spotify streams do I need every year to make the equivalent of an average income?

How can I achieve millions of streams without millions of social followers?

How can I maximise profits from Spotify?

How Many Streams?

The Bureau of Labor Statistics reported a median personal income of $865 per week for all full-time US workers in 2017. To earn this, an artist (or copyright holder) needs just under 200,000 streams on Spotify every week, which is around 10 million streams per year. That might seem like a massive and unachievable number... But with excellent songwriting and production, there is a calculated approach that will increase your chances of getting millions of streams.
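That figure comes from straightforward arithmetic. Here’s a quick sketch using the per-stream rate quoted above; bear in mind that real payouts vary by territory and subscription type, so treat the flat rate as an illustrative average.

```python
def streams_needed(weekly_income: float, rate_per_stream: float) -> int:
    """Streams per week required to match a target income at a flat rate."""
    return round(weekly_income / rate_per_stream)

weekly = streams_needed(865, 0.00437)  # just under 200,000 streams a week
yearly = weekly * 52                   # roughly 10.3 million streams a year
```

Run with the article's figures, `weekly` comes out at about 197,941 streams, and multiplying by 52 weeks lands a little over 10 million streams per year.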

How Can I Rack Up 10 Million Streams!?

Self-released artist Perrin Lamb made $56k in one year from ONE song streamed over 10 million times. His track ‘Everyone’s Got Something’ was added to the ‘Your Favourite Coffeehouse’ playlist and the streams started rolling in.

The track was released a full year before it was featured on the playlist. This displays one of the most valuable features of the music you create today… Its shelf life is potentially endless. Your music is always fresh and new to someone, so keep pushing your back catalogue down new avenues that might generate revenue streams.

Here's a quote from Calvin Coolidge to keep in mind when pitching your music:

"Nothing in this world can take the place of persistence. Talent will not: nothing is more common than unsuccessful men with talent. Genius will not; unrewarded genius is almost a proverb. Education will not: the world is full of educated derelicts. Persistence and determination alone are omnipotent."

How To Get Your Music On Spotify Playlists

If you're confident that your songs are well written and you've used our plugin LEVELS to make sure your track will sound great on Spotify, then you can start pitching your music to get featured on a Spotify playlist. I have stereo mastered dozens of tracks that have ended up on playlists with millions of followers. It's a great way to get organic exposure for your material.

Reach Out To Spotify’s Playlist Curators

At the bottom of this post I've listed the top 25 most popular playlists on Spotify. They might not be right for your music, but they can give you an idea of what people are listening to. You can learn more about approaching A&R through email in my post on Music Marketing.

Send your tracks to other popular music blogs too. Not only is it more exposure if they select your music but you might also reach more curators.

Email is a great way to reach people, so grow your fan mailing list…This will probably turn out to be more fruitful than focussing on socials. You can then email them when you release a track on Spotify.

Getting Paid

Spotify’s royalty payouts have had a lot of bad press from some major artists. However, streaming services can be lucrative (and seem fairer) when the artist owns the publishing and songwriting rights. Spotify pays out 70% of its revenues to rights holders. If you self-release your music through a platform like CD Baby, you can retain a larger slice of the pie than if you sign the track to a label. Sometimes releasing with a label can be the best option due to their market reach and leverage with streaming sites. You'll need to decide what is best for you.
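To see what that ownership question means in pounds and pence, here's a rough comparison. The per-stream rate is the article's average; the distributor cut and label royalty below are hypothetical illustrations, not quotes of any real contract.

```python
PER_STREAM = 0.00437  # average payout per stream, from above

def artist_income(streams: int, artist_share: float) -> float:
    """Income kept by the artist after the distributor's or label's cut."""
    return streams * PER_STREAM * artist_share

# 10 million streams a year, under two hypothetical deals:
self_release = artist_income(10_000_000, 0.91)  # distributor keeps ~9%
label_deal   = artist_income(10_000_000, 0.50)  # label takes a 50% royalty
```

At 10 million streams the gross pot is about $43,700, so even these rough example splits show why owning your rights changes the picture so dramatically.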

I’ll start off by showing you a level-matched comparison of the pre-master and my final master. Then we’ll look at the journey the track took to get to that result.

When a mix and a master are level-matched, the difference between them might not sound so impressive, but it gives you a really accurate representation of the actual changes made during mastering. Let’s have a listen.

0:40 Sounds great. So let’s start at the beginning and take a listen to the mix.

I’m using a mixing preset here in LEVELS to make sure the mix is free of technical issues.

1:42 The low end is nice and central.

2:00 Using Bass Space here to make sure there aren’t any unwanted low frequencies in the vocals, guitars and keyboards.

2:36 Switching to the Mastering for Streaming Services preset here.

2:50 I can see the LUFS during this chorus is around -22, which is a little low. We’ll be aiming for about -14 integrated LUFS for this master.

Using the Sonnox Inflator here to add some harmonic distortion energy.

Using the Shadow Hills compressor here to add a bit of glue and catch some transients.

4:00 Using a multi-band compressor here to zone in on gluing the elements of the mix and shaping the sound in a subtle way.

4:41 Just using the limiter again here to increase the volume to around -14 LUFS.

5:05 The vocals are sounding a little harsh to me. With stem mastering I can address that specific problem on the channel itself without affecting the other instruments.

Using Refinement by Brainworx, I can reduce the harshness in a very organic way.

5:42 I’m boosting the lowest notes of the vocals here. Not only will this give the vocals a bit more body, but it will also have a positive effect on how harsh they are perceived to be.

5:58 They still feel a little harsh, so I’ll use a more surgical approach with a multi-band compressor.

6:25 Bypassing the plugin to make sure what I’m doing is positive.

7:00 The vocals are sitting nicely in the mix now.

7:10 To me the drums are sounding a little flat in the mix. I want to add punch and colour. I’ll start by using an expander to add some controlled punch.

8:00 You can see the purple band is only reacting to the kick and the blue band is only reacting to the snare. I adjust the threshold to make sure the yellow line always returns back to 0dB before the next hit.

8:51 The bypass shows me that the drums are now cutting through the mix better.

9:19 Now to add some flavour and clarity.

9:50 The before and after shows a pretty obvious improvement when soloed.

10:10 The before and after sounds great whilst listening to the whole mix too.

11:04 Adding some analogue warmth to the guitars.

11:43 Switching between listening in solo and listening to the whole mix is very helpful. Never spend too much time on either without checking the other.

12:26 An EQ boost adds overall volume, so I’ll bring down the gain to test the changes fairly.

13:10 I feel like the bass has its place in the mix now.

13:27 I want the bass to duck out of the way of the kick here, as they’re both competing for that low-end space. I don’t want the whole bass to duck, just the clashing frequencies, so I’ll use a multi-band compressor.

14:32 Just listening to the bass by itself to make sure it doesn’t sound unnatural.

15:00 Checking back in with LEVELS to make sure we’re still on track for a technically great master.

16:02 I’ll fire up REFERENCE now to make comparisons between the mix I received and the master in its current state, as well as comparing the master to other reference tracks.

The track align and level match features in REFERENCE help me compare the two versions in the most objective way possible.

17:15 I’ll hit ‘level match all’ to make sure my references are also the same volume.

17:30 Looks like the master could use more prominent high mids and high frequencies.

19:32 I’ll add some tape emulation to the accompaniment stem to give it some warmth.

22:50 Making some adjustments to the mastering compressor here, as the signal coming in has changed.

23:00 Checking in with LEVELS again.

24:00 The before and after comparison using REFERENCE lets me know that the changes I’ve made are positive.

25:00 OK, so what I’m going to do here is take out some low end in the stereo field to focus the low frequencies in the centre. This should help give the mix a tighter sound.

26:50 I want to be peaking at -1dB true peak so I’ll reduce the output of my limiter.

27:20 The LUFS is sitting around -14, which is ideal for streaming.

28:40 I’m just adding a small amount of compression to the harsh frequencies of the brass here.

29:27 Sounding good and ready to be streamed! Thanks for watching, I hope you picked up a few ideas to implement in your next project.