This time we’re going to be talking about time! Time is one of the most mysterious forces in all of existence, and the same is kind of true in the world of audio engineering. Time has some very interesting properties that play into how we hear things. What we’re going to be talking about specifically is very small increments of time, what’s called the Haas Zone or Haas Effect, where time is no longer perceptible as time, and instead we get some different effects.

So, in real life, when something is standing directly in front of you, you hear the sound get to both of your ears at the same time. If something starts moving off in one direction, then the sound it emits gets to one ear before it gets to the other ear. So, if something is directly on your left, it’s going to reach your left ear maybe about a millisecond before it reaches your right ear.

So, time differences between about zero and one millisecond show up in our brain as localization. Panning. Now, we’re going to discuss why we use level panning in the DAW as opposed to precedence panning, which is what this delay-based panning is called. But precedence panning is something we should be aware of, because it can be a very cool tool.
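To put a rough number on that "maybe about a millisecond," here is a small sketch using the Woodworth approximation for interaural time difference (the head radius and speed-of-sound values are typical textbook assumptions, not figures from this article):

```python
import math

def itd_seconds(angle_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth approximation of interaural time difference (ITD)
    for a source at angle_deg off the front (0 = straight ahead,
    90 = directly to one side)."""
    theta = math.radians(angle_deg)
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

# A source directly to one side arrives earlier at the near ear
# by roughly two-thirds of a millisecond:
print(round(itd_seconds(90) * 1000, 2))  # ~0.66 ms
```

So the whole natural localization range really does live inside that zero-to-one-millisecond window.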

Now, the other thing that happens with small increments of time: once we get out of that precedence zone, we start getting into the phase interference zone. There we don’t necessarily localize the sound, but we don’t hear the sounds as separate entities either; instead, we get a kind of modal interference that our brain hears as a flanging, robotic, comb-filtering kind of effect.

When the two copies are panned apart, we hear them as a stereo source. We still hear it as one thing, but it’s like it’s coming from two different directions, and our ear will favor the one that arrives first. When they’re summed together, though, that’s when we start getting comb filtering and modal interactions.
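The comb filtering that shows up on summing is predictable: when a signal is added to a copy of itself delayed by Δt, cancellation nulls land at odd multiples of 1/(2·Δt). A minimal sketch of that arithmetic:

```python
def comb_notches(delay_ms, count=3):
    """First few null frequencies (Hz) when a signal is summed with a
    copy of itself delayed by delay_ms. Nulls fall at odd multiples
    of 1 / (2 * delay)."""
    delay_s = delay_ms / 1000.0
    return [(2 * k + 1) / (2 * delay_s) for k in range(count)]

# 0.75 ms delay: nulls near 667 Hz, 2000 Hz, 3333 Hz
print(comb_notches(0.75))
# 20 ms delay: nulls packed tightly, every 50 Hz starting at 25 Hz
print(comb_notches(20.0))
```

Short delays put a few wide notches right in the midrange; longer delays produce many closely spaced notches, which is why the two demos later in this piece sound so different in mono.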

So, I’m going to demonstrate all of this stuff now. I’ve taken the liberty of recording my beautiful singing voice, and I’ve copied it onto two tracks. One being panned hard left, one being panned hard right. They’re at the same level. They’re going to play at the same time. Ready? Brace yourself.

[voice plays]

Beautiful. I want you to know, I gave up a very, very successful career as a singer to become an audio engineer.

That’s not true.

Anyway, now I’m going to put on a delay on one side. Let’s do 0.75 milliseconds and see what happens.

[voice]

One more time. Oh, it sounds like it’s coming from the left! Nice.

Alright, now let’s see what happens if we do a wider delay, of say, like, 20 milliseconds.


[voice]

It’s kind of coming from both directions, in a way, although again, we’re favoring the left because it’s coming first.

So, okay, that’s kind of cool, right? These can be very useful. Sometimes you’ll get a source that’s mono and you just need to brute force it into stereo; something like a 20-40 millisecond delay can be very good for that. And occasionally, you might want to do a precedence panning effect on something like, say, background vocals, or really anything in the background.
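The brute-force stereo trick described above is just "duplicate the mono signal and delay one channel." A minimal NumPy sketch, assuming a mono signal as a 1-D float array (the noise input here is only a hypothetical stand-in for a real recording):

```python
import numpy as np

def haas_widen(mono, sr, delay_ms=20.0):
    """Turn a mono signal into pseudo-stereo by delaying one channel,
    the Haas-style trick discussed above (20-40 ms range).
    Returns a (samples, 2) array: left = dry, right = delayed."""
    delay_samples = int(round(sr * delay_ms / 1000.0))
    delayed = np.concatenate([np.zeros(delay_samples), mono])[:len(mono)]
    return np.stack([mono, delayed], axis=1)

# Hypothetical input: one second of noise at 44.1 kHz
sr = 44100
mono = np.random.randn(sr).astype(np.float64)
stereo = haas_widen(mono, sr, delay_ms=20.0)
```

The ear latches onto the earlier (left) channel for direction, while the delayed copy adds width, which is exactly the precedence behavior demonstrated in the clips.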

But here’s why you might want to be a little bit reserved about using these kinds of effects. When we sum them into mono, they don’t play so well together. So here they are, here’s my voice together.

[voice]

Right? It’s mono. Again, it’s just louder than it was before.

Now, here we are with our 0.75 millisecond delay.

[voice in mono with delay]

Sounds comb-filtery and weird. It doesn’t sum to mono so nicely.
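That "comb-filtery" sound can be checked numerically: summing a sine with a copy of itself delayed by Δt scales it by |2·cos(π·f·Δt)|, which is a 6 dB boost at some frequencies and a total cancellation at others. A small sketch for the 0.75 ms case:

```python
import math

def mono_sum_gain_db(freq_hz, delay_ms):
    """Level change (dB) of a sine at freq_hz after summing with a copy
    of itself delayed by delay_ms, relative to one copy alone."""
    phase = 2 * math.pi * freq_hz * (delay_ms / 1000.0)
    mag = abs(2 * math.cos(phase / 2))  # |1 + exp(-j*phase)|
    return 20 * math.log10(max(mag, 1e-12))  # floor avoids log(0)

# 0.75 ms delay: about +6 dB at very low frequencies...
print(round(mono_sum_gain_db(0.0, 0.75), 1))
# ...and a deep null near 667 Hz, right in the midrange
print(mono_sum_gain_db(2000 / 3, 0.75))
```

Everything between the peaks and nulls gets some intermediate boost or cut, which is the uneven, hollow quality heard in the mono fold-down.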

And here’s with our 20 millisecond delay.

[voice in mono with 20 ms delay]

Sounds like metal and kind of fried and stuff. A little weird.

I mean, these things can be cool as effects in and of themselves. This sort of weird, phase, modal interaction, but generally speaking, it’s not desirable.

However, if we can get our timings right so that the mono fold-down is okay, or if we’re working on something where we really don’t care how it sounds in mono (there’s a lot that could be said about that, but it’s not really a conversation for this video), then this can be a cool effect to play around with. Alright, guys. Until next time.

Matthew Weiss

Matthew Weiss is the recordist and mixer for multi-platinum artist Akon, and boasts a Grammy nomination for Jazz & Spellemann Award for Best Rock album. Matthew has mixed for a host of star musicians including Akon, SisQo, Ozuna, Sonny Digital, Uri Caine, Dizzee Rascal, Arrested Development and 9th Wonder. Get in touch: Weiss-Sound.com