Of the many methods of recording, one has stayed relatively under the radar: binaural recording. But what is it? Binaural recording is, in short, a technique for creating very realistic, or 3-D, sound recordings. When these are listened to, the playback is extremely realistic, and it's truly like the listener is in the scene of the recording.

Binaural recording has its roots back in the late 1800s in the form of a device called a théâtrophone.

These devices used telephone lines to send audio from operas and other such events to the listener. The two separate receivers created a rudimentary binaural effect. Of course, the quality was poor, but time would eventually change that.

The next step came in the form of Oscar, a human-like dummy head created by AT&T that debuted at the 1933 Chicago World's Fair. It had two microphones for ears and was even closer to modern-day binaural recording. However, it wasn't perfect either. So, what came after Oscar? A more sophisticated Oscar. This time, though, it wasn't AT&T that crafted the 1972 binaural head, but Neumann, a company known for microphones. It was called the KU-80, and it's the predecessor to the KU-100, the modern-day binaural recording dummy head that goes for $8000.

So, how does it work? Acoustics and two microphones is the short answer. Now then, if you were to set up two plain old microphones in a room and create noise all around them, you'd find that the recording doesn't sound like all it's cracked up to be. Well, that's because you're doing it wrong. The reason the head is used is because of the way it interacts with sound, and this interaction is what makes a binaural recording so realistic. You may not realize this, but our head and ears greatly shape what we hear. There are two main components to this: head shadow and the pinna.

Imagine you're looking down at an ocean shore, and let's say you see a giant rock in the water. As you watch waves propagate in from the ocean, you'd see them bend around the rock, bounce off it, etc. This is what happens with head shadow. Our head is the rock, and sound waves move around and bounce off it. The head also absorbs high-frequency sound, while lower frequencies actually bend around it. Now, if sound hits the side of your head, it will first enter the nearer ear, but at the same time, some of it bends around you and enters your other ear. Our brains can sense the difference in time between sound entering one ear and the other, and this is a crucial part of replicating a realistic recording: it helps dictate direction and location.
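To put a rough number on that time difference, here's a toy Python sketch. It uses Woodworth's classic spherical-head approximation; the head radius and speed of sound are just typical textbook values, not measurements of any particular head:

```python
import math

# Toy sketch of the interaural time difference (ITD) described above,
# using Woodworth's spherical-head approximation. Head radius and speed
# of sound are typical textbook values, not measurements.
HEAD_RADIUS_M = 0.0875    # average adult head radius, meters
SPEED_OF_SOUND = 343.0    # speed of sound in air, m/s, at ~20 C

def itd_seconds(azimuth_deg):
    """Extra travel time to the far ear for a distant source at the given
    azimuth (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg -> {itd_seconds(az) * 1e6:5.0f} microseconds")
```

The difference tops out around 650 microseconds for a sound directly to one side, which is tiny, yet it's enough for the brain to judge direction.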

Now, if you take another look at the KU-100, you'll see that the head is actually pretty simple, but the ears are very detailed. Why? Those parts of our ears, called the pinna, drastically change what we're hearing, and tell us where sound is coming from in another way. The first thing the pinna do is collect sound, making our hearing more efficient, but they do something else too. Take a look at this graph, called a Fletcher-Munson curve: (http://www.nonoise.org/library/animals/5.gif)

You'd think that we hear all frequencies the same, right? Wrong. That graph shows how we actually hear, and as you can see, for example, at the point where you can just barely hear a 4 kHz tone, a 20 Hz sound has to be around 70 dB louder just to be heard at all! That's a lot. In fact, the quietest level of sound most people hear on a daily basis is around 30-40 dB. Our ears mold sound to be nowhere near what it actually is. But this isn't bad, because, in short, the outer ear also creates comb filtering. Comb filtering is a series of notches in the sound (typically caused by phase difference, or the time difference between 2 or more copies of a sound). It kind of looks like this: (http://www.mathworks.com/matlabcentral/fx_files/35228/1/comb.jpg)

As you can see, this isn’t flat either.

Here’s what it sounds like:

(Fun Fact: if you have sound coming out of speakers, like usual, given the right positioning, you can actually create this naturally. Try moving around in front of the speakers (not too close, though), and if you do it right, you'll hear that flangey, comb-filtered sound being produced like in the video.)
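If you want to see where those notches come from, here's a minimal Python sketch of a delay-and-add comb filter, the same mechanism at work when a reflection mixes with the direct sound. The sample rate and 1 ms delay are arbitrary illustrative choices:

```python
import math

# Minimal sketch of a delay-and-add comb filter: a signal mixed with a
# delayed copy of itself, like a reflection arriving after the direct
# sound. Sample rate and delay are arbitrary illustrative values.
FS = 48000      # sample rate, Hz
DELAY = 48      # delay in samples (48 samples at 48 kHz = 1 ms)

def comb_gain(freq_hz):
    """Magnitude response of y[n] = x[n] + x[n - DELAY] at freq_hz.
    Peaks (gain 2) and deep notches (gain ~0) alternate evenly across
    the spectrum, which is where the 'comb' shape comes from."""
    w = 2 * math.pi * freq_hz / FS
    return abs(2 * math.cos(w * DELAY / 2))

# With a 1 ms delay: notches at 500, 1500, 2500 Hz; peaks at 0, 1000, 2000 Hz.
for f in (0, 500, 1000, 1500, 2000):
    print(f"{f:4d} Hz -> gain {comb_gain(f):.3f}")
```

Shorter delays spread the teeth of the comb farther apart, which is why tiny reflections off the pinna can carve notches way up in the treble.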

So why is all this changing of sound important? You'd think that all this mangling of what we hear should sound horrid. Well, in short, it's all helpful for us (we've all grown up listening like this, so it sounds natural). Our ears use this comb filtering effect to tell us if a sound is above or below us.

So, combining all this acoustic stuff, the head and ears give us our 3-D perception of sound. (Now, try snapping or clapping in front, behind, above, and below your head). That head used for a binaural recording does the same thing: it’s like a real human listening.

But as you may recall, it's not all acoustics, because the second ingredient in these things is two microphones. It's pretty easy to figure out where the microphones are located: where our eardrums normally would be. This makes sure that the microphones pick up all the sound and all those acoustic effects. However, the microphones themselves need to be flat, because you don't want them to boost or cut anything, as this could ruin the effect if it's severe enough. The mics also need to be omnidirectional, meaning they record sound evenly in all directions.

To sum this up, put everything together, and you’ll get a binaural recording because the head is shaping the sound exactly like our heads do. This captures all the effects that make 3-D sound.

OK, great, you've recorded something. How do you play it back? Well, the best way to hear a binaural recording is through some headphones or ear buds. If you attempt to listen to it through speakers, the illusion will be destroyed, because you'll hear the sound from both speakers entering each ear, and that's not how the sound needs to be played back. You see, with headphones, the left channel only enters the left ear, and the right channel only enters the right ear, just as, naturally, our right ear doesn't hear what the left ear hears and vice versa. (Theoretically, you could do this with speakers, so long as the right speaker is heard only by the right ear and the left speaker only by the left ear, but that could be expensive and difficult. This is actually a thing called a stereo dipole, a type of speaker placement that removes crosstalk pretty well. Look it up for more info.)

Finally, here are some examples of binaural recording.

The first one is a live music performance. First, it's in mono for about a minute for comparison purposes, then it goes binaural. Second, I would recommend closing your eyes and picturing the image it creates, and don't forget to wear some headphones:

Here’s a virtual haircut using binaural recording. Close your eyes:

Try this too:

Beautiful, yes? I'm sure some of you are asking: Can I try this at home? Well, technically, yes, if you have the money. Binaural recording doesn't need some special, magic environment to work, so it'll work anywhere. The first option, if you want the best sound, is the KU-100, but $8000 will be needed, and I'm going to assume you don't have that kind of money for a fake head. Fear not, for there are alternatives.

The second option is using microphone ear buds, like the Roland CS10EM. These are much more affordable. They are ear buds that you stick in your ears for playback, but each also has a mic built in. Do they work as well as the dummy head? No, as the acoustical effects the head and ears create are partially removed, because the mics sit on the outside of the ear piece.

I'm also sure that the Neumann head and other similar expensive products can be rented.

But what do I think about all this? It's awesome, in short. I could easily see this being used for movies/video, virtual reality video games, and live performances. And seeing as you only need headphones or ear buds to make it work, the possibility of it breaking heavily into the market is high. However, when it comes to using this for most popular music, which relies on multitrack recording, it's possible, but tricky. The head would need to remain in the same place for consistency, and adding any close mics, to drums for example, could ruin the image it creates. Modern production techniques will have to change somewhere for it to work, but who knows what the future will bring. Let's see where it goes.

Gould in this version is speedy and energetic, and trills and ornaments abound. His fingers are light, giving the piece a floating characteristic. It's like the music is a warm ice that the listener blissfully slides on, where each note is sewn right onto the next. In fact, each hand sounds like it's almost robotically tied to the other, giving a perfect precision. It's reminiscent of Baroque era music, feeling-wise, that is (and because it is). You'd be right to say it sounds like the spirit of a young man played this. A very brilliant one at that.

Glenn Gould: 1981:

In comparison to his other performance, 26 years really changed things. Gould has a gentler, but more sensitive, touch for the keyboard. His fingers are no longer just grazing along with jaw-dropping accuracy. However, he still plays amazingly. The fancy ornaments that decorated his music like a Christmas tree are now used sparingly. His left hand also sounds like it's gotten heavier and really drives into the keys in some parts. He's slowed down as well. The notes flow, but they don't tie so tightly into one another. He grew, and so did his style of playing, tamed by the years. Gould also sings more.

Joao Carlos Martins: 1981:

Martins most certainly performs this at a slower tempo, but it works. He's also flirting with rubato throughout the song. The effect is minor, yet it adds so much that Gould's didn't have. The keys are also being struck even heavier. However, he doesn't stay slow and rubato-laden. Things quickly pick up, and it sounds like Gould with more force behind each hit. I think that's the biggest difference: he's really beating the keyboard. This isn't bad, just different. In fact, funnily enough, the dynamics remind me of those of a harpsichord, which is probably the instrument that would have been played during Bach's time instead of the piano (because the modern piano didn't exist yet).

Peter Serkin: 1965:

This one sounds like a mix between Martins's and Gould's playing. Serkin is fairly slow, but uses more ornaments than Martins. There's also less rubato. He strikes the keys gently, but with enough force if need be. It sounds very controlled, yet loose enough to not let the piece lose its light charm. This is also heard (or not so much) in his left hand's soft touch, much like Gould's: it doesn't command a presence most of the time. However, it does come up and play right alongside the right hand. Images of a mother playing this to her child congregate in my mind's eye.

But what's my favorite of the bunch? I would have to go with Gould's 1981 rendition. It just sounds so peaceful and relaxed, unlike his blissfully lackadaisical 1955 performance. I'm also not a huge fan of the light-natured feel of Baroque music, and this one sounds like it was interpreted more so by a Romantic era composer. His light singing does, for me, add something that I didn't think I would like. But, all in all, when I listen, this song speaks with the words of a man playing what brings him a serene happiness, and that resonates with me.

The first thing I notice is that Boulez's interpretation is overall a bit faster and more energetic. Solti's fares somewhat slower, but larger in feel. Boulez's is also a half step higher (at least for the first part). Solti's version also has the intensity of the piece enter sooner with the chugging cellos, but when the madness ascends in Boulez's piece, the chugging is more in the upper range. Another thing to note is that Solti's doesn't quite sound as harsh and in-your-face as Boulez's, which sounds like he has the strings more piercing and ready to attack. The horns share this characteristic. The huge drums on Solti's version sound lower, while Boulez's sound more in the mix of things. When all is said and done, though, both recordings sound similar, which is to be expected, as they're the same piece.

As for me, I'm not sure which one I prefer. It could be because of the medium through which I'm listening (headphones. I know, I'm sorry), but Boulez's version just sounds too shrill. Solti, on the other hand, conducted this piece more to the style, I think: it's darker, heavier, and sounds more epic. But who knows, really. Igor Stravinsky's dead.

When it comes to recording jazz, Rudy Van Gelder was at the top of the game. Born on November 2, 1924, this New Jerseyan would eventually engineer hundreds and hundreds of jazz records under his name, more than anyone in history.

Van Gelder’s interest in audio and music stretches back pretty far. At the age of 12, he ordered himself a “…home-recording device that came with a turntable and blank discs.” During high school, he was a trumpet player in the school band, and soon began operating a ham radio.

Yet, as fascinated with audio as he was, Van Gelder actually didn’t go straight into the world of recording, but instead attended the Pennsylvania College of Optometry. After graduating, though, he found himself back inside a radio station, and from there on out, he knew what his calling was.

With financing from his work as an optometrist, Van Gelder started out recording local artists onto aluminum lacquer-coated discs that were copied to 78 rpm records. He also used Neumann condenser microphones, which would eventually turn out to be extremely popular.

As for where Van Gelder recorded everything, he started out in his parents' home. More specifically, he recorded in a studio, complete with a control room and live room, that he and his father built with great enthusiasm from both parties. This is where he recorded many 78s before marrying his wife, Elva, and moving on to bigger projects.

Once he moved and hit the 1950s, Van Gelder discovered magnetic recording tape and was one of the first to jump on it. Instead of just one and only one take of an artist's work, this new technology allowed him to rerecord takes, overdub more parts, and cut/edit effectively. He says that "With tape, I was able to move closer to my vision." When Van Gelder recorded Gil Mellé's New Faces, New Sounds in 1953, his recording career began to take off.

The main studio Van Gelder did all his engineering in was located in Englewood Cliffs, New Jersey.

Analysis: Lee Morgan's The Sidewinder:

The Sidewinder was engineered by Rudy Van Gelder in 1963. It wasn't too long before it became quite popular and helped establish the soul-jazz genre.

The first thing I noticed is that the main cymbal that's keeping time is panned more toward the right, and it's also the loudest part of the drum kit. The quietest part of the music is the upright bass, which sits in the back. There's also a piano, which blends together with the snare in the background when a lead instrument is present. However, the piano does sprout to the front when it gets the lead. In fact, every instrument gets to lead and is put up front at some point throughout the album. The two horns, though, are usually the leads when played, and get put in front of everything else, each horn panned to one side of the stereo field. These themes are present everywhere in the album and allow each song to flow into the next (when that eventually happens). This all gives a very complete feel to The Sidewinder. Also, if you listen closely, you can hear people talking/singing every once in a while.

Normally, this wouldn’t be my first choice of music: if I listen to jazz, a calmer, more mellow style is what I prefer. But I like it. If I catch myself in the mood for it, I would definitely listen to it.

Here's the full album on YouTube. Not ideal, but at least it isn't horrible:

My class recently went down to the University of Washington district to check out an interesting studio with an awesome sound exhibit created by a man named Steve Peters. The room itself is kind of hidden, but that almost adds to it. Upon entering, you'll be greeted by eight sets of candles, two or so in each set, positioned atop eight speakers covered in a light cloth, which gift you with the only light in the room: an orange, peaceful, low-level glow. Four of these speakers are snug in the room's corners, and the others are placed against the walls. Your seating will simply be a couple of benches.

That’s all you need.

Place yourself in the middle if possible, and listen. Listen to the birds, the bells, and the songs of the people, and try to avoid throwing your head over your shoulder when a whisper creeps out, tempting as it may be. Use the rich stereo image to vicariously interweave yourself into the world it creates. Become the audience for the voices, as they are there for you.

I really enjoyed this exhibit. At first, I was wondering how long my attention would last, but I was greatly surprised by the loss of time: when my eyes closed, it captured me, and I was immersed. The whispers in particular, and ironically, stood out the most. Being so quiet, it was very difficult to tell that the sound was coming from speakers, and it genuinely seemed like people were there with me, only to disappear if I attempted to look. In fact, when I did open my eyes after many minutes had passed, only to see my fellow students sitting so still, as if unconscious, it really emphasized the power of the experience, and now I want to create something with 8 channels of surround. For those of you in the area, I would definitely try to get in for a listen. Here's the website: http://www.jackstraw.org/

As cool as it was, he's not the only one doing this. Trimpin is another local Seattle sound artist who creates some interesting projects. He was actually born in Germany, but managed to find his way to the United States in 1979. Also, according to the biography page on his website, www.trimpinmovie.com, he does not allow his recordings to be "commercially released" or to be represented by a gallery or dealer. He does, though, have some previews there: http://www.trimpinmovie.com/#/selectedworks/

In this post, I analyze Nine Inch Nails's (abbreviated NIN) Head Like A Hole, off their album Pretty Hate Machine (1989) [within NIN's "Halo" chronology, it lies within Halo 2]. The song was produced by Trent Reznor, the only true member and songwriter of NIN, and Flood. It was written in 1988, and the album was recorded in various studios, so I can't pinpoint the exact location. In a 2002 interview, Reznor says that he used a Commodore 64, running all the MIDI and sequencing through a Mac Plus. It was, by far, Pretty Hate Machine's most successful song, climbing the Billboard charts at the time of its release.

The piece starts out with percussion, mostly panned right, with a staccato-like distorted blurb popping in the center after a few seconds. A massive kick and snare soon follow, temporarily dominating the mix, accompanied by "singing children".

The verse introduces a multi-octave, square-wave-like bass line that repeats throughout the song; the kick/snare remain in the middle, but have been pulled back. Reznor's vocals also pop in here, with some reverb and delay on them. The second half of the verse also brings back the melody the children were singing, but much lower, both note-wise and in the mix.

The chorus quickly fires into action with raunchy, distorted guitars blazing through where the main bass once lived, which is now only a clean bass following the guitar. The vocals are also intensified with harsh screams, backed down a bit in the mix but cutting through with passion. The second part of the chorus (or post-chorus) has the vocals cleaner, with multiple takes layered together. An atmospheric synth is also included, predominately in the right channel. Another low-level, envelope-filtered synth glues it all together.

The second verse mirrors the first with the addition of some extra sounds bouncing left and right in the stereo field. The voice also sounds like it has a touch more reverb.

The second chorus and second post-chorus remain the same.

The first bridge of the song brings the return of a similar melody/sound from the first verse and the signature bassline, partnered with drums made up of a kick, snare, hi-hat of sorts, and various blurbs of percussion here and there. There's also a strange vocal-like synth panned somewhat to the right. Actual vocals pop back in halfway through.

The final chorus repeats.

The outro is similar to the post-chorus, with an extra, harsh background vocal for the first part, and the last few seconds of the song drop the drums, leaving a wordless vocal melody, "hue-hue"s, and the sound of the children from the intro.

One of the qualities of this song that really stuck out to me was the mastering. The kick, and particularly the snare, are very punchy, and help establish dynamics not heard in modern-day recordings. The choruses are also pumped up in volume more so than in current music. This is easily seen if you take a look at the waveform (shown in Pro Tools 11):

The clear spikes indicate varying levels of loudness. If mastered nowadays, this would be nearly a straight, flat rectangle.

Humans first started recording sound over a hundred years ago using a technique called acoustical recording. The earliest recorders utilizing this method worked with three basic components: a diaphragm, a needle, and a recording medium. When put together, the diaphragm moved in accordance with the sound wave, the needle moved with the diaphragm, and the sharp point traced a pattern onto paper, cylinders, etc. This pattern "held" the sound information. I say "held" because the quality was, of course, horrid, but it was a major first step in a new technology.

Thomas Edison and a man named Edouard-Leon Scott de Martinville were some of the most important individuals in recording's birth. While Edison is widely recognized as pioneering sound recording's major breakthrough, Edouard actually beat him to it by a little over a decade with his phonautograph.

Here is one of, if not the, earliest known recordings, made by Edouard-Leon Scott in 1860, where someone sings "Au Clair de la Lune":

(Alexander Graham Bell, known for the telephone, actually created a phonautograph as well shortly after Edouard, but his was a bit gruesome: it was essentially the same, but used an actual human ear and part of a skull. Yep.)

There were a few devices popping up to replace the phonautograph, but Thomas Edison's phonograph, patented on December 24, 1877, by far took the cake. This device was not only able to record, but had a separate needle for playback (playback, simply enough, worked in the complete opposite order from recording). Edison's phonograph recorded the acoustic patterns onto metal cylinders; however, the recording time was only about 2 minutes, and the cylinders degraded in quality after an unfortunately short amount of playback. The hi-tech descendants of phonographs are still used today.

Era of Tape –

While phonographs worked, a new recording medium was much needed. Enter magnetic tape. Tape as we know it was first invented in Germany by Fritz Pfleumer in the '20s. His tape was a paper of sorts with magnetic iron particles across the surface, which, when recorded to, aligned with the audio signal. This new technology allowed for easy edits, but the sound was still crap. Fortunately, the process of AC tape biasing was accidentally discovered (it's achieved by recording a very high, inaudible frequency along with the rest of the recording), and the audio suddenly and massively improved.

Here's a video going more into this:

This changed everything. For nearly half a century, this tape (further improved, of course, but basically the same nonetheless) was the number one medium to record to. The Beatles, Led Zeppelin, and some of the largest bands to ever exist used tape. Even today, if one can afford to do so, tape is still considered a high-quality recording medium. Yet, it has its limitations, as physical devices do. A new technology was needed.

Modern –

While immensely successful, producers and musicians wanted more than what tape could offer: the typical limit of 24 possible tracks, tedious editing, and other restrictions drove us in a new direction. That new format was digital, and those magic "1"s and "0"s. While a few digital techniques, like digital tape from the '80s, existed, the computer was the major player. Before, all sound was recorded, mixed, and distributed in analog, meaning it was never digitized, but computers offered digitization of sound and, more importantly, incredible and rapidly growing computing power. Nowadays, one can do virtually anything to sound with a few clicks of a mouse that would otherwise be near if not completely impossible in analog, and tape has been replaced by hard drives. This revolution also brought electronic music to its full potential.

So, while this seems like a gift from above, the same problem that plagued the earliest of recordings reared its ugly head once more: quality. Now, while digitally recorded audio may be many magnitudes better than what a phonautograph captured, it takes a lot of work and power to get digital to not sound digital, if that makes sense. The biggest worry is that digital cuts the audio up into thousands of samples a second, and that there's a limit, half the sample rate, on the highest frequency it can record. Being so complex, things get really confusing really fast, and can sound bad if done incorrectly.
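That sample-rate limit can be sketched in a few lines of Python. The sample rate here is the CD standard and the test tones are arbitrary; the point being demonstrated is just that anything above half the sample rate folds back down to a lower frequency:

```python
# Sketch of the sampling limit: a system sampling at FS can only
# represent frequencies up to FS/2 (the Nyquist frequency). Anything
# higher folds back down, i.e. "aliases", to a lower frequency.
FS = 44100  # samples per second (CD-quality sample rate)

def alias_frequency(freq_hz):
    """Frequency a pure tone appears at after being sampled at FS."""
    f = freq_hz % FS       # sampling can't distinguish shifts by whole FS
    return min(f, FS - f)  # ...and folds the rest into the 0..FS/2 range

print(alias_frequency(1000))    # safely below Nyquist: comes out unchanged
print(alias_frequency(30000))   # above Nyquist: folds down to 14100 Hz
```

This folding is why digital recorders filter out everything above the Nyquist frequency before sampling; otherwise, inaudible ultrasonics would reappear as audible junk.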

Here’s a video explaining more of the quality aspects behind digital audio:

I grew up on digital audio, right around the time when tape was starting to be replaced. Personally, I'm fine with what I have available: it works just fine and does so much more. However, I would still like to experiment with tape, but the whole process is way too expensive, so I'll just take what opportunities I can catch.

All in all, there are many ways to record sound, and things will continue to improve and change. All we can do is move along with it.