Use an Aux to send all the desired channels to an internal bus. Create an Aux track and set the bus as its input. Put the reverb plug-in on the Aux track.

Benefits:

more efficient use of computer processing

set level of dry sound independently and then add reverb

easier to control the level of reverb on a large fader – no need to open any plug-ins to make changes

since several tracks are mixed before going through the reverb, they sound like they are in the same ‘room’.

In general, use no more than four reverb plug-ins per mix. These could be a short bright plate, a medium room, a longer hall, and a special-effect reverb.

This will help keep your mix from getting muddy and unclear.
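The efficiency point is easy to see in miniature. Treating reverb as convolution with an impulse response, this Python sketch (synthetic signals and made-up track names, purely illustrative) compares one reverb per track against a single reverb on a summed bus:

```python
import numpy as np

# Hypothetical dry tracks and a shared impulse response (IR).
# All names and values here are made up for illustration.
rng = np.random.default_rng(0)
vocal = rng.standard_normal(1000)   # dry track 1
snare = rng.standard_normal(1000)   # dry track 2
ir = rng.standard_normal(200) * np.exp(-np.arange(200) / 40.0)  # decaying "room"

# Insert approach: one reverb instance per track (two convolutions).
per_track = np.convolve(vocal, ir) + np.convolve(snare, ir)

# Aux/bus approach: sum the sends first, then run ONE reverb (one convolution).
bus = np.convolve(vocal + snare, ir)

# Convolution is linear, so the result is the same up to floating-point error.
print(np.allclose(per_track, bus))  # True
```

Because convolution is linear, both paths produce the same audio when the IR is identical – the aux simply gets there with one reverb instance instead of many, and it guarantees every send passes through the same ‘room’.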

Automation
Sat, 11 Jan 2014 – http://jbfaudio.com/blog/?p=106

Static mixes – where the faders sit at the same level throughout a song – will never sound polished or finished. The balances between song elements must be constantly adjusted to focus the listener.

Lead vocals are the place where micro adjustments can make a huge difference between a so-so mix and one where the voice sits right ‘in the pocket’ – never lost, but never out of the texture.

Ride those faders!

Using two main arrays
Sat, 11 Jan 2014 – http://jbfaudio.com/blog/?p=101

“I like some aspects of one array and different aspects of another. Can I combine two different main arrays?”

Don’t mix main arrays
In general, when using classical techniques (main array with spots), don’t combine main arrays – this causes comb filtering/phase issues because of the difference in time of arrivals to the two arrays.

If you want to combine, say, the warmth and spaciousness of an omni-based spaced AB array with the crispness of a cardioid-based XY, you need to consider them a single four-mic array and place the mics with that in mind.

(In that four-mic array, the cardioids in the XY would sit at a similar distance to – or even closer than – the AB pair, rather than 1.7 times farther away, as the distance factor might suggest.)
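The 1.7 figure comes from a mic's distance factor – the square root of its directivity factor Q. A quick sketch, using textbook Q values for ideal first-order patterns (these numbers are standard theory, not from the post itself):

```python
import math

def distance_factor(q: float) -> float:
    """How many times farther a mic with directivity factor q can sit,
    relative to an omni, for the same direct-to-reverberant ratio."""
    return math.sqrt(q)

# Directivity factors for ideal first-order polar patterns (textbook values).
patterns = {"omni": 1.0, "cardioid": 3.0, "hypercardioid": 4.0}

for name, q in patterns.items():
    print(f"{name}: {distance_factor(q):.2f}")
# cardioid comes out at about 1.73 -- the "1.7 times farther" figure above
```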

‘Reflections’ on horn recording
Fri, 02 Nov 2012 – http://jbfaudio.com/blog/?p=77

Bear with me – the pun in the title will become clear as you read this.

I have been working sporadically over the past few months on a recording with a trombonist. The project consists of works for duos – trombone and ‘something else’. Each pair was recorded in the same hall, but months apart. The hall is a small recital hall ideally suited for solo and chamber classical music, so it was a good fit. Up to this point, the other members of the duos have included marimba, trumpet, and soprano voice. Each of these has unique recording challenges, but they all have similar projection – generally forward.

One of the common challenges on a project like this one is to ensure that all the separate sessions come together to make a cohesive-sounding final product. In the hopes of having a consistent sound, I use the same main array (spaced pair of DPA 4006 omni condensers) for each session. The positioning was similar, but individually adjusted to optimize the sound for each duo.

Enter the horn. The French horn (though there is nothing French about it). This strange and wonderful beast has evolved from hunting and ceremonial horns made from – well, horns. Animal horns. One of the unique features of the horn is its projection. The bell of the horn points to the rear, on the player’s right side. In a typical concert hall, the horn player’s sound bounces off the back wall and reflects to the audience. That is how all of us (composers, players, engineers, and audience) are used to hearing a horn – never direct sound, always reflected.

Back to the recording at hand. I prepped the stage in anticipation of the musicians’ arrival. The trombonist and I had discussed the stage layout briefly via email before the session, so I had a general idea of where they should be. Trombone on the right (stage left) where he had been in all the previous sessions. Horn on the left (stage right). Both of them facing the center of the audience area. I set up the stage so that the performers would be far upstage – this would provide enough reflective wall behind the horn.

However, when the duo arrived, it was clear they had a different arrangement in mind. They had been practicing sideways, nearly facing each other on the stage. In this position, the bell of the horn did not aim at the back wall. Nor was there any side wall to speak of.

First judgment call – should I ask the players to rearrange to work with my plan, or work with what they were used to? I knew that making them conform to my setup would throw off their communication and potentially affect their performance.

Music comes first.

I’d rather have a less than stellar recording of a great musical performance than the other way around, so comfortable, communicating musicians are of prime importance.

Of course, I still need to provide a stellar recording, so now I had a new concern – would the horn sound appropriate?

As they made a quick test recording, my worst fears were realized. The trombone both sounded great and matched the sound of the pieces it was going to be coupled with. The horn, however, was another story. Much as I had worried, the horn sounded distant and diffuse. Without any surface behind it to reflect the sound, the horn might as well have been in another room.

I saw two options:

1. Leave the setup as is and rely on a horn spot mic to balance the sound.

I hated to do this, as I knew it had little hope of matching all the other selections on the project. The spot mic would have to be behind the horn and would pick up an unnatural sound.

2. Try a different arrangement of the musicians on stage.

This option was no better – I’d already considered and rejected it. Rearranging at this point could quickly send the session down an unproductive path.

A 3rd option came to mind. What I needed was a wall or other reflective surface behind the horn. I thought about rolling the grand piano behind the horn and raising the lid, but a piano sits too high off the floor. Then it came to me…a drum shield. I brought in a 5 panel plexiglass drum shield and placed it behind the horn. Time for another test recording.

Now that the musicians and main array were set, I placed the spot mics. These were two Royer R-122 active ribbons. The trombone spot went in front and to the side, with its null aimed at the horn. The horn spot sat next to the horn, aimed at the reflector, its null aimed at the horn so as not to pick up the direct horn sound.

All is not lost…
Sun, 08 Jul 2012 – http://jbfaudio.com/blog/?p=15

Every once in a while it happens. (More often, if you believe Mr. Murphy.)

A collection of mistakes, failures, and oversights that conspire to destroy the only recording of an event.

Here’s the story…

I sent a student engineer across town to record one of our school orchestras performing their spring concert. Simple stereo archive recording of a live event. Near-coincident pair above the conductor to a flash-based portable recorder (and a backup). What could possibly go wrong?

Well, everything.

First of all, the backup. While I’d love to have redundancy for the entire recording chain, in reality that is pretty difficult to do. So we do what we can. If the power goes out, the musicians can’t see, so the concert will probably stop right along with the equipment. If one of the mics, cables, or preamps goes bad, I can always use the one working side and create a pseudo-stereo recording with post-processing. (More on that in another post!) The one place I always insist on redundancy is in the recorder. Maybe this comes from my pre-digital roots, where tape was the weakest link. Maybe it comes from the fact that we have lost confidence monitoring in modern digital recorders. Or from my experience with various digital tape formats and the limitations of error correction. Or maybe from what I know of human nature and how easy it is to forget to hit record. Regardless of the reason, this is where redundancy is required, even if it is a stereo backup of a multitrack recording.

But not this day. My team happened to be recording 9 events at 5 different locations, so equipment was running low. This meant I couldn’t send a backup recorder. “Oh well,” I thought, “I haven’t had any reliability problems with this recorder since I purchased it, so it will be fine.”

Lesson one: always run a backup.

So the student sets up, gets levels, and begins recording several minutes before the concert starts. Everything is going smoothly until – in the middle of a performance – someone walks by and kicks out the power to the recorder. The student quickly replugged the power, but had to reboot the machine and restart recording. The phantom power (being supplied by the recorder) also had to return to powering the mics. All-in-all, about 9 seconds of the concert were missed.

All this could have been avoided if I had followed another rule of mine – keep charged batteries in the portable recorder. Battery-powered recorders (and bus-powered audio interfaces) add another layer of redundancy, since they will keep on going even if the musicians can’t see. But alas, I had failed to charge the batteries ahead of time.

Lesson two: always charge the batteries.

Lesson three: make sure all cables are trip proof.

I was certainly not pleased when I heard what had happened, but I prefer to look forward into solutions rather than backward into blame. Unfortunately, I was in for one more surprise…

A corrupted audio file.

When the power was pulled from the flash recorder, it was in the middle of writing to the SD card storage and never had a chance to close the audio files. No audio workstation or general audio software could recognize or open the file from the first part of the concert. 9 missing seconds is one thing – over 30 minutes is quite another!

Audacity to the rescue.

As I was beating my head against a wall trying to get something to open the file, I remembered a feature of Audacity – raw data import. Besides being able to import audio or MIDI data, open-source Audacity can look at and import the raw data from any file.

Since it only looks at the raw data (and not the header), Audacity has no idea what the data is supposed to be. In the case of an audio file, the unknowns include:

encoding format

mono or interleaved

sample rate

bit depth (bits per sample)

byte-order

When trying this recovery process, it is helpful to know as many of these things as possible. Standard pro audio formats (such as WAV, SDII, AIFF) will be PCM, either 16- or 24-bit. Some systems may use floating point, either 32- or 64-bit. Many of the other options refer to telecommunications formats.

Byte order will typically be little-endian; the overly curious and computer nerds in the audience can read up on endianness for the details.

I knew the file I was trying to resurrect was a 16-bit, 44.1kHz WAV file. Data is organized into 8-bit bytes, but with a headerless, corrupt audio file, Audacity does not know where the first sample begins. Since audio samples span multiple bytes (2 for 16-bit, 3 for 24-bit), there are several byte positions where a sample could start. This is the start offset setting in the import window, and finding the correct value is unfortunately a trial-and-error process for each corrupted file. The start offset will be either 0 or 1 for 16-bit files and 0, 1, or 2 for 24-bit files.

Try each setting starting with 0. The resulting imported audio will either sound like distorted garbage, or…

…it will be the audio you thought was lost forever!

Once you find the correct settings, simply save the audio as a new wave file (or whatever format you need).
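The offset hunt Audacity performs through its Import Raw Data dialog can be sketched in a few lines. This is a toy illustration with made-up bytes (not the actual concert file), assuming 16-bit little-endian PCM:

```python
import numpy as np

# Pretend recovery of headerless 16-bit little-endian PCM.
# The sample values and the stray leading byte are made up for illustration.
samples = np.array([100, -200, 300, -400], dtype="<i2")
raw = b"\x7f" + samples.tobytes()  # one junk byte precedes the audio data

def import_raw(data: bytes, offset: int) -> np.ndarray:
    """Interpret bytes as 16-bit little-endian PCM starting at `offset`."""
    usable = len(data) - offset
    usable -= usable % 2           # drop a trailing odd byte, if any
    return np.frombuffer(data, dtype="<i2", count=usable // 2, offset=offset)

# Trial and error, exactly as described above: try offset 0, then offset 1.
for offset in (0, 1):
    print(offset, import_raw(raw, offset))
# offset 0 decodes garbage; offset 1 recovers 100, -200, 300, -400
```

With the wrong offset every sample straddles two unrelated bytes, which is why the audio comes out as distorted garbage; the right offset lines the bytes back up into the original samples.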

Big problem solved – the audio from the first half of the concert was back. But what to do about the missing 9 seconds? We got lucky, as they were part of a musical repeat. The missing seconds were cloned into place. Admittedly not a true archive of the event, but all in all a satisfactory outcome.

Recording Sessions:

The University of South Carolina Wind Ensemble, under the direction of Dr. Scott Weiss, utilized Fall break (October 20 – 23, 2011) to record a program of Leonard Bernstein compositions for a CD to be released on Naxos classical music label. The ensemble of nearly 70 students performed on the Koger Center stage during two three-hour sessions each day.

A temporary control room was set up in the Koger Center green room. The ProTools HD system and Tascam DM-4800 digital console were moved from USC School of Music Studio C. The engineer (Jeff Francis) and Wind Ensemble graduate assistants monitored via Genelec 8020s while the producer (Paul Popiel) listened on headphones. In addition to talkback, snoop, and private telephone audio communications, a video camera and monitor allowed those in the control room to watch the conductor onstage.

Though the overall sound was primarily captured by a stereo pair of main microphones, a total of 26 microphones were used for the sessions. These included the main pair, flanking and ambient mics, and 20 spot mics on the various sections and individual instruments of the ensemble. A total of 487 takes were recorded – creating over 12,000 sound files totaling nearly 130 GB!

Post-production:

Scott Weiss, working from rough mixes of the takes and notes taken during the recording sessions, chose the takes to be used. Selections were marked directly on the score. After the initial round of edits, a handful of additional corrections were made and the mix was adjusted to bring out certain musical phrases and solos.

The CD will be released on the Naxos classical music label later this year.

Commissioned by the South Carolina Philharmonic, John Fitz Rogers’ Double Concerto was composed for the piano duo of Marina Lomazov and Joseph Rackers.

Recording a double piano concerto is a difficult task with many unique challenges. Since the pianos nestle together, there is very little physical room to place microphone stands. Also, the outer piano has the lid removed, so it projects much differently from the inner piano.

Trinity Episcopal Cathedral, Columbia, SC, is currently undergoing a bicentennial restoration project. Their sanctuary has been closed for the past 2 years for renovations. The Trinity Cathedral Friends of Music commissioned composer John Fitz Rogers to write a new work to commemorate the grand re-opening.