Thursday, August 27, 2009

Resonant Frequency

This week's Electric Semiquaver discusses the effect of resonance on our compositions, both in and out of a MIDI setup.

One of the big issues that I tend to notice when listening to MIDI playback is that sampled notes sound "dead" to my ears. For example, while a single stroke on an acoustic piano is capable of generating a "glorious montage of harmonics echoing through space," a single stroke on a MIDI piano is, unfortunately, not nearly as satisfying. True, new sound generators have gotten awfully close to mimicking the sound of a real, honest piano - and convolution reverb has done wonders in making that same piano sound like it sits in the largest of concert halls - but to my ears even these new virtual pianos still lack a certain resonant quality that a real piano has.
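For readers curious how convolution reverb places a dry sample "in a hall": the wet signal is simply the dry signal convolved with an impulse response of the room. The sketch below is a minimal illustration, using a synthetic decaying sine as a stand-in for a sampled piano stroke and exponentially decaying noise as a stand-in for a measured hall impulse response (real convolution reverbs use recorded IRs; all signals and levels here are hypothetical).

```python
import numpy as np
from scipy.signal import fftconvolve

sr = 44100  # sample rate in Hz

# Dry "piano" note: a 440 Hz sine with an exponential decay,
# a crude stand-in for a single sampled stroke.
t = np.linspace(0, 1.0, sr, endpoint=False)
dry = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)

# Synthetic impulse response: exponentially decaying noise,
# a crude stand-in for a recorded concert-hall response.
rng = np.random.default_rng(0)
ir = rng.standard_normal(sr // 2) * np.exp(-6 * np.linspace(0, 1, sr // 2))

# Convolution reverb: wet = dry convolved with the impulse response.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))  # normalize so the result does not clip
```

Note that the wet signal is longer than the dry one: the reverb tail continues after the note itself has ended, which is exactly the "space between the notes" that plain samples lack.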

I'll be the first to admit that our new virtual orchestras do a much better job of creating acoustic resonance than sound samplers from even as recently as five years ago. Back in the day, when all of my sounds were generated on an Alesis QS-6, I had to introduce a substantial amount of reverb simply to remind myself that all of these generated sounds did occur in a space, and that my compositions needed to reflect that (no pun intended!). It was a crude but effective trick that ensured that as I composed I was conscious of the "space between the notes."

The problem the computer composer needs to deal with, then, isn't that the notes necessarily sound bad (because they don't), nor that they lack resonance (of a type), but that even with all of these advances in technology the sounds still don't adequately represent the "space between the notes." Our setting - the resonant space where the ensemble is supposed to be represented - simply doesn't sound "right." I touched briefly upon this issue back in my post "Baby Got Playback" (June 2009), but I feel this is a point worth revisiting, if for no other reason than to press upon what problems occur as a result of this issue, and how we as composers can be conscious of it.

Without adequate representation of this resonance, we end up with digital silence in between our notes. Digital silence is an absolute absence of sound, something which is impossible in an acoustic setting, but occurs quite often in a computerized one. Normally, even in the quietest of rooms, faint hums, whirrs of fans, whispers, and other ambient sounds are present. Not in a computer.
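The difference is easy to see in the numbers: a rest rendered by a sequencer is literally a buffer of zeros, while a real room always has a faint noise floor. A small sketch, with a hypothetical room-tone level of -60 dBFS chosen only for illustration:

```python
import numpy as np

sr = 44100  # one second of audio at 44.1 kHz
rng = np.random.default_rng(1)

# "Digital silence": a rest in MIDI playback is literally all zeros.
digital_rest = np.zeros(sr)

# A real room is never at zero; model its noise floor as gentle
# random noise at roughly -60 dBFS (a hypothetical level).
room_tone = rng.standard_normal(sr) * 10 ** (-60 / 20)

def rms_db(x):
    """RMS level in dBFS; -inf for true digital silence."""
    rms = np.sqrt(np.mean(x ** 2))
    return -np.inf if rms == 0 else 20 * np.log10(rms)
```

The digital rest measures negative infinity dBFS, something no concert hall can produce, while the room tone sits at an audible (if faint) level, which is why our ears register the computerized rest as unnaturally empty.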

The main problem with this digital silence is that the composer thus feels the need to "fill the void." Much like how most of us try to fill awkward silences with nonsensical conversation, the composer is compelled to try to fill up empty regions in the music with more and more "attack" points. That's right - not notes, but attacks. The difference is crucial. A note can be either long or short; however, the sustained long note often falls victim to the same problem that silences do: not enough resonance to fill the space. An attack, on the other hand (as taken from the synthesis terminology ADSR, or attack, decay, sustain, release), is the point at which a NEW note is introduced. These attacks, often in the form of 8th or 16th notes, are created in between both types of resonant gaps - the ones created by silence, and the ones created by sustained pitches. The end result is that we have new notes introduced constantly, without pause, often overwhelming the texture. Sometimes, this is a good thing. After all, I, as much as anyone, enjoy composing intense, pulsing 8th note rhythms that permeate the entire texture. However, we must be aware that we are doing this as an aesthetic choice, rather than simply trying to "fill the gaps."
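For those unfamiliar with the ADSR terminology borrowed above, it describes the amplitude envelope a synthesizer applies to each new note: the level ramps up during the attack, falls to the sustain level during the decay, holds there, and fades out during the release. A minimal sketch (the segment lengths and sustain level below are arbitrary, chosen only to make the shape visible):

```python
import numpy as np

def adsr(attack, decay, sustain, hold, release, sr=44100):
    """Build an ADSR amplitude envelope.

    attack, decay, hold, release are segment lengths in seconds;
    sustain is the held level between 0 and 1.
    """
    a = np.linspace(0, 1, int(attack * sr), endpoint=False)   # ramp up
    d = np.linspace(1, sustain, int(decay * sr), endpoint=False)  # fall to sustain
    s = np.full(int(hold * sr), sustain)                      # hold steady
    r = np.linspace(sustain, 0, int(release * sr))            # fade out
    return np.concatenate([a, d, s, r])

# Hypothetical values: 10 ms attack, 100 ms decay, 1 s hold, 500 ms release.
env = adsr(attack=0.01, decay=0.1, sustain=0.7, hold=1.0, release=0.5)
```

Each new "attack point" in the texture restarts an envelope like this from zero, which is precisely why a stream of constant 8th-note attacks papers over the resonant gaps: there is always a fresh ramp-up masking the empty space.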

The other common problem, as I mentioned in my previous entry, is that the composer will often increase the tempo of the composition to help fill these gaps. While this may increase the excitement potential of the piece (and sound great in the computer), it often leads to very muddy and jumbled live performance, particularly when combined with a large concert hall. I stumble over this issue myself, and constantly have to knock my tempos down to remind myself that they will sound fast enough on the concert stage.

Simply being aware that the space between the notes isn't accurate is the first step in learning how to deal with this issue. Experience with live performance helps too. Short of both of these, though, here are some other steps that a student composer can implement to help train the ear:

• Consciously add SPACE to your compositions. It is always better to err on the side of having a rest that is too long than one that is not long enough.

• Scan your pages for "white" notes. Even in sections of music that intentionally feature driving 8th and 16th note patterns, don't neglect sustained pitches and pedal tones. They will go far in holding your composition together, like glue.

• Work on composing slow music. Composing in MIDI is an ideal medium for fast compositions; slow compositions on the other hand often sound stilted and unsatisfactory in MIDI playback. Trust your own instincts.

• Remember that one note is often enough.

• As mentioned in the past, keep your tempos a notch below what you think sounds "fast enough." You'll find that your actual performance sounds more than fast enough.

As always, I'm eager to hear how those of you reading approach this issue, or if you really think it's as much of a problem as I do!

On a different note: instead of using this blog as a forum for my new residency with the Heretic Opera, I will instead be contributing to the Heretic Opera's blog. I will post here one more time when that officially begins.

2 comments:

This is a great post Ken!! I remember hearing you mention your ideas about resonance once in passing and I thought "That's something I want to hear more about." I was even thinking about that just this week. "I wonder if this is what Ken was referring to." It's great to have the whole thing laid out. Thanks!

I can vouch that I've fallen into the trap of marking tempos too fast. I have had an embarrassing number of performances crippled by bad tempo markings influenced by what sounds right in MIDI. Thanks for shedding some light on how to account for the difference between tempo in MIDI and tempo in a live performance.

About this Blog

Kenneth Froelich's Music

Who am I?

Kenneth currently lives in Fresno, CA with his wife Jennifer and daughter Katerina, where he is appointed as Assistant Professor in Music Composition at California State University, Fresno. His music has been performed by many world renowned performing ensembles, including the American Composers Orchestra, Duo46, Earplay, the Empyrean Ensemble, the California EAR Unit, the Jolles Duo, and the Indianapolis Symphony Orchestra, among others. Kenneth has received several national awards and recognitions for his compositions from ASCAP, the National Association of Composers/USA, Meet the Composer, the California Association of Professional Music Teachers (in conjunction with MTNA), the Percussive Arts Society, the New York Youth Symphony, the Society of Composers, Inc., IDEAS (Interactive Digital Environments Arts and Storytelling), and others. Works of his have been performed internationally in Germany, Italy, China, Chile, Argentina, Peru, and Finland. Kenneth’s percussion ensemble work “Accidental Migration” is available through C. Alan Publications. Additional music is available at JOMAR Press, or by contacting Kenneth directly at kfroelich@csufresno.edu.