JUMP CUT
A REVIEW OF CONTEMPORARY MEDIA

In teaching beginning- and advanced-level media production, I have made a habit of teaching sound production first, followed by instruction in image making. This audio-first approach began out of necessity: my department had limited video production equipment and a pressing need for more undergraduate production classes. With a handful of analog tape recorders and several outdated Pro Tools editing systems, I developed “Sound Production and Manipulation,” an advanced-level methods course that required students to produce narrative, experimental, and documentary audio projects without picture. From the beginning, students produced exciting and innovative projects and engaged well with a methodology and body of theory that was new to them. I have since changed my pedagogical approach and now begin video production courses at all levels with audio instruction, having applied this methodology to an introductory video production course (120 students in six lab sections), a graduate course in media methods, and a high school video program for girls (Chesler 2005).

With the audio-first method, students create projects with at least two tracks of sound and without picture. These projects are less expensive to produce, less clunky in pacing and technique, and generally avoid the clichés and narrative crutches typical of elementary production students' work. Audio-only projects require that students consider the vast potential of sound recording in terms of both method and product. Further, the audio-first classroom can function with limited, rudimentary technical equipment or with advanced sound editing software and recording studios. Students can record on cassettes in their closets or in Foley rooms directly into digital mixing sessions. Elementary makers "mix" in iMovie, while advanced students (those with some non-linear editing experience) work in Pro Tools. When students advance to image making after a sound-first approach (though many stay committed to working solely with sound), they pay much more attention to their mix and to the relationship between sound and image, employ multiple tracks of sound, and avoid the crutch of music. I wish to advocate for a sound-first approach and to share some of the practical and theoretical techniques for the audio-first classroom.

Theoretical underpinnings

Sound-only projects are not taught as radio production per se (though many students will lean toward the This American Life format). As a documentary and narrative filmmaker, I rely on film theory and film production methods as the basis for the audio-first approach, and texts for the course are drawn from the film theory literature. Rick Altman’s chapter “The Material Heterogeneity of Recorded Sound” in Sound Theory/Sound Practice serves as the primary source. In his overview of the "phenomenon called sound," Altman demonstrates the complexity, heterogeneity, and multiplicity of sound as event (16). Beginning media makers engage with "sound as event" by considering the distinctiveness of the location where a sound will be recorded as well as the distinctiveness of the location of reception. They must consider the type of microphone and its impact on the recording, the softness or hardness of the materials in their recording space, and their own ears and subjective listening at the reception stage. Altman’s terms “attack, sustain, and decay,” “sound envelope,” and “spatial signature” circle back throughout the quarter and provide a shared language that allows students to better understand sound and to speak of the sounds they imagine.

Michel Chion’s "three listening modes" in Audio-Vision provide another linguistic tool for describing the complexity of audition. In critique sessions, discuss when and where students use “causal listening” (listening for a sound’s source), “semantic listening” (finding literal meaning in a sound), and “reduced listening” (attending to the quality, content, or timbre of a sound itself). Even at the production stage, ask students to use and consider these modes. With the listening experience in mind, some students will prepare in pre-production to engage with the audience’s auditory choices and auditory process. For example, a producer might create a work that suggests one sound source but then reveals through the piece that the sound comes from another, unexpected source. When students move beyond listening for a sound’s source to considering what a sound evokes (reduced listening), they grasp the potential of sound production and design.

Foley artistry, in and of itself, relies on the fact that auditors will “incorrectly” read a sound in the causal listening mode, avoid the semantic listening mode, and engage in reduced listening. Practically, Foley instruction figures prominently in an audio-first course. The website http://www.filmsound.org contains many articles on film sound artistry, and on Foley technique and theory specifically. Stanley R. Alten, in Audio in Media, presents a useful list of techniques for manually producing sound effects: footsteps in snow, for example, can be produced by manipulating cornstarch; an arrow flying through the air, by whipping a willow branch (443-447). Theoretically, Foley technique and Foley reception connect Chion’s listening modes with a discussion of the semiotic conceptions of signifier and signified. In Foley, objects are manipulated to create sounds; the sound itself is a signifier. The objects that produced the sound, however, are not the signified. Typically, as mentioned above, they are objects used in place of the signified. Foley work succeeds when the pro-sonic event (akin to Metz’s pro-filmic event) is disavowed by reduced listening.

Now, consider Charles Sanders Peirce’s classification of signs with an ear toward sound: iconic signs (sound recordings), indexical signs (the dynamic relationship between the sound and the object that made it), and symbolic signs (words/language used to describe an object or sound).[1] These ideas inspired Foley Tour, by Justin Gardner. Gardner comedically produces a fictional Foley studio wherein employees eschew substitute objects (i.e., the cornstarch method, or iconic sign production) in favor of the real source (rooms filled with snow, or indexical sign production) as they record extraordinary sounds (dinosaurs and bombs!). Hilarity ensues as we hear a sound recordist attempting to position a lavalier microphone on the dinosaur itself, and tour guides advising visitors to “step behind the line” before the nuclear explosion is recorded. Play with signifier/signified relationships thrives in Optic Nerve Radio Hour, a project by graduate student Ryan Ellis. In this piece, Ellis ironically employs the sign system of 1940s radio plays. The scratchy record player, dramatic music, and commercial interludes suggesting zeal, intensity, and passion frame Ellis’ re-creation of the pages of Optic Nerve, a present-day comic book that celebrates the mundane.

Other graduate work has directly engaged popular ideas in sound theory. David Benin produced Ramona Quimby’s Partial Birth Abortion, age 14 [play sound file] to challenge the "first sense" approach to sound theory. Walter Murch, among others, writes of sound as the first sense, experienced in the womb:

“Throughout the second four-and-a-half months, Sound rules as solitary Queen of our senses: the close and liquid world of uterine darkness makes Sight and Smell impossible, Taste monochromatic, and Touch a dim and generalized hint of what is to come” (Murch 1994: i).

Transom.org presently offers a downloadable piece produced by Murch called "Womb Tone." Benin considers the ways in which sound theorists who speak of what the human hears in utero ignore the politically charged nature of these statements in relation to the abortion debate. This "sound as first sense" trope creates personhood out of a constructed and imagined auditory position. Benin’s piece reveals the absurdity of rendering a fetus’ subjectivity qua personhood sonically as he harshly "re-creates" the sound of an abortion from the fetus’ perspective. The sucking sound of the abortion engages semiotic indexicality: it refers both to the sound made by a vacuum and to the sound as heard by the woman getting the abortion (or by an imagined abortion practitioner).

This project captures one interesting theoretical bent in the sound classroom: corporeality and gender. The body and sound figure in writing by Mary Ann Doane, Kaja Silverman, Allen Weiss, Douglas Kahn, Sarah Kozloff, Amy Lawrence, Britta Sjogren, John Corbett, and Terri Kapsalis, and texts by these authors make their way into the classroom, particularly at the advanced level. We discuss the interaction of sound and the body phenomenologically and consider the construction of the body through audio. In entry-level projects, gender and voice become a focus as students consider how to mark a character as male or female. They must decide whether their speaker/narrator should have an accent or speak colloquially. Then these choices must be justified: why must this character be gendered, and what are the racial, ethnic, class, age, and disability considerations in choosing a specific narrator or character? Structuring readings around one specific element of interrogation unique to sound art enlivens the discussion over the course of the term. The body/gender is but one topic among many that might be highlighted.

Genre and form: moving through
narrative, experimental and documentary

Students are able to produce three short sound projects within the typical ten-week quarter. Project guidelines and strict requirements on technique and theme are necessary when introducing students to audio production. I ask students to produce a fictional work, a documentary, and an experimental audio project. Other sound projects for beginning students can include building a sound effects library or replacing the sound in a piece of visual media (strip the sound from a film clip and ask them to Foley new sounds). Narrative works generally lean toward traditional radio plays and audio theatre. Experimental pieces include sound installation and musique concrète. Documentary pieces vary in form, including the typical interview with narrator, but can also be observational, poetic, or reflexive.

A popular fiction project, “Technical Interruption,” requires students to create a place, put characters (human or otherwise) in that space, and interrupt their activities with some sort of technical device or happening. In fulfillment of this premise, students have created movie theatres wherein cell phones ring, dreams interrupted by alarm clocks, and boring evenings at home enlivened by surprise emails. An ambitious project, Alien Visit, depicts an alien who happens upon a home and plays with its contents. This group of undergraduate students recorded every sound themselves (no library effects were used). The sound of a spaceship landing and then taking off, for example, began as blowing through a straw into a coffee cup full of water.

Experimental techniques and forms are best realized through sound installation. Though I encourage students to create installations in unique spaces, students often use an existing installation opportunity because of limited turnaround time. With the support of the Stuart Collection at the University of California, San Diego, students may produce a project for exhibition in Terry Allen’s Trees (http://www.stuartcollection.com). Trees includes three metal tree sculptures situated in and around a eucalyptus grove. Two of these trees contain speakers connected to a CD player: one channel of a stereo CD goes to the "talking tree" and the other to the "music tree." The third tree stands silently in front of the main library. In the pre-production stage, students visit the site at different times of day and consider the limited frequency range and volume of these speakers.

One student, Toshiro Inugai, worked with musique concrète and carefully considered the potential of the site (Trees lies within a major corridor on campus). His piece on the ubiquity of cell phones and the inanity of cell phone conversations loops ring tones and repeats his voice. [play sound file] It playfully disrupts the experience of passersby, who may hear the piece and instinctively reach for their phones, only to have a conversation akin to his presentation.

As our program has a documentary emphasis, advanced students must work within the non-fiction genre for their final sound projects. Their sound pieces consider Bill Nichols’ modes of documentary and have been poetic (a collection of quotes by women artists), participatory (excerpts of conversations with people throughout California discussing their relationships to food), expository (narration guiding us through the horrific treatment of chickens at KFC), observational (the rumblings and clanks of a motorcycle shop), and performative (a subjective experience at the dentist’s office, wherein buzzing saws and drills recorded at a hardware store come to stand for medical devices).

Advanced projects may also straddle genres. When the course was taught at the graduate level, many pieces were open to reception and interpretation. Kinda Al-Fityani, a student with a hearing disability, produced an ontology of hearing. In Melissa and Kinda (2004) she presents a scene heard by Melissa with a full range of frequencies and then re-presents the scene as she, Kinda, hears it, with a very limited range of frequencies and at low volume.

Technical considerations

Sound stems

I use a film re-recording mixer’s approach to teach students the fundamentals of audio production. The sound stems that comprise a film mix (voice, music, sound effects, and ambience) become the foundation for planning sound-only projects. Silence stands as a fifth stem to encourage consideration of volume control and the possibilities of the absence of sound. This gives students four practical (and five theoretical) categories of sound to consider in their pieces.

Voice generally dominates student projects at the pre-production stage, as students imagine their pieces and find it easier to tell than to express. Usually, when students first design their projects, they envision one continuous track of voice-over wherein a narrator describes the action. As a result, there is a need to encourage a reconceptualization of voice. Voice can become musical or work as a sound effect. Vocal utterances such as sighs, cries, screams, breaths, and sniffs can replace words to convey emotion and intent. These methods of construction make the narrator unnecessary. (The requirements for students' very first sound-only project exclude narration entirely.)

Music poses a great pedagogical challenge. Though fair use allows a minimal amount of music in student work, I ask students to avoid pre-recorded, popular music altogether. Music functions as an unfortunate crutch for the elementary media producer. It easily conveys emotion and tone, and beginning media makers rely on it entirely, using the meaning of popular tunes to give their random shots and sloppy editing import. Whereas an excerpt from the soundtrack of Hitchcock’s Psycho might easily convey fear, here multiple layers of sound are required to produce anxiety. A hefty footstep, an exasperated breath, and a water drip that increases in pace, supported by a base layer of ambient sound from a reverberant street, combine to produce the desired treachery. Students also rely on music for pacing. We see this when student editors of audio/video projects lay down a popular song and cut to its beat without considering the movement and pace within a shot itself. Students editing for the first time find editing without pre-recorded music more difficult, but their learning curve in a ten-week course is steep. They learn instead to edit according to the development of the story and the genre of their piece. Tone must be built carefully and deliberately.
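The layered approach can even be sketched computationally. The following toy is purely illustrative (it is not an exercise from the course, and all names and values are invented): a quiet noise "ambience" bed is summed with a drip whose onsets accelerate, and the final mix is normalized so layering never clips.

```python
import math
import random

SR = 8000            # toy sample rate; keeps the sketch fast
DUR = 4.0            # piece length in seconds
N = int(SR * DUR)

def drip_onsets(duration, first_gap=0.8, accel=0.85):
    """Onset times for a drip whose pace increases: each gap is
    shorter than the last, so tension builds without any music."""
    t, gap, onsets = 0.0, first_gap, []
    while t < duration:
        onsets.append(t)
        t += gap
        gap *= accel
    return onsets

random.seed(0)
# Base layer: quiet noise standing in for street ambience (no sound holes).
mix = [random.uniform(-0.02, 0.02) for _ in range(N)]

# Layer in the accelerating drip: a short decaying 1.2 kHz blip per onset.
for onset in drip_onsets(DUR):
    start = int(onset * SR)
    for i in range(start, min(start + SR // 20, N)):
        env = math.exp(-40.0 * (i - start) / SR)
        mix[i] += 0.5 * env * math.sin(2 * math.pi * 1200 * (i - start) / SR)

# Normalize the summed mix so the layering never clips.
peak = max(abs(s) for s in mix)
if peak > 1.0:
    mix = [s / peak for s in mix]
```

The point of the sketch is structural: tension comes from the shrinking intervals and the constant bed, not from any borrowed musical meaning.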

Even without pre-recorded music, students may still work musically. Voices and sound effects take the place of traditional instrumentation and create rhythms. “Sound effects” typically refers to non-verbal sounds like bells ringing or doors closing, but may also include vocal utterances that are not linguistically based. Sound effects libraries contain many sounds that students will need to flesh out a story; however, these library sounds often lack the breadth of choices needed to express the emotion or tone required for a specific project. While a few library sounds are helpful for signifieds that are difficult to record (birds without any background sound, for example), even beginning-level makers can use their imaginations to create the heartbeat or train whistle that they imagine.

Ambience may consist of a single recording made with an omnidirectional microphone, or it may be built from layers of sound effect recordings. When constructing narratives in the traditional sense, ambient sound presents the greatest challenge to beginning-level mixers. All of the sounds in a piece should sit on some texture so that sound holes are avoided. Makers at a range of levels can find it difficult to control this base layer of sound in the mixing stage. When the ambient sound plays without other sound present, it must be unnoticeable. When it plays alongside other sounds in a piece, it must not be doubled, adding twice the volume of ambience or air to the texture. Pay attention to the moments when students fade into and out of room tone in their mixes, and encourage them to record many different varieties of tone so that they have something to play with in the editing room. Require in their first and second projects that a layer of sound be present at all times, and encourage them to use ambience as that base layer.
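One way to think about the doubling problem is as a ducking operation: attenuate the ambience bed while a spot effect, which carries its own recorded air, plays over it. This hypothetical sketch (names and sample values are invented for illustration) shows the idea:

```python
def lay_in(bed, effect, start, duck=0.5):
    """Place a spot effect over an ambience bed, ducking (attenuating)
    the bed while the effect plays so its own room tone is not doubled."""
    out = list(bed)
    for i, sample in enumerate(effect):
        j = start + i
        if j >= len(out):
            break
        out[j] = out[j] * duck + sample
    return out

bed = [0.1] * 100      # constant room tone (toy sample values)
door = [0.8] * 10      # a door slam that carries its own air
mixed = lay_in(bed, door, start=40)
# Before and after the slam the bed is untouched; underneath it, halved.
```

A real mix would crossfade rather than switch levels instantly, but the principle is the same: the base layer is shaped around the sounds that sit on it.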

Silence, like black in a film, may help to develop structure, establish pace, or produce a mood. Silence in a project can add much-needed space, or it may be used intentionally to generate a discordant listening experience. What comes to stand for silence differs with the genre of a given piece. Moments of silence in an experimental piece may be produced through the complete absence of sound, whereas in a narrative or documentary work, ambient sound often constitutes silence. From a mixing standpoint, silence should be produced with a sound, or some type of recording, even if it is nothing but a device's noise floor. The total absence of information typically represents an error in a beginning-level producer’s project, and absolute silence is rare or non-existent in film soundtracks and radio. In audio work, there is always some layer of sound present, and students must learn the difference between creating silence and having silent moments happen because of a sloppy mix.
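The distinction between built silence and a dropout can be shown numerically. In this small, hypothetical sketch, "silence" is rendered as a faint noise floor rather than as digital zero:

```python
import random

def silence(n_samples, floor=0.002, seed=1):
    """Render 'silence' as a faint noise floor rather than digital zero,
    so a quiet passage still reads as recorded space, not a dropout."""
    rng = random.Random(seed)
    return [rng.uniform(-floor, floor) for _ in range(n_samples)]

quiet = silence(1000)        # always some signal present
dropout = [0.0] * 1000       # total absence of information: an error
```

The `floor` value stands in for a recording device's noise floor; in practice students would use recorded room tone instead of generated noise.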

Scripting the sound piece

Once students have a project idea in mind, the spotting sheet smooths the move from the abstract to the practical. The spotting sheet is a visual guide for both recording and editing a project, composed of horizontal lines, or tracks, and boxes drawn within those tracks. Each box represents an individual sound that the producer must record. The spotting sheet helps the instructor understand and critique a project before it is made, and, like storyboards, spotting sheets expose potential problems in a student’s work. Finally, the spotting sheet serves as a blueprint for the look of the sound product in post-production. Whether one works in iMovie, Pro Tools, or Final Cut Pro, all digital non-linear sound editing interfaces are based on boxes, labeled with the sound itself, that can be moved around on tracks in the software. Encourage students to use the spotting sheet in the editing room and to think of it as a screen grab of their future edit.
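The spotting sheet maps naturally onto a simple data structure, which may help students who move between paper and a non-linear editor. This is a hypothetical sketch (the track and sound names are invented), not a tool used in the course:

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    """One box on the spotting sheet: a single sound to record."""
    sound: str
    start: float       # seconds into the piece
    duration: float

@dataclass
class Track:
    """One horizontal line on the sheet, e.g. a stem."""
    name: str
    boxes: list = field(default_factory=list)

# A toy sheet for a short narrative piece:
sheet = [
    Track("ambience", [Box("street tone", 0.0, 60.0)]),
    Track("FX", [Box("door slam", 12.0, 1.5), Box("footsteps", 14.0, 6.0)]),
    Track("voice", [Box("sigh", 21.0, 2.0)]),
]

def recording_list(tracks):
    """Flatten the sheet into the sounds the producer must record."""
    return [box.sound for track in tracks for box in track.boxes]
```

The nested tracks-and-boxes shape mirrors exactly what the student will see in the editing software, which is what makes the paper sheet work as a screen grab of the future edit.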

Working with gear

An audio-first approach is workable in many different contexts, and basic video production programs already have the necessary tools. Classroom instruction begins with recording techniques: microphone placement, avoidance of wind noise, avoidance of mic and cable noise, awareness of the noise floors of different devices, monitoring, slating takes, and use of sound logs. Students work with omnidirectional and directional microphones, and with condenser and dynamic microphones. As we have both analog and digital equipment available, I first introduce students to the warmth of analog recording (traditional cassette recorders) and then to the distinct character of digital recorders (DATs, MiniDiscs, or hard disk recorders). Each device presents its own problems: dropout vs. distortion, the necessity of the limiter in digital recording, and the differences between monitoring with a peak meter as compared to a VU meter.

When assigning projects, require students to avoid noise, to record additional sounds in a quiet space (like a closet or sound booth), and to record ambient sound (room tone). We then move to Pro Tools (or iMovie). In the mixing stage, students must control volume levels, avoid pops and dead space, and mask edits. Projects edited in iMovie generally use two tracks, and advanced Pro Tools projects employ between four and ten tracks of sound with auxiliary tracks and a master fader.[2] Having taught this course with Pro Tools 882/20 boxes and with mBoxes running Pro Tools LE, I recommend the mBox. At $450 per system, mBoxes come with software and hardware for sound editing/mixing and digitization and can be installed easily in a computer lab by technicians untrained in this specific hardware/software.

For technical instruction, I rely primarily on Stanley Alten’s Audio in Media. It is an expensive text, but select chapters provide the necessary information. I recommend the introductory chapters, in which he details how the ear works, how microphones work, and the frequency range of human hearing; he also defines concepts like amplitude and explains digital sampling and analog recording. His chapter on mixing works directly from a Pro Tools edit/mix environment, as does David Yewdall in The Practical Art of Motion Picture Sound.

After students have recorded and mixed the "right" way, encourage them to consider how editing and recording the "wrong" way functions. A fascinating project by Matt Test took me to task for my insistence on "proper" recording techniques. Room Tone presents a tour of a fictional Analog Tape Museum. As the documentarian tours the museum with his participant, the piece begins to decompose and distort, just as the analog tape stored in that space might. To make this piece, Test found inspiration in David Lynch's sound design. He also employed numerous "mistakes" in sound production, including pops, wind noise, dropout, and distortion.[3]

Listening to the final cut

Once a project is complete, listen to the piece in class and make time for critique. Whether the project is made in iMovie and finished on video, or made in Pro Tools and burned to CD, try to find a space where the stereo work can be appreciated. Dim the lights, ask students to listen carefully for content as well as technique, and discuss the piece briefly. Then play the project again. The unique experience of sitting quietly in a classroom, heads bowed, taking in a project without video, is a breath of fresh air in a world dominated by visuality. At first, students are jittery and awkward in this new environment. Some may look around, becoming distracted and tired easily (though the class draws many students with experience in music production or a greater appreciation for music in general, so their ears are often finely tuned). But over the following weeks of instruction and critique, you will find a new group of sound geeks emerging.

Distribution outlets for audio are more limited than for video; still, students have submitted projects to film festivals that call for sound work, radio outlets that play independently produced pieces, and websites that consider sound art. Students also find that they can distribute audio work to friends and family more readily and simply than video. Student makers burn audio CDs through iTunes and mail or pass them around to the various participants and interested parties on the cheap.

Challenges

There are a few challenges to note in the sound-first classroom. Though I mentioned this earlier, I wish to reemphasize that students should avoid music and narration. Students often rely on music to convey tone and on voice to convey meaning, which can undermine the basic theoretical elements of the class. Music and narration also limit technical and creative development.

It is important to have a variety of microphones at students' disposal so that they can hear the difference between directional and omnidirectional mics. Having the option of recording on analog and digital media is also useful as they develop their "ear" as makers. Though I recommend using iMovie, do so with the understanding that it was never intended as an audio mixing program. This software seems to be the lowest common denominator in video production programs, so it is incorporated here. The benefits of iMovie as a way to cheaply enable the audio-first classroom, however, greatly outweigh the program's drawbacks and limits as a mixing environment.

Professional opportunities in sound work are less visible than in video, but they exist: radio production, sound recording, sound mixing, and sound art all extend beyond this classroom. Finally, the biggest complaint from students, which I see as a positive side effect, is that they now hear everything. Where their living spaces used to be comfortable, they now recognize the noise and volume of the sounds around them. My own challenge, which I am still negotiating, is simplifying the approach. Once you start considering sound and realize how much relates to sound production and sound theory, the term is never long enough.

Notes

1. Though the application of sound to Peirce's signs is my own, the discussion of Peirce and semiotics follows the parsing of Peirce in Stam, Burgoyne, and Flitterman-Lewis.

2. Though iMovie displays only two tracks for audio, audio clips may be laid over other audio clips. Because this makes for a confusing edit, beginners should use only two or three pieces of audio at a time. To work with audio in iMovie, digitize recordings from the camera; these clips come in with image and sound. Cut and paste clips from the bin into the tracks below, extract the audio to unlock audio from picture, and finally delete the picture. Audio now exists in iMovie as a clip in and of itself, which can be moved around the track (but not, unfortunately, stored in the bin as audio only). Under preferences, one may choose to view clips as "audio waveforms."

3. Test’s piece is also noteworthy for its application of pitch shifting. Over the course of the interview, the documentarian’s voice shifts in pitch to "become" the voice of the participant. Test used this method to reveal how audiences unquestioningly trust the facts presented by documentarians. Test continued the ideas put forth in this project in a sound piece produced as an honors thesis in 2004.