Monday, August 27, 2012

I am happy to continue our series of interviews with Hamilton Sterling - sound designer, supervising sound editor, effects editor, and mixer who has worked on The Dark Knight, War of the Worlds, and Master and Commander: The Far Side of the World, as well as many independent films. He recently cut sound effects on MIB3 and The Host, and worked on Terrence Malick’s To the Wonder, and The Tree of Life. Hamilton was the supervising sound editor, sound designer, and re-recording mixer on Tomorrow You’re Gone by David Jacobson, and has edited sound on the films of P.T. Anderson, Christopher Guest, Andrew Dominik, and Steven Spielberg. To date he has worked on over seventy-nine feature films.

Recording a Demolition Derby (photo: Michael Dressel)

Matteo Milani: Thanks for your time Hamilton! First of all, could you tell me a bit about your education and musical background?

Hamilton Sterling: I come from a musical family. My mother and aunt sang four, five, and six part harmony by ear. They performed as the Silhouettes on KDKA radio in Pittsburgh, Pennsylvania in the 1930’s. Both were very encouraging of my early musical interests. My aunt, who worked at the local newspaper, often brought home fascinating music: György Ligeti, George Crumb, music from all over the world. I heard the soundtrack to Fellini Satyricon before I ever saw the film. My mother was a jazz fan, and when I was a boy, would take me to listen to local groups.

In high school I played electric bass in the jazz band and upright bass in orchestra. The musician’s union put together an all-star big band for high school students in which I played, and I also performed in the Allstate Orchestra, becoming principal bassist my senior year. I played my first jazz gig when I was sixteen years old, and entered Arizona State University on a four-year music scholarship, graduating with a BA in jazz and classical performance.

In terms of social context, I also benefited from the cold war belief that the arts held importance, and that America had to out-compete the former Soviet Union in creative endeavors. Music education in elementary school and high school was excellent and generously funded. Unfortunately, for the arts and young artists, those days are gone.

MM: Does your experience as a musician help you in your career in film sound?

HS: I think a sense of rhythm, melody, and harmony is essential in being able to make interesting sound for film. Recent studies show that human beings are wired for music. In studying jazz, the adage of “learn the music theory, then forget it” allows the right brain freedom to improvise from knowledge. I’ve come to feel that cutting and layering sounds is a slow-motion version of improvisation.

MM: How did you enter the movie industry?

HS: Alongside my musical activity, I became obsessed with films. Stanley Kubrick’s 2001: A Space Odyssey became the impulse for much of my early creative life. I saw the film for the first time when I was ten years old. It brought to me an interest in modern classical music, archeology, cosmology, astronomy, AI, cinematography, and special effects. It also appealed to my budding existentialism. That any one object of art could do so much was, to a young mind, amazing.

It may seem hard to believe, but public television at that time played films by Antonioni, Godard, Fellini, Losey, and Bergman, and the art house cinemas were still going strong. I began making short films during summer vacations, using the money I made playing music. When I came to Los Angeles in the early 1980s, sound seemed a natural fit. I began by editing documentaries, Warren Miller ski films, and industrial films, until I got my first sound effects editing work on Alan Rudolph’s Trouble in Mind.

MM: Choosing the right sound(s) to picture. An art form?

HS: Because sound editing began as a technical blue-collar job, many people had the impression that what we did was nothing special – “A monkey could do it” was the often-heard refrain. Here in America, we never developed a militant artistic union, an aesthetic of labor, if you like. Of course, because many of the corporate films being made today are empty of artistic merit, to say nothing of merited thought, it’s no wonder that art and labor are still dirty words. Choosing the right sound is an artistic, and surprisingly moral, endeavor. Thinking back on choosing sounds for films I did twenty-five years ago: if you had a scene in a rough cityscape, one might choose a black voice yelling in the street, the effect of that choice implying threat (at least to a certain part of the population). And that is a choice that can subtly further social injustice. I’m not saying that an artist should proscribe their work, but one has to be very conscious of one’s choices, because in mass entertainments, those choices may reach millions of people, and they have consequences.

HS: When I worked for Richard King, I did sound design and sound effects editing. Occasionally, I did sound effects recording. The frog rain in Magnolia was re-recorded against the reflections of a cliff in the Angeles National Forest. We set up two speakers facing away from the microphones, and slowly rotated the speakers toward the mics, giving the playback the effect of distant frogs falling toward us. Eric Potter and I also put a speaker in a pickup truck to record a playback of previously sampled frog elements. Eric put the truck in neutral and steered into the quiet, distant valley. The sound was incredibly bizarre, and I ruined the heads and tails of multiple takes by laughing. It’s always fun to get out into the world to record, and Richard loves to record.

Chris Flick did all the programming and cutting of the foley, and we conferred with him on elements that effects needed help with. As to the sound effects editing, Michael Mitchell and I did a lot of heavy lifting. Richard cuts sound effects as well, and in this day and age, I admire him for it.

MM: Would you like to explain your role when working with the other members of the sound editorial?

HS: When I’m not supervising, I try to communicate very specifically with the supervisor and my fellow editors: what the stage delivery requirements are, the predub breakdowns, whether there will even be predubs, and who is doing what in terms of special design. Hopefully, the supervisor has been able to spot the film with the director. If there is little in the way of specific information, I get a sense of the aesthetics of the director from what’s on the screen. If what you see is a pack of clichés, you know what to expect. If there are few clichés, you have reason for hope. At the end of the day, your work is only as good as the director’s vision, and their courage to take risks.

On the Cary Grant stage at Sony on Morning (photo: Leland Orser)

MM: What are the musical tools you use to boost your sound designing workflow?

HS: A number of years ago I purchased a Kyma sound design system from Symbolic Sound. It is very useful in producing unique sounds. It’s always inspiring. I also use my old Kurzweil K2000 as a midi controller, a Haken Continuum fingerboard, and a PC2R. In studio I use Millennia mic preamps. For field recording I use Schoeps mics and Sound Devices mixers as well as different contact, ribbon, and dynamic mics to gather my sounds. My bass is fitted with a midi pickup that I also use through an Axon to trigger my samples.

MM: Sound processing: can you give us a description of your studio gear?

HS: I use Pro Tools HD3, Genelec 5.1 speakers with a MultiMax monitor, and many plug-ins. For picture I project HD through a Decklink card to a nine-foot screen. Aside from Kyma, Haken Continuum Fingerboard, Kurzweil, and Axon, I have on occasion used Beat Detective in Pro Tools to place rhythmic structure onto multiple effects, and Melodyne to re-engineer animal vocals. I just started using Battery as a sampler. Altiverb, Pitch ‘N Time, Lowender, and GRM tools are staples.

MM: Can you reveal to us a "making of" of a very special sound effect(s) or a sound sequence?

HS: I’m very proud of the storm sequence in Master and Commander: The Far Side of the World. Making the scene dynamic, given the similarity of frequencies in both the water and the wind, was an interesting problem. I began by cataloguing the ocean sounds into frequency ranges from low to high, and catalogued the wind in a similar way.

At the time, Warner Bros. had terrible editorial rooms that had not been updated since the 1950s. Not only was there no surround system, there was no wall treatment of any kind. Anyone who has had to edit endless water and ocean effects knows that in a box-like room with hard surfaces, audio hallucinations in the white noise of the waves can produce boat engines that aren’t there and other weird effects. So I brought sound blankets from home and hammered them into the walls, scavenged an extra pair of speakers from an adjoining room, and built myself a primitive surround system.

I cut the sequence in 5.1 tracks that would mirror the mixing console, and internally panned and level-set everything. Because the water visual effects were actual layered shots of waves, they changed constantly. But the first time I cut the sequence, I really liked the rhythms I had found. As the sequence changed, I was determined to keep this poetic kind of rhythm, so instead of just cutting up the tracks in conforming them, I took the time to find new rhythms and create the sequence anew each time it changed. I was very tough-minded in this approach. Fortunately, I was given the time to do it – something that is unique to this day.

I then cut the hard effects in the same 5.1 style and processed the siren’s call of the wind in the rigging through Kyma. When the edit went to the stage in this form, the mixers worked on it for a couple of days, trying to tame it. Very kindly, they told me they had decided to put it all back the way I had originally laid it out, because it had a life to it that their smoothing of the rough edges took away. That’s the way the sequence was released.

MM: What do you regard as your most important credits in your career thus far?

HS: The Tree of Life and The Assassination of Jesse James by the Coward Robert Ford are my two favorite films. They’re closest to the feelings I felt as a young man introduced to the great European cinema: thoughtful, unsentimental, mysterious. They capture something of eternity.

MM: How did you get involved with the movie “The Tree of Life”? What kind of approach did you take on foley?

HS: I have known the sound supervisor, Craig Berkey, for many years. Erik Aadahl had hired me on Transformers: Revenge of the Fallen, and mentioned me to Craig. We fell back together again, which is the way of the film business. Andy Malcolm of Footsteps Studios walked the foley. Realism and proper perspective (using multiple mics) is very important. Because Mr. Malick often uses non-synchronous production takes, foley is used to ground the characters within the scene. It becomes another part of his palette. When it is absent, that too becomes a color.

MM: Can you describe how some of those sounds were accomplished?

HS: Andy originally used his house as a foley studio. It’s out in the middle of the Canadian wilderness – forty-five minutes from Toronto. The stairs really creak, he never sweeps his kitchen. It’s all real. (Just kidding about the kitchen.) Now he has a fabulous studio a stone’s throw from his house, so you get the best of both.

MM: How was the communication with the director and the rest of the team?

HS: I was only on stage for a few hours during our first temp mix. But I was struck by Mr. Malick’s graciousness. I truly admire his work, and have since I first saw Days of Heaven as a youth.

MM: To mix "in the box" in sound editorial before the final dub: what are its pros and cons?

HS: Unfortunately, schedules now seem to only allow the sound effects to be mixed in-the-box. Even on the most well-financed films, mixing in-the-box is now common. On Knight and Day (James Mangold), we pre-mixed all of the sound effects and ambiences into 5.1 groups and kept them virtual. We had a couple of weeks to adjust these pre-mixes on the mixing stage, as the console on the Cary Grant stage at Sony could mirror Pro Tools. But the number of tracks and the constant changes necessitated having to continually re-mix added elements in-the-box. I recently did some work on MIB3, and at least for the temp mix, the effects were pre-mixed in-the-box, then taken to the stage for final adjustment. Keeping a somewhat traditional separation of elements is helpful for conforming, as well as giving the sound effects mixer creative input. If you set your editing room up correctly, it can work out quite well. Of course, sound editors are not being paid as mixers, so there are ways in which this situation is financially disadvantageous. But it is creatively rewarding. For independent films, the future is here. With the track counts of the new Pro Tools HDX cards, traditional mixing facilities will have an increasingly difficult time staying afloat. Unfortunately, many fine mixers will as well.

MM: A networked environment: can you describe the importance of a client/server architecture in sound post production for a feature film?

HS: It’s great to have. On Knight and Day we were editing at another facility before we moved to Sony for the mix. Sony has a nice server system for moving your work to and from colleagues as well as mixing stages. Structuring the folder architecture on the server is extremely important. Knowing exactly what elements have come from the mixing stage, what needs to be updated, what needs to be mixed, may seem simple, but with multiple versions, competing creative interests, and huge amounts of data, organization and terminology is paramount.

MM: Sound effects editing for multichannel surround: what are your spatialization techniques?

HS: I edit for the 5.1 pre-mixes. When I do have to spread an effect, I’ll use a little delay, reverb, or the Waves PS22. I’ve recently begun using the Schoeps free DMS plug-in for three-channel field recording that decodes to 5.0 surround. I love the Schoeps plug-in. Now I record all my sounds on three channels and decode to 5.0. Even simple sounds, like a light switch, pick up the character of the room. It’s a fascinating way of creating a feeling, using these simple multi-channel sounds. If the simple sound creates an interesting space, I’ll work backward, and using Altiverb, try to get the rest of the sounds of the scene into that same environment. Of course if it doesn’t work, you still have the mono or MS stereo recording.
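The M/S family of techniques Sterling mentions rests on a simple sum-and-difference matrix. The sketch below shows only the basic two-channel mid/side decode as a minimal illustration – it is not the Schoeps DMS plug-in’s algorithm (double-M/S to 5.0 uses a second, rear-facing mid capsule and a more elaborate matrix), and the `width` parameter is a hypothetical addition for illustration.

```python
# Minimal sketch of basic mid/side (M/S) stereo decoding:
#   L = M + width * S
#   R = M - width * S
# NOT the Schoeps DMS plug-in's algorithm; `width` is a hypothetical
# parameter scaling the side (figure-8) signal to widen or narrow the image.

def ms_decode(mid, side, width=1.0):
    """Convert lists of mid/side samples to left/right sample lists."""
    left = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right

# With the side channel silent, the decode collapses to dual mono,
# which is why an M/S recording always folds down safely.
l, r = ms_decode([0.5, -0.25], [0.0, 0.0])
```

This fold-down property is the fallback he describes: if the surround decode doesn’t work for a scene, the mid channel alone is still a usable mono recording.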

MM: You made a film in the late ‘90s. Did you do your own sound?

HS: I supervised it and cut, but I had a number of wonderful friends from the sound editing and mixing worlds who helped me to complete it. Because of current events, I decided to prepare a new version of the film, Faith of Our Fathers, for Blu-ray and DVD, and re-construct the sound for 5.1. As I began to re-assemble all the elements, I realized that those of us who started in the business in the mid-1980s lived through a radical transition in our work. At the time, magnetic film was all there was. But by the time I shot Faith of Our Fathers in 1991, the digital world was just beginning.

Faith was originally shot on 16mm film with a 1:1.85 ground glass for theatrical blow-up to 35mm. All the dailies were 16mm. I had obtained, from a friend, one of the first Sony D10 Pro DAT recorders in the country. It was strictly grey-market. I thought I could mix and record the production sound to one channel and put a 60-cycle pilot tone on the other, so that when transferring to magnetic stock (both 16mm and later 35mm) the transfer machine (“dubber,” we called it) would stay in sync. So, over a long period of time, friends and I sunk and coded the 16mm dailies. I cut the picture, and when it was time to prepare the track for 35mm mixing in 1995, I used a 16mm-to-35mm synchronizer to phase the new 35mm dialogue to the 16mm worktrack.

The dialogue was cut on mag. The backgrounds were a combination of 24-track two-inch and DA-88s (the bane of all mixers at the time). And most of the sound effects were cut on an early version of Pro Tools, then transferred to DA-88. When I decided to do my 5.1 re-mix, I had 35mm mag to transfer to Pro Tools, DA-88s (I still have one of those boat anchors, and it works!), and DATs with the original production and music. When I think that the process involved 16mm mag, 35mm mag, Pro Tools, 24-track, DAT, and DA-88s, it becomes evident that the transition from analog to digital was quite messy.
The other shocking thing is that I was able to finance my film on a sound editor’s salary (which is the reason it took so long to complete).

MM: What are your thoughts on the boundaries between music and sound design?

HS: Having recently released Migration, I can tell you that creating a 5.1 programmatic musical soundscape is a wonderful artistic process. Combining a purely aural narrative with the abstraction of music and processed effects blurs the creative experience. I don’t mean just adding a sound effect to a music track, I mean creating the entire living thing as one artistic statement. There is a universe of possibility in the soundscape form, and because of my musical life, the addition of ambiences and effects to create emotion is a fulfillment of who I am. Other examples of soundscapes that I like can be found in the plays of Romeo Castellucci’s Socìetas Raffaello Sanzio and the Wooster Group, both of which I find inspiring. As to the boundaries between music and sound design in film, I would say they have been nearly erased. I just completed the film Tomorrow You’re Gone, with Michelle Monaghan and Stephen Dorff. Kyma was used extensively in creating a very musical soundscape in which to set the traditional effects.

MM: About your album releases: do you think that detectable technical processes are an integral aspect of the composition’s overall aesthetic? Is it important in this composition that the listener is aware of the technical processes?

HS: The album Rise and Fall is made mostly of live loop improvisations featuring fretless bass, acoustic bass guitar, and midi-following synths and effects. It grew out of musical feelings, a very simple midi-synth/live stereo mix chain, and the need to not multi-track or manipulate the live performance. In that respect, technical qualities like midi delay, or tracking anomalies from the Axon pitch controller, were of secondary importance to the spontaneous capture of the music. No meta statement should be implied from these technically primitive recordings, other than that they were all done with as little post-production as possible. As for Migration, the soundscape album I created with Grammy-winning musician Jimmy Haslip, that is a piece that was conceived and composed for surround. Its feelings and scope are purposely cinematic.

MM: What's the most important tip you've ever received regarding sound?

HS: Often on big films, the amount of audio ideas brought to the process can be overwhelming. Sitting on the mixing stage listening to Steven Spielberg, Michael Kahn, and John Williams discuss how best to tell the story of a scene on War of the Worlds, I was struck by their equanimity toward music and sound effects. For them, it is all about story. Their years together have created a language around this idea. What tells the story in a particular moment, and what elements do you have available to do that? An agreement on what the story is allowed them to know what to emphasize on the track. Other filmmakers see story differently, or dissect story as myth and power, and therefore take a very different approach. I love the sound of Jean-Luc Godard’s films because it is a featured element in the argument. Film Socialisme begins with a line-up tone that moves from speaker to speaker around a 5.1 mix. It introduces his dialectic between sound and picture within the contemporary structure of multi-track films. It’s brilliant and very funny.

MM: What is the most important topic you would want to talk about to make post sound better?

HS: Forcing USA corporations, either by massive tax penalties or heavy import tariffs, to hire the workers in their own country. Too many of our sound jobs are being outsourced. Germany has good unions, pays its workers well, and has an export rate second to none. The old saw that in-country labor produces products that can’t compete is obviously not true: Germany has a 7% trade surplus. The sad truth is that USA corporate profit is at an all-time high, CEO salaries are at an all-time high, and too many people are unemployed. Corporate contempt for basic decency is the primary problem at this moment in history. It will eventually change, one way or another.

MM: Do you have any advice for anyone who is interested in a career in the sound dept?

HS: With the technology of audio, music, and picture in an ever-increasing cascade toward the infinitely complex, having the time to learn the programs, plug-ins, hardware, software, picture formats, and optimal workflow process is itself becoming a full-time job. Having the mental space to discover why you want to do it, and what doing it even means, to you and to society, is something that young people should consider. This work used to be the wild west. Most of it, for the time being, now sits inside the corporate world. That is not a world that should be perpetuated. So then it becomes about making art, with no potentially viable means of making a living. Last quarter Migration streamed 2500 times and made 3 cents. So is this a risk you are willing to take? Do you see the world differently, and have something to say about it? If so – and now I’ll paraphrase Stanley Kubrick’s advice to young filmmakers – “Get a camera, as soon as you can, and start making films.”... or music, or soundscapes, or installation art. If you are meant to work on a handful of great films in your career, somehow, with luck, you will make it happen.

MM: Silence is mentioned a lot when discussing sound. What was your approach in its usage?

HS: As John Cage pointed out, silence is never truly silent. But one must be silent in order to listen.