Erik Aadahl Special: Reader Questions

So, this is the end of the Erik Aadahl Special. Hope you’ve enjoyed it. Here are Erik’s answers to the questions readers submitted via comments, email, Twitter and elsewhere. Thanks for participating!

Designing Sound Reader: Hi Erik, I see you’re using the 191 for most of your SFX gathering. Do you ever record in other stereo formats like XY or ORTF?

Erik Aadahl: Some people find the 191 Matrix box cumbersome, but unfortunately it’s needed to power the mic no matter if you shoot XY or MS because of its funky pin arrangement.

I never use the 191 in XY mode. I like the flexibility of shooting MS when I’m editing, to dial in a stereo spread that I like. When I record, I set my Sound Devices 722 to monitor MS-decoded over the headphones.

But if I want to smash up a mic I’ll use a more bulletproof XY mic like the AT825. For atmospheres, spaced pairs can give a nice wide image too. I haven’t shot ORTF (microphones angled 110 degrees, 17 cm apart) since film school but I do like the effect of it. 99% of the time I shoot MS.

DSR: Hello Erik. I just finished reading your interview. Thanks for all the answers, terrific stuff! I read that you studied at university and learned a lot there. I’m curious about the standing of a self-taught person (like me) in the film sound industry. Have you known anyone who learned sound design on their own? I worry about this a lot, and it would be great to hear your opinion on that kind of education.

EA: Yes I went to film school, but I have to say that most of what I learned was on-the-job. There’s no match for learning from a mentor and just going through the experience. A lot of what I know is from endless hours experimenting and working. The best education was starting in television, where I had to crank out an hour’s worth of editing every 5 days, switching from sci-fi to period dramas to animation from week to week. I learned more practical knowledge that way than in film school. But I still have tons to learn. The learning should never stop.

DSR: What was the special trick with the rack of plugins controlled by a theremin that Erik Aadahl discovered when he was working on Transformers: Revenge of the Fallen?

EA: I’ve been getting that question a lot. The most important thing I want to convey in all these sound design dialogues is this: it’s about the process, not necessarily the end goal. For me, the art of sound is not about reproducing work you like, but experimenting, improvising, challenging yourself and finding your own voice. That’s the fun of it!

I like to be open about how I make sounds, but the modified theremin is one of the few things that I’d like to keep secret. With it, we made signature sounds for Transformers ROTF that I’d like to keep exclusive to that universe. We’ll be evolving the technique even more in Transformers 3.

DSR: What percentage of the sounds you create are sort of “finished” in your mind before you even start working on them? In other words, when do you think it’s better to have a pretty detailed idea of the sound in advance, and when do you prefer to jump into the wild and go by intuition? Hope this question gets picked ; ) All the best and many thanks in advance!

EA: Great question. When I start working, I’m looking at a blank canvas that I can fill with any combination of colors. It could be minimalist, it could be complex. When I begin, the possibilities are infinite.

Sometimes I know exactly what I want it to sound like, and then all I have to do is reconstruct the sound in my head. I also like to make noises with my mouth when I work, and use that as a “sketch” for design.

But other times I don’t know what the best option is yet. I might try something and decide later that it could be better. In that case, I throw out my work and start over.

It’s good to have an improvisational attitude though, and not get stuck in any one way of doing things.

PS: I think you were looking for a way to decode MS footage within Soundminer. You can do that by typing “M/S” into the >>Channel Layout<< column. (works for playback AND transfer)

EA: You are absolutely correct! My associate P.K. Hooker adds that for the ability to adjust the stereo spread — beyond the default 1 to 1 decoding in Soundminer Pro — putting a VST like Waves Stereo Imager in the VST Rack is a great method. Thanks for the heads-up!
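For readers curious what’s happening under the hood when Soundminer (or any tool) decodes MS, it’s just sum and difference, with the gain on the side channel controlling the stereo spread. Here’s a minimal NumPy sketch of the math — an illustration only, not how any particular plugin implements it:

```python
import numpy as np

def ms_decode(mid, side, width=1.0):
    """Decode a mid-side pair to left/right.

    width = 0.0 collapses to mono (mid only);
    width = 1.0 is the standard 1:1 decode;
    width > 1.0 exaggerates the stereo spread.
    """
    left = mid + width * side
    right = mid - width * side
    return left, right

# With a silent side channel, the decode is mono: L and R are identical.
mid = np.array([0.5, -0.25, 0.1])
side = np.zeros(3)
l, r = ms_decode(mid, side)
```

Dialing the `width` parameter up or down after the fact is exactly the flexibility Erik describes as the reason he records MS rather than XY.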

DSR: Hi Erik. I’ve been following all the posts in your special. I really love all those great articles. We know a lot about your current job, your work on Transformers, etc. But what about your start? What were the first sounds you created? What were the best experiences you had in the early days?

EA: The first sounds I made were for music, when I was still a kid playing with MIDI. But once I got into television, because of the speed required for the short schedules, I almost exclusively used sound effects libraries. It was a really important learning experience and forced me to be clever and manipulate the “wrong” sound into the “right” one. The first design I did in a serious way was using an old Eventide harmonizer, hooked up to my KT-76 keyboard which I’ve had since I was 17 and still use to this day. It was for a PBS series called “The Shape of Life”, for an episode featuring dragonflies fluttering around. I didn’t have any good recordings of dragonflies, so I cheated with some design. I recorded wing flaps on the foley stage using a piece of stiff plastic clamped to a bicycle wheel spoke. I spun the wheel, letting the plastic slap rhythmically against different textures to make fast wing beats.

Then I ran those sounds through the harmonizer to make dopplers out of them. They became the sounds of little dragonflies flying past the camera. These days, with Waves and other tools, it’s much easier to make dopplers.
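A doppler, at its core, is a pitch glide driven by the source’s motion: the observed frequency is the emitted frequency times c/(c + v), where v is the radial speed away from the listener. This toy sketch computes the pitch-ratio curve for a straight-line fly-by — assumed geometry for illustration, nothing to do with the harmonizer or any plugin’s algorithm:

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def doppler_ratio(t, speed, closest_dist):
    """Pitch ratio heard as a source flies past the listener at t = 0.

    The source moves in a straight line at `speed` m/s, passing the
    listener at a minimum distance of `closest_dist` meters.
    """
    x = speed * t                     # position along the flight path
    dist = np.hypot(x, closest_dist)  # distance from source to listener
    v_radial = speed * x / dist       # negative approaching, positive receding
    return C / (C + v_radial)

# A 30 m/s fly-by, 2 m from the listener: pitch is high on approach,
# exactly 1.0 at the closest point, and low as it recedes.
t = np.linspace(-2, 2, 5)
ratios = doppler_ratio(t, speed=30.0, closest_dist=2.0)
```

Applying that ratio curve as a time-varying pitch shift (plus a volume and pan sweep) is essentially what dedicated doppler plugins automate.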

Another fun experience I had was for a Disney television movie that had soapbox derby races. I remember squeezing into one of those tiny carts and racing down a hill with zero control, headphones on and recording to DAT. The recording turned out terrible, with tons of wind noise and mic handling. I’d do it a little differently these days; a windsock and shock mount would be a good start.

DSR: I’m a big fan of Transformers, and as a sound geek I also love your work on the robots and all the sound effects there. Each robot has a lot of different sounds. I can detect some of the sources of those sounds, but others are really difficult for me. Could you tell me more about the sources for the robot transformations, or how you process certain kinds of sounds, etc.? Many thanks, Erik. Keep up the amazing work!

EA: Some of my favorite sounds are sounds where you can’t tell if they are synthetic or real. One of those is the sound of Frenzy hacking into Air Force One’s network. It sounds synthetic, but it’s actually a very squeaky hinge in my kitchen. I could swing the door open and closed and make all sorts of “SQEAAAAAAAAAAIIIIIIKKKKKK!!!!” noises that sounded to me like a modem spitting out shrieks of data. There’s a bunch of stuff around my house that badly needs WD-40, but it won’t get any until I’ve recorded the squeaks.

Some other sounds are totally processed. One I like is the weapons power-up for Blackout in the first movie, when he destroys the Soccent military base. You hear a whine that starts at a low pitch and rises and rises and rises until it turns into a laser blast-style gunshot. I made this exclusively with a signal generator, Waves SoundShifter and Altiverb. I took a tone, graphed a slow rising pitch bend from -6 semitones to 0, and at the peak quickly dropped the pitch to -12 semitones. I put some Altiverb on the peak of the pitch, so it could ring out to give a laser energy decay feeling. It sounds like a complicated series of sounds, but it’s actually as simple as it gets :)
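The pitch math behind that sweep is easy to play with yourself. Semitone offsets map to frequency ratios via 2^(n/12), and a tone with time-varying pitch is just the sine of the integrated frequency. Here’s a bare-bones sketch of the envelope Erik describes — the base frequency and durations are assumptions, and this obviously omits the SoundShifter and Altiverb stages:

```python
import numpy as np

SR = 48000
BASE_HZ = 440.0  # hypothetical base pitch of the generator tone

def semitones_to_ratio(n):
    # n semitones above/below the base = 2**(n/12) frequency ratio
    return 2.0 ** (n / 12.0)

def render_sweep(rise_sec=3.0, tail_sec=0.5):
    """Sine tone rising slowly from -6 st to 0, then dropping to -12 st."""
    rise = np.linspace(-6.0, 0.0, int(SR * rise_sec))
    drop = np.full(int(SR * tail_sec), -12.0)
    semis = np.concatenate([rise, drop])
    freq = BASE_HZ * semitones_to_ratio(semis)
    phase = 2 * np.pi * np.cumsum(freq) / SR  # integrate frequency over time
    return np.sin(phase)

tone = render_sweep()
```

A reverb with a long tail on the final samples would then supply the “laser energy decay” Erik gets from Altiverb.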

DSR: Two questions: 1) When it comes to processing audio, what plugins do you always head back to, and was there something specific you used to create that quintessential electronic vibration that really defines the Transformers feel? And 2) After finishing a film, having had the freedom to record such fantastic sounds (at the film budget’s expense), do you keep the sounds, or do they all remain the property of the studio that foots the bill? If you do get to keep them, have you ever thought about releasing/selling libraries?

EA: I commonly use the Waves bundle, Altiverb, GRM Tools and SoundToys. The Transformers signature electric vibration can be made in a bunch of ways depending on your tools. Rather than say exactly how I did it, and to avoid copies appearing everywhere, I encourage people to experiment and come up with their own methods.

Yes, I retain the mastered recordings and designs I’ve made over the years. Things put in a movie are the property of that movie, but sounds made on my own time, of which there are many, are my own. Maybe one day I’ll make some public.

DSR: Hey Erik. You’re a fantastic person. Thanks for sharing all your great knowledge. I’m just wondering: what is your favorite sound design technique? Is there a technique or a specific process/chain of effects you use a lot? If so, could you tell us what your favorite plugins or tools are to work with?

EA: My central design technique is recording different flavors of sounds specific to the movie, palettes of sounds I use to edit and design with. Before I record, I’ll often make a list of categories of sounds I want to collect.

If I know I’m doing a robot movie, I record anything that is appropriate for that: the sounds of every motor and servo I can get my hands on, for example; the sounds of energy would be useful too, so I’d record anything buzzing or groaning. That might involve recording things that you’d never associate with “energy”, like groaning metal doors or the rumble of a washing machine or my dog growling.

I guess I’m trying to say that it’s important to allow myself to think abstractly.

My most common tools are the simplest ones: the ProTools pitch tool, EQ and compressor. Waves is also a favorite tool. Like I mentioned, I use SoundShifter and Doppler a lot.

DSR: In following this site and the work of the sound designers featured, one thing has come to my attention: the lack of women. I have only ever seen Anne Scibelli mentioned. As a professional in the industry for many years (with some great titles under your belt!!!), what is your take on the lack of women working in the field?

EA: You are absolutely correct, women are under-represented in our field. I don’t really know why. A lot of the women I work with are dialogue/ADR supervisors and editors, or work in foley. Anne Scibelli is definitely a big inspiration.

On “Shrek Forever After”, I had the pleasure of working with my friend Anna Behlmer, who has mixed almost all of DreamWorks Animation’s films and has something like nine Oscar nominations under her belt. She’s definitely another woman to look up to.

DSR: I’m still wondering what was this special trick with the rack of plugins controlled by a Theremin you discovered on the last Transformer… Hope you’ll let people know what was the secret one day ;-)

EA: Thanks for the question. Please refer to question #3 ;)

DSR: First of all, thanks for the interview; it really opened my eyes. Now, I’m new to this. I want to know: what’s the best order to work on sounds for a motion picture? First dialogue, then ambiences/room tones, or should I start with the first layer of sounds? Any books you’d recommend for beginners?

EA: There’s an old adage: “Dialogue is king”. Because of this, dialogue is often mixed first. Re-recording mixers Andy Nelson and Anna Behlmer have a nice technique where Andy does a pass of final mixing on the dialogue first, then music, then Anna comes in and does her effects pass, balancing against the dialogue. Usually, you don’t want the audience to strain to hear the actor’s lines. When I work, I always refer to the dialogue track and balance against it.

A good book that will give you an overview of sound for film is Tomlinson Holman’s “Sound for Film and Television”:
http://designingsound.org/2010/03/new-book-sound-for-film-and-television-third-edition-by-tomlinson-holman/

Tom was one of my teachers in film school and the inventor of THX. He’s got some fantastic insights.

DSR: I thought I had the coolest sound design trick on the block with a few of those hematite magnetic balls to record. Such interesting sounds. Then I watched the SoundWorks video on Transformers 2, only to find that you had already taken this idea to the next level with the Reedman robot. Great work. How many of those magnets did you work with? And any tips on recording them? I’ve been trying some contact mics.
How about the processing afterward? The sounds seemed to cascade together as the robot formed. Mind-blowing secret techniques are welcome :)

EA: Hah! Cool you picked up on those magnets too. I worked with two different types of magnets: the round types you saw in the video, and some oval-shaped ones I found. The oval-shaped magnets gave more interesting “twirl” sounds. The round ones were sharper and buzzier.

I tried recording them on the first Transformers movie, but couldn’t really get it to sound right. The mic I was using wasn’t directional enough, and I wasn’t in a quiet enough recording environment.

The sounds of the close-up ball bearings zipping around in “Revenge of the Fallen” are completely unprocessed, believe it or not. That’s what I love about those magnets.

The sounds for when the balls combine were made by individually cutting each little “pop” a thousand times to make an exact clicking zipper pattern. A short delay creates an electric feel that weaves in and out of the zipper sounds.
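The “short delay” trick Erik mentions is essentially a comb-filter effect: mix a signal with a copy of itself delayed by a few milliseconds, with some feedback, and transients pick up a metallic, electric ring. Here’s a bare-bones feedback delay line in NumPy — the delay time, feedback and mix values are arbitrary illustrations, not Erik’s settings:

```python
import numpy as np

def short_delay(signal, sr=48000, delay_ms=4.0, feedback=0.5, mix=0.5):
    """Single feedback delay line, mixed back in with the dry signal."""
    d = int(sr * delay_ms / 1000.0)  # delay length in samples
    wet = np.zeros(len(signal))
    for i in range(len(signal)):
        delayed = wet[i - d] if i >= d else 0.0
        wet[i] = signal[i] + feedback * delayed
    return (1 - mix) * signal + mix * wet

# Feed it a single click: each echo comes back at half the level,
# turning one "pop" into a rapid electric stutter.
click = np.zeros(480)
click[0] = 1.0
ringing = short_delay(click)
```

Run over a dense pattern of edited clicks, the repeats land between the original pops, which gives the weaving in-and-out quality described above.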

DSR: Sometimes I don’t have the field recordings I want or need, and I’m unable to get them. When this happens to you, how do you work around it?

EA: If you don’t have the ability to record something fresh, you don’t have it in a library, and you don’t know anyone who has recorded it that you can borrow from, you need to get clever. This is the part of sound design I love more than anything.

Each and every sound is nothing more than a collection of frequencies that change over time. You can use these frequencies like paint, combining them to make new colors.

On Superman Returns, I needed the sound of a continent rising. This would have been impossible to record, and even then it probably wouldn’t have sounded very expressive anyway. So what I try to do is think about scale: what can I record on a small scale that, magnified, resembles what’s up on screen? For the rush and roar of water, I used waves on the beach. For the crunch of rocks, I made a steady rumble out of crunching rice cakes. These smaller sounds, when slowed down, become magnified and grow in scale.

I try to think of all sounds as being on a continuum of reality. Different sounds from tiny to huge are just on different scales. The same way a nucleus resembles a planet, a Ritz cracker snap resembles the Earth splitting open.

So if I don’t have a sound, I try to think: “what does this resemble?” … “What can I record that is similar to this thing?” In one movie I used the snap of a firecracker, pitched to 1/10 speed, as a distant explosion rolling out over a vast canyon.

If it’s something really specific you don’t have a way to record, like a 1941 Spitfire prop engine, then you might be screwed. If you can’t record it, find it from a friend, or use a library effect, then the best you can do is reproduce it as accurately as you can. The internet is a great tool to do some research; find out what a Spitfire sounds like so you can best match it.

DSR: Hello Erik, big fan of your work. I was curious: what are some of your favorite plug-ins for processing recorded sounds? I remember hearing in some interviews that you like the SoundToys plugs, among others. I’d also like to know what you use for all your pitch shifting (Serato Pitch ’n Time Pro, X-Form?). I’m assuming you do most if not all of your editing in Pro Tools.

EA: Yes, I do all my editing in ProTools. Above I mentioned some of my favorite tools. Most of the pitching I do is with the Soundminer search engine pitch function, the ProTools pitch tool, Waves SoundShifter and once in a while Pitch N Time. If I want to do a realtime performance, I use my keyboard pitch wheel triggering Native Instruments Kontakt.

With Soundminer, I sometimes slow things down to extremes. Recently, I took a recording I did of a river in Thailand, slowed it to 5% speed, and recorded it into ProTools via Rewire to make underwater ambiences.

Soundhack is also a great tool for more extreme pitching. On “I, Robot” I used it to cleanly slow hummingbird chirps -40 semitones to make robot motors.
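For the arithmetic behind those extreme shifts: with varispeed-style pitching, speed and pitch are linked, so a shift of n semitones corresponds to a playback-rate factor of 2^(n/12). That makes -40 semitones roughly a 10x slowdown, and 5% speed works out to about -52 semitones. A naive varispeed resampler is just linear interpolation — nowhere near the artifact-free quality of SoundHack or Pitch N Time, but a useful sketch of the underlying idea:

```python
import numpy as np

def semitones_to_rate(n):
    # n semitones of varispeed pitch shift = 2**(n/12) playback rate
    return 2.0 ** (n / 12.0)

def varispeed(signal, rate):
    """Naive varispeed: resample by linear interpolation.

    rate < 1.0 slows the sound down and lowers the pitch together,
    like slowing down a tape machine.
    """
    n_out = int(len(signal) / rate)
    positions = np.arange(n_out) * rate  # fractional read positions
    return np.interp(positions, np.arange(len(signal)), signal)

# -40 semitones: playback rate 2**(-40/12), roughly 0.099 (about 10x slower)
rate = semitones_to_rate(-40)
chirp = np.sin(np.linspace(0, 200 * np.pi, 4800))
slowed = varispeed(chirp, rate)
```

Because rate and pitch move together here, a hummingbird chirp slowed this way drops into motor territory exactly as described; phase-vocoder tools do the same job while smoothing out the interpolation artifacts.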

Thanks to everybody for the great response and all your excellent questions!