MoviesOnline sat down with Academy Award-winning sound designer Ben Burtt at the Los Angeles press day for his new film, “WALL-E.”

I could reassemble the Wall-E vocals and perform them with a light pen on a tablet. You could change pitch by moving the pen, or the pressure of the pen would sustain or stretch syllables or consonants, and you could get an additional level of performance that way, kind of like playing a musical instrument. But that process had artifacts in it, things that made it unlike human speech, glitches you might say, things you might throw away if you were trying to convince someone it was a human voice. That’s what we liked, that electronic alias thing that went along with it, because it helped create the illusion that the sound was coming from a voice box or some kind of circuit, depending on the character.
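Burtt's actual tools aren't public, but the two operations he describes, shifting pitch and sustaining or stretching syllables, can be caricatured in a few lines. This is a naive resampling sketch (illustrative only; real tools such as phase vocoders decouple pitch from duration, whereas plain resampling couples them, one of the "glitches" such processing produces):

```python
import numpy as np

def pitch_shift(samples: np.ndarray, ratio: float) -> np.ndarray:
    """Naive pitch shift by resampling: ratio > 1 raises the pitch
    (and shortens the clip), ratio < 1 lowers it."""
    n_out = int(len(samples) / ratio)
    positions = np.linspace(0, len(samples) - 1, n_out)  # where to read
    return np.interp(positions, np.arange(len(samples)), samples)

def stretch(samples: np.ndarray, factor: float) -> np.ndarray:
    """Naive time stretch; with plain resampling the pitch changes too,
    unlike a phase-vocoder stretch."""
    return pitch_shift(samples, 1.0 / factor)

sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # one second of A440
octave_up = pitch_shift(tone, 2.0)   # twice the frequency, half the length
sustained = stretch(tone, 1.5)       # held 1.5x longer
```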

When Wall-E is going fast, he needed something higher pitched and more energetic. Once again, I went back through my memory of things. I had recorded bi-planes a long time ago for Raiders of the Lost Ark. The old 1930s bi-planes have an inertia starter, a mechanical crank that cranks the engine up. You do it by hand and then clutch it – you connect it and it makes a wonderful whirring sound. So I thought, I want to get that and do more with it. I couldn’t bring a bi-plane into the studio, but on eBay I found an inertia starter, bought it, and brought it in. So we built these props for many things. You know, it’s a tradition in animation to have sound effects machines. This goes back to the earliest days of Disney cartoons -- like wind machines and blowing machines and things like that. We actually built several things so we could perform Wall-E sounds that way.

I also love the history of sound effects and there is a great opportunity working for Pixar and Disney because you’re in touch there with a legacy of sound effects creativity that goes back into the 1930s. They used to build all kinds of machines. There is a machine that does flying insects, there is a machine that does a talking clock spring. They’ve got an archive of these machines out there in Burbank and I love that and I look at what a sound effects man does and I love the table top props and things like that. It’s the style.

Oscar-winning Sound Designer Ben Burtt, who came to Pixar to do both sound and character voice work on WALL-E three years ago, discussed the challenges of animation with AWN.

His favorite WALL-E sequence is actually one of the quieter ones: "Of course, I love it when we do everything that's supported by sound effects, like when the probe ship lands and EVE comes out and WALL-E first sees her. But my favorite section is when he continues to follow her as she searches for life forms, because there is virtually no dialogue except for a few sounds from WALL-E. It not only has a lot of suspense but also a little romance, and very much the ambience is there with the music and sound effects for support. It's just a wonderful array of tones."

Burtt will continue working at Pixar but has no idea what his next project will be. "Obviously they won't be doing a robot film right away, but I guess JOHN CARTER OF MARS might [be discussed]. There are a lot of great ones to talk about."

Wednesday, June 25, 2008

Double Oscar-winning Hollywood sound designer Randy Thom received an Honorary Degree of Doctor of Music from Edinburgh University today. Mr Thom, who has worked on classic movies like Return of the Jedi, Apocalypse Now and Forrest Gump, won Oscars for his work on the space drama The Right Stuff in 1983 and the animated film The Incredibles in 2004, and is described as a pioneer of "sound design".

A University of Edinburgh spokesperson said: "Randy Thom is one of those special people who have succeeded in changing the way a whole art form is conceived. The University is very pleased to be recognizing the significant contribution that he has made."

Tuesday, June 24, 2008

Electroacoustic music is in a sense more democratic than pop music: You don't need a tremendously expensive studio to compose your music and no huge international record companies are likely to promote your works, so composers from around the world have more equal chances of being heard. Do you find interesting differences in the electroacoustic music composed in different parts of the world?

There are different ‘schools of thought’ producing different sounds, different approaches to time, and different philosophies of public performance. But these schools of thought extend across geographical boundaries. As you point out, we can hear music from all over the world on the Internet.

Ironically, now that more people have an equal chance at being heard, it can be harder to actually get noticed; people are so overloaded with distractions it’s difficult to get anyone’s attention, much less hold their attention for the period of time necessary to discuss and to build upon what has been said and done previously.

During the middle ages, an educated person might have owned one book and they really studied and memorized that book; now we have so many books that we might not even read all of them completely, much less memorize them; we might prefer to get the ideas from the author’s blog or online video instead. A similar thing is happening in music; it is easier to find breadth than depth. This is the perfect scenario for accelerated evolution during a period of rapid change in the environment.

Monday, June 23, 2008

To that end Mr. Stanton enlisted the man who created the grammar of the “Star Wars” robot R2D2, the veteran sound designer Ben Burtt. Mr. Stanton wrote a conventional script — “Hi, I’m Wall-E” — and Mr. Burtt essentially translated the dialogue into robot, something he calls “audio puppeteering.”

“If you take sounds from the real world, we have a subconscious association with them that gives credibility to an otherwise fantastic concept,” Mr. Burtt said in a telephone interview.

The result is a film where the sound is as significant as the visuals. One hears echoes of E.T.’s throat-singing (“E.T.” is another Burtt film), and when Wall-E moves, the sound comes from a hand-cranked, World War II Army generator that Mr. Burtt saw in a John Wayne movie, then found on eBay.

“We all thought about Charlie Chaplin and Buster Keaton,” Mr. Burtt said, “this energetic, sympathetic character who doesn’t say a whole lot. Most animation is very dialogue heavy. There’s dance, constant talking, punch lines. We used to wonder: How will we prepare the audience?”

Burtt created more than 2,500 sounds for the movie. By way of comparison, Burtt made between 800 and 1,000 sounds for the "Star Wars" films and 700 for an "Indiana Jones" movie. "It had to go deeper than we did with R2 because Wall-E carries the weight of the whole movie," Burtt says. "We needed to give him a large range of reactions - he's curious, surprised, desperate - but none of the sounds he makes are words, at least not as we understand them."

ANDREW STANTON: Yes. The one thing Ben Burtt couldn’t simulate himself was a female voice. If it needed to be neutral or male, it was easy for him to be the source of anything that had to have a human element or an inflection to it. But we wanted a very obviously feminine source, and fortunately Elissa Knight was one of our in-house Pixar players, for lack of a better term. Because we’re in San Francisco and we’re always rewriting our stuff every day, we don’t have access to actors that quickly, so we use people in-house to do stand-in vocal stuff, and she had been a stand-in for many movies and was a pretty decent actress. So I called her in to just do all the female stuff, and it worked so well that when Ben started effecting it, I said, “That is so good. I’m sorry, I’m not going to look for another actress and re-do all this. She’s great.” So, that’s why. And that’s frankly the methodology Pixar has had in all their movies. If you look back at our casting, it’s all over the map, whether we use A-list, B-list, or employees. What’s consistent, if you look at it, is: is that the best voice for the character? And that’s why we choose who we do.

DT: The GRM (Groupe de Recherches Musicales), the institution that I represent, which celebrates its 50th anniversary this year (2008), is an accident: it should not exist, and yet, unfortunately, it exists! Pierre Schaeffer, who created the GRM in 1958, said that it was fundamental to invent “institutions which were useless, but necessary”. I think that definition applies perfectly to the group I work with.

Unfortunately, my work extends well beyond pure music composition. I always say that one day I will stop everything and do only my own compositions. That day has not come; I am now 56 years old, and perhaps when I retire I will finally compose as much as I want. Still, I continue to make music regularly.

The GRM, a group of 13 people, actually sits inside a research and testing department of 60 people, which is itself part of the larger INA (Institut National de l'Audiovisuel), where 940 employees work. That gives a sense of how small our group is within the whole institution. INA, founded in 1975, deals with the audiovisual field rather than music, yet it gives us the necessary confidence, all the funds we need to develop our activities, and the missions we have to carry out. Many structures similar to ours that developed in the fifties in Europe and the United States have disappeared.

But this “historic mistake” called the GRM has strangely survived. Surviving is always a challenge, particularly in a world that does not easily understand us, a shifting world driven by economic interests, a world where the idea of creating a space for composers to build the sound and music of tomorrow is almost exceptional, I would say. We struggle for the uniqueness of our group and fight for this mission, which for us is historic. But our activity is not invented or decided by us alone; it is written down: we have contracts with radio broadcasters, some of them very important. We have an agreement with Radio France under which we must deliver 20 to 25 new music pieces, produce 10 to 16 concerts, and broadcast 50 to 80 hours of radio programmes per year. This is our base activity, and it is entirely positive: doing those three things is what we love. Radio France pays INA for this work, and we at the GRM carry it out.

Thirty years ago, radio did not want to deal with music that involves technology, so it asked the GRM to take on this activity and create radio programmes presenting music made with new technologies. On our side, we approached the challenge starting from the end: you need a radio programme? Then it would be interesting to have original music, so let’s build the recording studio to produce it. If we make original music, broadcasting it directly without it ever being heard in concert would not be credible, so we decided to give public concerts before going on air. It would be interesting for the composers working in the studio to have original technologies, so we started researching special tools to obtain the unique GRM sound. After all this, there is new music: and how do we handle all this music? So we began to investigate perception, listening and musical analysis, to understand the meanings of this music. CDs, books and finally the Internet followed, all from a small, distant beginning: the radio programme we had to produce. That is an up-to-date snapshot of the GRM today.

Saturday, June 14, 2008

This Festival, in its third year, explores the versatility of ISSUE Project Room's innovative house speaker system, designed by Stephan Moore. In the hands of these diverse performers and sound artists, this fifteen-channel installation of hemisphere loudspeakers radically changes the concert experience for both performer and audience.

Each of the hemispheres radiates sound in all directions, activating the acoustics of this unique concert space. Immersive sonic environments are generated, electronic sounds take on the characteristic intimacy of acoustic instruments, and location is liberated as a musical dimension.

OSCulator has been featured in issue 14 of MAKE magazine and has been described as "the ultimate tweakable gateway between your Wiimote and MIDI-controlled audio," in the words of Bill Byrne, the author of the article. The next version will bring a host of new features, among them direct support for the TUIO protocol used in the reacTIVision software (remember those illuminated cubes on the table that Björk uses live?). The new Presets feature in OSCulator is a way to store different routing configurations and change them on the fly, whether by sending an OSC message or by clicking in the new toolbar menu. With a single click, you can turn your Lemur into a sophisticated MIDI controller or a keyboard controller. The possibilities are infinite.
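OSCulator's routing rests on the OSC protocol; as a rough illustration of what actually travels over the wire (the address `/preset` here is hypothetical, not an address OSCulator defines), a minimal OSC message with one integer argument is just a few null-padded fields:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a multiple of 4 bytes."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: int) -> bytes:
    """Encode a minimal OSC message carrying a single int32 argument."""
    return (osc_pad(address.encode("ascii"))   # address pattern
            + osc_pad(b",i")                   # type tag string: one int32
            + struct.pack(">i", value))        # big-endian int32 payload

packet = osc_message("/preset", 1)
# Such a packet would typically be sent over UDP to the receiver's port.
```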

Wednesday, June 11, 2008

The manuscript of the original version of Metastaseis by composer and architect Iannis Xenakis now becomes music to listen to, in its first performance at the RAI in Turin, dedicated to the participants of the XXIII UIA World Congress of Architects at their gala evening. In addition to Metastaseis, the programme includes Shaar for string orchestra and two compositions by Edgar Varèse: Offrandes for soprano and chamber orchestra and Amériques for large orchestra.

Arturo Tamayo, conductor

Carole Louis, soprano

Iannis Xenakis, Shaar for large string orchestra

Edgar Varèse, Offrandes for soprano and chamber orchestra

Iannis Xenakis, Metastaseis (world premiere of the original version)

Tuesday, June 10, 2008

Thomas Ankersmit (b. 1979) is based in Berlin and Amsterdam; he plays saxophone, makes electronic music and creates installation pieces with sound, infrasound and modifications to the acoustic character of spaces. Influenced more by experimental and electroacoustic practices than by (free) jazz, Ankersmit focuses on exploring the timbral extremes of the saxophone. His electronic music is constructed out of swarms of electro-mechanical micro-events with an acute sense of detail and intensity, combining the delicate instability of analogue synthesizers with the precision of computer editing and multitracking.

He performed using an EMS synthesizer (a beautiful Synthi A to be precise), a laptop running Max/MSP and an alto saxophone. Such instrumentation could suggest a duality between the acoustic and the electronic, or the analog and the digital, leading to two different approaches. On the contrary, his 27-minute improvisation did not rely on such oppositions and instead integrated the various sound-making means into a single, unified practice. The improvisation began quietly with buzzes, glitches, hiss and static produced by the EMS synthesizer in conjunction with the laptop. For the first 15 minutes, constant small changes of volume, frequencies and types of sounds generated a consistent sonorous space in constant activity and mutation. It then stabilized and progressively morphed into a Phill Niblock-like drone with multiple pitches. Such an association should not come as a surprise, considering the two have worked together extensively in recent years. This drone was suddenly replaced by uninterrupted pitches played on alto saxophone. While circular breathing, Ankersmit would modify the tones, complement them by humming along an octave lower, or modulate the sound by moving the saxophone’s bell close to a table top or agitating his hand next to the bell. The result was a fabulous drone with a very abrasive texture that fully exploited the acoustic behaviour of the room.

Wednesday, June 04, 2008

The Anechoic Chamber is a room which is acoustically like being high above the ground in the open air, because there are no reflections from the walls, floor or ceiling. This makes it ideal for testing the response of loudspeakers or microphones, because the room doesn’t affect the measurements. It is also the best place for virtual acoustics - generating auralisations of concert halls, city streets and other spaces. The anechoic chamber is immensely quiet, which makes it ideal for testing very quiet products, or for measuring how people hear very quiet sounds.
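Auralisation of a hall or street is commonly done by convolving a "dry" anechoic recording with an impulse response measured in the target space; here is a minimal sketch with toy numbers (illustrative only, not the chamber's actual workflow):

```python
def convolve(dry, ir):
    """Direct-form convolution: every tap of the impulse response
    contributes a scaled, delayed echo of the dry signal."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out

dry = [1.0, 0.0, 0.0, 0.0]   # a single click as the "dry" recording
ir = [0.8, 0.0, 0.3]         # toy room response: direct sound + one echo
wet = convolve(dry, ir)      # the click as it would sound in that "room"
```

In practice the impulse response has thousands of taps and the convolution is done with FFTs, but the principle is the same.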

This work implements a chain of recursive processes leaning on the acoustics of the host space. A web of purely sonic interaction is established, mediated by the room acoustics. A unique sonic narrative is outlined: (1) a "production plant" exploits the room as a kind of energy reservoir out of which raw sound materials are carved; this is accomplished with very low-frequency acoustical feedback making something in the room tremble or buzz; (2) this raw material is then "optimised" and "packaged" (computer-processed), and finally delivered to a "consumer" system (the loudspeaker system); the "wastes" of sound in the room (resonances in peculiar spots of the room, where no human ear can stand) are in turn collected and "recycled" by the production plant.
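Di Scipio's actual systems are far more refined, but the underlying idea, a feedback loop that regulates itself against the level it measures in the room, can be sketched as a toy simulation (all constants and the one-number "room" model here are invented for illustration):

```python
import math
import random

random.seed(0)

def rms(block):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(x * x for x in block) / len(block))

# Toy "room": the loudspeaker output returns to the microphone
# attenuated, plus a little ambient noise that seeds the loop.
ROOM_ATTENUATION = 0.5
TARGET = 0.1                 # desired level at the "microphone"
gain = 1.0
mic = [random.uniform(-1e-3, 1e-3) for _ in range(256)]

levels = []
for _ in range(200):
    speaker = [gain * x for x in mic]                      # amplify mic signal
    mic = [ROOM_ATTENUATION * x + random.uniform(-1e-3, 1e-3)
           for x in speaker]                               # room feeds it back
    level = rms(mic)
    levels.append(level)
    # Self-regulation: the gain slowly tracks the value that would pull
    # the room level toward TARGET, so the loop neither howls nor dies.
    gain = 0.8 * gain + 0.2 * min(TARGET / (level + 1e-9), 10.0)

# With this simple controller the level settles near
# TARGET * ROOM_ATTENUATION instead of running away.
```

The interesting behaviour, as in the piece described above, comes from the fact that the "room" itself (its attenuation, its noise) is part of the control loop, not just the controller.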

Agostino Di Scipio (Naples, 1962) is a composer, sound artist and theorist. His works often explore non-conventional approaches to the generation and transmission of sound, and have recently involved dynamical networks of live sonic interactions between performers, machines and environments. Some samples of his work are on his monograph CD "Hörbare Ökosysteme. Live-elektronische Kompositionen" (RZ Edition). Di Scipio works primarily in his own studio (L'Aquila, Italy), but has occasionally pursued his efforts as visiting composer at various institutions (Simon Fraser Univ. Vancouver, 1993; ZKM Karlsruhe, 2006; IMEB Bourges, 2003 and 2005). He was artist-in-residence of the DAAD Berlin (2004-05). He is Electronic Music professor at the Music Conservatory of Naples and was guest professor at CCMIX (Paris, 2001-2007). He has lectured at the University of Illinois (Urbana-Champaign), Johannes Gutenberg Universität (Mainz) and elsewhere. In Winter 2007-08 he served as Edgard Varèse Professor at TU Berlin. He is the author of essays and articles on the analysis and critical theory of music technologies, and the editor of many publications, including the Italian translations of "Heidegger, Hölderlin & John Cage" by M. Eldred (Semar, 2000) and "Universi del suono" by Iannis Xenakis (LIM/Ricordi, 2003).