The names of the musicians who will perform the selected scores will be announced in March.

U.S.O. Project, composer Daniele Corsi and the musicians chosen to perform during the concert will form the reading panel which will evaluate the submitted works.

The selected works will be announced in September 2012.

For more information or clarifications, please contact us at our e-mail:

submissions at synesthesiarecordings dot com

(*) Clarification:

We can only accept patches made with Kyma or Max/MSP, because we are unable to provide financial support for the two selected composers. To avoid technical issues, we chose the platforms we already use in our daily work, with which we are very familiar.
If the selected composers can confirm their presence for the evening concert at their own expense, then any piece of software/hardware can be used to perform live electronics.
Thank you for your understanding.

Sunday, December 04, 2011

On Sunday, Sept. 6, 2009, George Lucas presented John Lasseter, Andrew Stanton, Brad Bird, Pete Docter and Lee Unkrich with the Golden Lion for Lifetime Achievement at the Venice Film Festival. This was the first time in the Festival’s history that the Lifetime Achievement award went not to an individual filmmaker, but to a team of filmmakers. The next day, the Festival hosted a Pixar Animation Master Class on storytelling: during this rare and exciting panel dedicated to the Pixar filmmaking process, the directors discussed the various aspects of bringing their stories to life.

Animation is the most collaborative of all art forms, and one of the things that is very special about Pixar is the collaboration; never has a studio been so collaborative.
At Pixar we make films to entertain our audiences, all across the world, both children and adults. We make movies for ourselves, the kind of movies we would like to watch. We all have kids, we all love movies and we all love animation and this is what we wanted to do from the beginning as filmmakers.
We (the creative Brain Trust) get together to help each director to make the movie the best it can be, and the director knows that. And so he takes the notes that make the movie better.
One of the things I remember most clearly from when I was a very new animator at Walt Disney is a great professional called Ollie Johnston, a fabulous animator. I was not a great drawing person; my students were a lot better than me. Once, when I was struggling with the drawing in a scene, he took my stack of papers and started flipping through it. I thought he was going to start talking about the drawing. But he turned to me and simply asked: “What is the character thinking?” That simple question from this guy hit me so strongly that it became kind of a foundation of everything I have done since that point.
When you look up “animation” in the dictionary, one of the definitions is “to give life to”. The thing that I have always loved about animation is creating life. In our films the animator and the animation are the act. An animator must animate a character so that every single movement appears driven by that character's thought process. That is when it becomes a thinking character.
You are moved by these characters, you believe in these characters. All the meticulous and hard work should be completely invisible. We wanted the audience to be involved in the scene and not to think that Nemo is a bunch of computer layouts. We do not want you to think about the many hours it took to create that scene. You are just carried away by the scene, and the focus in every step of the production that we do at Pixar is on the story. It is about entertaining the audience.
We are constantly changing and growing, trying to make things differently. One of the things I stress in animation, is that you have to show your stuff really early: every single morning each animator shows his stuff so we can be sure they are on the right track. I also get inspired by what the animators do without telling them to. In Pixar we have a room with a video camera and mirrors so they can actually try out the acting of the character.
Walt Disney once said: “for every laugh there should be a tear.” This is the foundation of every Pixar film. This is not about animation, it’s about making films. I say something to my sons: “I want you to choose a profession that you love, because if you do you’ll never work a day in your life.” And this is true for everybody: we work so hard, many long hours, but we so dearly love what we do.
Now we are going step by step through the entire process on how we make a movie, on how we develop stories and on how we can tell a story visually.

Andrew Stanton

At Pixar there are no politics: we are all employees, we have no agents, no creative executives, no deal-making. We are very similar to the old studio system, where the artists work under one group on multiple projects; I’d like to call it a film school without the teacher. It’s a filmmaker-driven studio; we invest in people. As John has previously described, we have a creative Brain Trust where the directors get together, sort of like doctors conferring on another doctor’s operation. And we also have in-house original ideas, based either on ideas that the directors had, or stories the directors like or want to invest in. We pick one idea and we hammer on it, again and again, until it finally is good enough to show other people.
First of all it is not just for kids: basically we make the movies we would like to see, we are film-goers first and filmmakers second.
We work hard on making movies as original as we can; in other words, we think like our own audience: what would I like to see if I were the audience, who wants to enjoy themselves as much as they did on the last film, but every time in a new way.
Being stupid is only possible in a creatively safe environment. So the truth is that we are not good at getting it right at Pixar, but we are really good at recognizing when we are getting it wrong and at fixing our mistakes. And I think that’s really where our specialty lies. And the valuable thing is that we discover something fresh from having made those mistakes. So we don’t try to avoid mistakes; we embrace the idea of making them.
The script is not the end of developing a story, it is the beginning of it: to me screenwriting is not writing, it is an intermediary form, it’s a way of passing all the ideas in your head to the screen. It’s also what I’d like to call cinematic dictation. You’re just basically notating what you have seen in your head, when you are trying to catch the visual aspect of something.
We just wanna tell 2 + 2 and let the audience decide what 4 is. And that’s really the way to construct anything: the way dialogue is done, the way actors act, the way scenes are put together. I think you can apply it to any aspect of film-making. It is all about audience participation. Movie-making is all about manipulation, but it is only truly successful if your audience has no idea that they’re being manipulated. In most cases 2 + 2 gives you a greater sum than 4.
How you tell the story becomes as important as the story itself. Even a great joke can be murdered by a bad joke teller. So the joke and the joke telling are equally important.
Personally I think you have to develop both characters and plot simultaneously. Plotting to me is your means of discovering the character. Then once you have found what that character is, you have to link them together, one begets the other.

Brad Bird

If we are going to talk about breaking the rules, we have to know what the rules are. And this is what storyboarding was traditionally used for, it was used to work out business for the characters, what the characters were doing.
I got my first opportunity to direct a feature film with The Iron Giant. I had a third of the money and a third of the time of our competitors’ animated films. And I was dealing with a studio unfamiliar with animation: this was Warner Brothers, not Disney. They had limited facility with visual imagination, so we had to find a quick and relatively cheap way of getting closer to what we were envisioning. We spent a larger percentage of our budget on storyboarding, because it was the cheapest stage at which to make changes. We couldn’t afford to do anything else, so we had to look at and know exactly what we were doing before we did it.
When I came to Pixar to do The Incredibles, we had a different sort of problem: to enlarge the scope without enlarging the resources. I had way more resources than I had for Iron Giant, but the scope of the film was so huge. The Incredibles was a fast-cutting film, moving from large location to large location with a lot of camera shots; the only way to wrap our heads around it was to figure it out early. The downside of that is that you’re kind of locked in. The upside is that you know much more about the rhythm of the film and the specific needs of each shot.
Film is alive, it is a medium, it is a dream medium and dreams are unpredictable. There is a difference between sleeping and dreaming. One is active and one is throwing yourself off balance. Don’t take anything for granted: it’s a way to keep your company, your movies and yourself alive.

Pete Docter

Everything you see on the screen has an emotional charge. Characters, locations and the props in there exist for very specific story reasons. This is the primary job of the production designer. The first question a production designer should ask himself is: “What is the story about emotionally?”
Designing for computer animation leaves no room for accidents: everything you see has to be intentionally designed, built, shaded and lit.
Film-making is discovery. We do not just start drawing, we do a lot of research. The sense of authenticity really helped us capture an emotion. We wanted to look authentic.
The primary job of lighting is to direct the eye. There are a whole lot of tricks to make sure people are looking where we want. The second job of lighting is emotion. John (Lasseter) says that light is much like music and can really affect you emotionally. We are all trying to get inside the head of the character, using light to represent how that character is feeling.
Lighting really works the same way it does in live action, so you have lights everywhere to get the final shot. Lighting is not left to chance, everything is planned for an impact. The quality of work is due to the fact that we are working with some of the best professionals in the world.
The ultimate goal of lighting is to make the audience feel something.

The Pixar exhibit that was originally displayed in 2005-2006 at the Museum of Modern Art in New York City has been touring the world. Now, it's running from November 23rd 2011 to February 14th, 2012 at PAC (Padiglione d'Arte Contemporanea), Milan (Italy).

Sonic Screens aims to render the endless possibilities of life and its surroundings experienceable in our conscious activity, trying to deal with the possible infinites of the listening experience, both in their objective and manufactured dimensions.
This journey related to the pure immersive listening will take advantage of Ambisonics sound diffusion practice, creating an immersive sound flow between different electroacoustic works by these selected international artists:

Milan/Rome based Matteo Milani and Federico Placidi (aka U.S.O. Project - Unidentified Sound Object) are sound artists whose work spans from digital music to electroacoustic improvisation. Unidentified Sound Object was born from the desire to discover new paths and non-linear narrative strategies in both the aural and visual domains. The project includes several collaborations with visual artists and performers. Milani and Placidi are the co-founders of the label "Synesthesia Recordings", a repository of electroacoustic works. U.S.O. Project is a continually evolving organism.

Franz Rosati is a sound and media artist who focuses his research on real-time A/V, Visual Music projects and installations, following an aesthetic idea based on the discontinuity of aural and visual patterns, avoiding any kind of repetition through the use of chaos mathematics and generative and stochastic processes. He uses his own custom-made software for real-time micro-montage and sound elaboration on the microscopic time scale to realize compositions and performances based on the constant metamorphosis of aural and visual matter. Over the years, Franz Rosati has played in a large number of electroacoustic projects, such as the Franco Ferguson improvisers’ collective, Meccanica Ferma, Solderwire and GRIDSHAPE, developing his own approach to electroacoustic improvisation. In 2007 he founded the Nephogram [contemporary documents] collective with Stefano Pala a.k.a. UKQWJB. He also teaches MaxMSP/Jitter for sound design, interactivity and multimedia, focusing on computer vision techniques, in several workshops and art/design institutes, and developed Interactive Examples for Electronic Music and Sound Design, a book about sound theory and practice in MaxMSP.

The first day of the seminar focused on post-production, while the second day concentrated on professional broadcasting and particularly the radical changes happening in production and distribution of broadcast, film and music as a consequence of new ITU and EBU standards.

The groundbreaking and comprehensive EBU R128 loudness recommendation was investigated from a multitude of angles, as was the just updated ITU-R BS.1770-2 broadcast standard.

TC Electronic documented the event and on this page they have gathered some of the footage from the seminar. First up are Florian Camerer and Richard van Everdingen from EBU’s PLOUD Group.

Florian provides a general overview of EBU’s R128 broadcast standard presented with great enthusiasm and a twist of humor. New videos will be available soon!

Monday, August 01, 2011

"Listening to the environment, contextualizing it objectively and creatively has always been a priority of the work of U.S.O. Project.

Free from any pseudo-environmental or socio-political implication, the continuous work of sampling, processing and transfiguring found sound, carefully preserved in the memory of a digital recorder, has always played a central role in our compositional practices.

U.S.O. defines Soundscape as the expressive and narrative richness that comes from the reciprocal and continuous interaction of multiple sound sources from the real world, and other phenomena which are perceptible and measurable only through proper and adequate transduction (electromagnetic signals, for example).

A Soundscape is also an opportunity for reflection and imagination that has little to share with the real world.

A Soundscape can be a place of the mind, a reminiscence of a future experienced in dreams, lands far away in space and time.

We try to deal with the possible infinites of the listening experience, both in their objective and manufactured dimensions.

We believe this represents our primary objective, to render the endless possibilities of life and its surroundings, sensible and experienceable in our conscious activity."

Objectives of the Workshop:

Students will be encouraged to actively listen to the rich sound world around them. Their imagination and creativity will be stimulated, and at the end of the course they will create a sound composition using the world around them as a musical instrument.

The workshop offers an introduction to theory and practice as follows:

A historical perspective on soundscape composition.

Tools and methodologies for field recordings.

Listening practice for analyzing the characteristics of ambient sound.

A practical approach to the transformation of sound.

Composition with three-dimensional space. The Ambisonics format.

Tools and advanced spatialization techniques.

Target

Anyone interested in the Soundscape Composition and sound field experimentation.

Structure of the course

1) Introduction

Brief history of soundscape composition, with examples drawn from the work of contemporary artists who incorporate the practice of Field Recording in their compositions.

Introduction to the concept of "soundscape".

2) Tools

Field Recordist's Tools: microphones and recorders.

Conventional microphone techniques and creative miking.

3) Sound Transformation

Manipulation and digital signal processing techniques (VST+AU).

Filtering.

Convolution.

Granular Timeshifting.

PhaseVocoder.
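Of the transformations listed in this module, convolution is perhaps the simplest to illustrate: convolving a dry recording with the impulse response of a space imprints that space's acoustic signature onto the sound. A minimal direct-form sketch in Python (illustrative only; production tools use FFT-based partitioned convolution for speed):

```python
def convolve(dry, ir):
    """Direct-form convolution of a dry signal with an impulse response.
    Output length is len(dry) + len(ir) - 1."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out

# A unit impulse passed through an IR returns the IR itself.
print(convolve([1.0], [0.5, 0.25, 0.125]))  # [0.5, 0.25, 0.125]
```

The same mechanism underlies convolution reverb: each input sample triggers a scaled copy of the entire impulse response, and the copies sum into the tail.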

4) Composing in Space

Multichannel Surround Sound.

HOA (Higher-Order Ambisonics).

Tools for encoding and decoding.
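To make the encoding step above concrete: in first-order Ambisonics a mono source is panned not to speakers but to a set of spherical-harmonic channels (B-format). A simplified horizontal-only sketch using the classic Furse-Malham gains, ignoring elevation and modern normalization conventions such as SN3D:

```python
import math

def encode_foa(sample, azimuth_deg):
    """Encode a mono sample to horizontal first-order B-format (W, X, Y)."""
    az = math.radians(azimuth_deg)
    w = sample / math.sqrt(2.0)   # W: omnidirectional component, -3 dB
    x = sample * math.cos(az)     # X: front-back figure-of-eight
    y = sample * math.sin(az)     # Y: left-right figure-of-eight
    return w, x, y

# A source straight ahead (0 degrees) has full X and no Y.
w, x, y = encode_foa(1.0, 0.0)
print(round(w, 3), round(x, 3), round(y, 3))  # 0.707 1.0 0.0
```

Decoding then mixes these channels to an arbitrary speaker layout, which is why the format is independent of the playback rig.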

At the end of the four-day workshop, participants will develop a short sound composition (about 6 min.) using the concepts and techniques acquired during the various modules.

Duration:

4 days / 17-18, 24-25 September 2011
10 a.m. to 7 p.m.

Costs and subscription

To participate in the workshop you must register no later than 10th September 2011. The fee is € 100.

Listening paths and outdoor recordings. The participants will take part in a "sound walk" to "collect" their point of view of a soundscape with their portable digital recorders, as defined by a preplanned route. Editing sessions of the collected material.

Requirements:

It is recommended that participants bring their own laptop, a portable recorder and headphones.
The number is limited to 14 participants.

Thursday, July 28, 2011

Hologram Room is the first bundle of the abstract Sound Design Collection produced by sound designers and composers Matteo Milani and Federico Placidi (aka U.S.O. Project).

These two gigabytes of “ready to use” original sound elements are designed to help you sweeten and enhance your sound productions. The whole library is organized into eight main folders: Active Drones, Alarms, Blips, Buttons, Communications, Ignitions, Telemetries, Transitions. It provides a selection of out-of-this-world drones and ambiences, futuristic sound effects and electronic tools.

We spent hours composing, editing and mixing these categories in Symbolic Sound Corporation’s Kyma and Avid Pro Tools. All of the audio files have been embedded with metadata for detailed and accurate searches in your asset-management software.

A note about the mastering: the library has not been peak normalized, but loudness normalized, based on the R128 recommendation by the European Broadcasting Union. What does this mean? When you audition the samples, they will all play back at the same loudness level through your monitors.
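The distinction is easy to sketch: peak normalization scales a file so its loudest sample hits a ceiling, while loudness normalization scales it so its measured program loudness hits a target (EBU R128 specifies -23 LUFS, measured per ITU-R BS.1770). A simplified illustration of the gain step, assuming the integrated loudness has already been measured by a meter:

```python
def normalization_gain_db(measured_lufs, target_lufs=-23.0):
    """Gain in dB needed to bring a program from its measured
    integrated loudness to the EBU R128 target of -23 LUFS."""
    return target_lufs - measured_lufs

def apply_gain(samples, gain_db):
    """Apply a dB gain to linear sample values."""
    factor = 10.0 ** (gain_db / 20.0)
    return [s * factor for s in samples]

# A file measuring -17 LUFS needs -6 dB of gain to reach -23 LUFS.
print(normalization_gain_db(-17.0))  # -6.0
```

The actual loudness measurement (K-weighting plus gating) is the hard part and is left to a compliant meter; this sketch only shows why two files with very different peaks can end up equally loud.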

This work has been made possible with the aid of LevelOne, a program developed by Grimm Audio.

EBU TECHNICAL provides all kinds of information about the EBU R128 loudness recommendation. The official R128 documents and guidelines can be found online, as well as introduction papers and videos.

Friday, June 24, 2011

Gary Rydstrom was born in 1959 in Chicago, IL. He graduated from the University of Southern California’s School of Cinematic Arts in 1981. He began his career in 1983 at Sprocket Systems, the facility later renamed Skywalker Sound. Offered the job by a college professor, Gary received the opportunity to work with his mentor, Star Wars sound designer Ben Burtt. He created sound for numerous successful films including Backdraft, Terminator 2: Judgment Day, Jurassic Park, Titanic, Saving Private Ryan, Minority Report and Finding Nemo. Through this work he has won seven Academy Awards.
Rydstrom did his first work for Pixar on the short film Luxo Jr. John Lasseter has said it was Rydstrom's work on Luxo Jr., such as creating the lamp's voice from the squeak of a lightbulb being screwed in, that taught him how sound can be a partner in the storytelling of a film. In 2006 he made his directorial debut with the Pixar animated short Lifted. He recently jumped back into the director's chair to create his second animated short, Hawaiian Vacation, set to play in front of Cars 2.

For every director, and for all of us, the goal is at any one moment in a movie to have the audience on the edge of their seats, unable to wait to see what happens next. You are moved by these characters, you believe in these characters. All the meticulous and hard work should be completely invisible. We wanted the audience to be involved in the scene, and we don’t want you to think that Nemo is a bunch of computer layouts. We do not want you to think about the many hours it took to create that scene. You are just carried away by the scene, and the focus in every step of the production that we do at Pixar is on the story. It is about entertaining the audience. - John Lasseter, 66th Venice Film Festival

USO: Gary, you’re the third person – of our personal “respect-list” – who came from a sound career and crossed over to a director’s chair: Walter Murch did it first, Ben Burtt came after him. They still design their own sound for their works, just as you did for your first short movie Lifted (with a tribute like the Wilhelm Scream at the very end). How did you manage the transition from sound designer/re-recording mixer to director? How about your feelings?

Gary Rydstrom: Working in film sound is a great way to immerse yourself in the rhythms of a movie. Having done sound for so many movies, and so many different kinds of movies, I’m hoping I’ve developed some mysterious “film instinct.” I think the best directors – and best sound designers – work more from their gut than their brain. In other words, to make movies, your gut has to be bigger than your brain… or maybe I should rephrase that?

GR: "It was a little scary but I always wanted to make films and wanted to have opportunities to write and tell stories. So Pixar being old friends, they gave me this opportunity to come over there and do that. So it was a little scary because there were a lot of things I had to learn. But doing sound for so many years, what I found was that the big similarity is that sound is all about rhythm. It’s about using sound, rhythm helps delineate sound effects, sound tracks, and telling a story with these kind of rhythms is really key. Animation is really about rhythms and timing so I think working in sound gave me a great sense of timing. In fact, on Lifted I used, before animation had been started, used a temp soundtrack to express the timing that I wanted to the animators so they had some reference of what I was after. It was a way for me to even communicate to the animators." - CanMag

USO: Did you develop and learn your new craft in the sound room, working movie by movie?

GR: Central to sound design is finding what feels right for a movie, what matches its look and heart. So every movie I worked on taught me different lessons. I kept thinking I was close to the end of my lessons, but they never ended. Turns out that’s what makes working in film fun. It’s never the same twice.

USO: Has being close to first-class directors throughout your sound career taught you well?

GR: I felt like a spy, watching a lot of directors. I’ve been very lucky to have worked with many of the best – Spielberg, Cameron, Lasseter, Redford – and you probably don’t need me to tell you (their movies do) how different in approach they were. One thing the best have in common: they inspire their crews.

GR: "... I've had this long, great relationship with John Lasseter and Pixar. I've felt involved throughout the whole filmmaking process on their films. They offered me an opportunity to develop and direct films, maybe because I bring an outsider's perspective while still being a Pixar guy through and through. My friends there know that I've had a long-standing love of comedy. When I first told Steven Spielberg and George Lucas that I was doing this, they were touchingly supportive and generous with advice. I'm grateful for my sound career. It gave me the equivalent of 50-yard-line seats, second row, during a fascinating era in film history." - Mix Magazine

USO: Can you describe what you were feeling when you left the Technical Building (where you spent almost 20 years), and all your friends at Skywalker Sound, for your new life/experience in Emeryville?

GR: Luckily I get back to Skywalker Sound quite often, otherwise I’d REALLY miss my old friends and career. Pixar’s no slouch, but there isn’t a more beautiful place to work than Skywalker Ranch.

I've known Gary for over 20 years and he's really been a mentor and a role model for me, as to how to do this work. His standard for quality, inventiveness and humor is really always in the back of my mind when I work. No one is a quicker study when you break down a scene or a film in terms of what, soundwise, is the best direction to go to serve the story. - Tom Myers, Skywalker Sound

USO: What does it mean for you to associate a particular sound to a visual event (identifying it in a vast catalogue as big as the sound library of SkySound)? What are the mental or purely instinctive paths competing in making the choice?

GR: Something magical happens when a sound effect is added to picture – and it’s not predictable. After all my years of doing it, I still depend on experimenting, putting sounds against image and seeing what happens. First time I did this, as a film student, it amazed me how sound could “open up” a movie, how the combination of sound and visual could create something greater than the sum parts. Having a great sound library is essential, but the real secret is how one uses it.

GR: "I wanted to give the lamps in Luxo Jr. character through sound. I told John (Lasseter) that I'd come up with these voices. He'd never imagined they'd have voices and was wary of the idea. But I experimented with taking real sounds — a lot of it as simple as unscrewing a light bulb or scraping metal. Every once in a while, a sound would be produced that would remind you of sadness or glee. I always think of sound design being like prospecting for gold. Start by, say, goofing around, making lots of sounds, then find the one percent that has something interesting about it. Put this against the film, and there's a magical moment when the sound, if it's right, merges into the image, brings it to life. They were not cartoon-y. They were fun, reality-based sounds. It felt like the birth of something new, even then." - Mix Magazine

USO: Many sound artists working in other domains, like electroacoustic music, musique concrète and environmental sound, have been strongly influenced by contemporary cinema, by its ability to create stories with the help of sound effects and soundscapes, and even to define the personality of some "objects" with unmistakable sounds. How is it possible to interrelate this multiplicity of experiences?

GR: What all sound artists share is a desire to convey emotions, so I certainly was inspired by non-movie sound work. Sound is emotion. Not just music, but all sound. Humans (who can hear) seem to take sound in general for granted, which is frustrating, but liberating. How manipulative can we be when no one’s paying attention!

USO: While Europe was experimenting with electronics and sound design as a natural consequence of the avant-garde of the '50s, in the U.S. this journey took place independently, in the field of the film industry, with other purposes and objectives. Were you aware of what was happening on the old continent and of its experiments?

GR: I’m certainly aware of the European tradition of producing “soundscapes” (for lack of a better word) for radio. In some ways, I was jealous of sound work that didn’t depend on the visual: how free it seemed! The best I could do was build an “off-screen” world in film. But I always had a movie to be influenced by. How scary to have sound work stand on its own.

USO: You've been one of the most assiduous "regulars" of the Synclavier, an instrument widely used in various musical fields. Can you describe your creative approach with this tool? What were your procedures? What made it such a unique instrument? To create the sound of the engines of the Titanic, you and Chris Boyes worked for a long time on the Synclavier to reproduce the effect in question. Do you remember how you did it?

GR: I fell in love with the Synclavier early in my career because it was such a powerful instrument for shaping natural sounds. I never used the FM synthesis – even making electronic sound effects I would try to use natural sounds, just because I find real sounds are more interesting. Sampling sounds and putting them on a keyboard allowed me to quickly experiment with pitch and layering, which are my primary tools for bending real sounds to my will. The Titanic engine sounds took advantage of how the Synclavier could speed up and slow down a sound pattern. For me, nothing beats using a “musical” instrument for creating sound effects.

GR: "The idea of using a sampler for sound effects work had astonishing potential. With sampled sounds in RAM, you can instantly pitch-bend it and layer it and play it and shape it, without using any processing time. You can layer on the same key and very finely manipulate the pitch and delay and merge them together in ways that were harder to do in the tape-to-tape days. It allowed me to create the dinosaurs in Jurassic Park, in which I took several layers and blended different animal sounds into what sounds like one animal. With the Synclavier, I have a library of sound “parts,” little snippets that are like phonemes in language. Interesting bits of sound that can be rearranged in multitudes of ways. It's a library of raw material, and it's valuable still." - Mix Magazine
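The pitch-and-layer workflow Rydstrom describes can be sketched simply: a sampler repitches a recording by reading it back at a different rate, shifting pitch and duration together, just as varispeed tape does. A minimal linear-interpolation resampler in Python (an illustrative sketch, not how the Synclavier actually worked internally):

```python
def repitch(samples, semitones):
    """Resample a sound to shift its pitch by the given number of
    semitones. Raising the pitch shortens the sound, as on a sampler."""
    ratio = 2.0 ** (semitones / 12.0)   # playback-speed factor
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Linear interpolation between neighbouring samples.
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

tone = [float(i % 8) for i in range(64)]   # a crude 64-sample wave
up_an_octave = repitch(tone, 12)           # double playback speed
print(len(tone), len(up_an_octave))        # prints "64 32"
```

Layering is then just summing several repitched copies sample by sample, which is how blended "one animal" composites like the Jurassic Park dinosaurs can be assembled from multiple sources.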

USO: How about your relationship with Ben Burtt along your sound career and especially now in Pixar? I mean crosstalk and dialogue, to share ideas. Could you describe him as a mentor, a friend, a colleague, a druid?

GR: All of my approaches and philosophies about sound come from Ben Burtt. At the time he was revolutionizing sound design, I was lucky enough to get a job at Sprocket Systems and see how he did it, how much he knew about film sound history, and most importantly how much he cared about using sound to tell a story. When I started working on Jurassic Park, I remember Ben was away on vacation. I stumbled along until he came back – for all sorts of reasons, psychological and practical, I couldn’t get started until he was in the building.

[Gary Rydstrom Talks About Cinema Sound - via DolbyInsider]

USO: Nowadays many sound designers are working in nonlinear environments like Pro Tools or Nuendo, with tons of plug-ins to do everything. One thing we have always admired in your work is its exquisite and unfailingly "organic" sound and its innate musicality. It seems that the sounds "exist" in nature and are not the product of a skilled craftsman. Would you like to talk about that?

GR: There’s a danger in processing sound too much. I believe the best sound effects come from the best raw recordings, and are tweaked as little as possible. The world is so full of amazing sounds – sounds no synthesizer can match – so why not find them and use them?

GR: [...] I remember a scene in the first Mission Impossible in which Tom Cruise breaks into a computer room at the CIA, for which we’d added all these sound details for equipment he was using to lower himself in. Yet the idea was that if he made any sound over a certain level, he would trip the alarm. Brian De Palma ultimately said, “No, take it all out.” And for the most part, that scene plays with nothing on the track. I went to see it with an audience and it had the desired effect: It made everyone lean in, pay closer attention, get nervous. Tension comes from the silence of that scene. [...] Silence can be thought of as a type of sound. It’s like when somebody years ago figured out that zero was a number. And silence is just as valid as an amazing sound. Every sound editor can’t help but think of how to fill up a track; it’s what we’re paid for. - excerpts from “From here on in, absolute silence.” [via Benjamin Wright]

U.S.O. Project is pleased to invite submissions of fixed media sound works for the second edition of “Sonic Screens”, a journey among different electroacoustic Soundscape compositions.

Sonic Screens is an annual event that will take place over two acousmatic evenings in Milan in Fall 2011.

Sonic Screens aims to render the endless possibilities of life and its surroundings experienceable in our conscious activity, trying to deal with the possible infinites of the listening experience, both in their objective and manufactured dimensions.

“Listening to the environment, contextualizing it objectively and creatively has always been a priority of the work of U.S.O. Project.

Free from any pseudo-environmental or socio-political implication, the continuous work of sampling, processing and transfiguring found sound, carefully preserved in the memory of a digital recorder, has always played a central role in our compositional practices.

U.S.O. defines Soundscape as the expressive and narrative richness that comes from the reciprocal and continuous interaction of multiple sound sources from the real world, and other phenomena which are perceptible and measurable only through proper and adequate transduction (electromagnetic signals, for example).

A Soundscape is also an opportunity for reflection and imagination that has little in common with the real world.

A Soundscape can be a place of the mind, a reminiscence of a future experienced in dreams, lands far away in space and time.” – Matteo Milani & Federico Placidi

Composers and sound artists are invited to submit multichannel works, up to 8 channels. The assignment of channels to speakers must be clearly indicated in the submission. Works of any duration will be considered although pieces of under 16 minutes will be given preference.

The performance will take advantage of Ambisonics sound diffusion practice, creating an immersive and uninterrupted sound flow between different works from selected international artists.

The recordings of the concerts will be available for streaming and released in binaural format for headphone use. Ownership of the tracks remains with the authors.

Submissions need to include:

a stereo version of the piece

individual mono files for each channel

channel configuration

sample rate

program notes

brief biography

While the composers of the selected works are encouraged to attend the event, attendance is not required for a work to be presented.

There is no registration fee.

The deadline for submission of works is October 31st, 2011.

Material Submissions

Please send download links to your work using one of the many file delivery services (yousendit.com, sendspace.com, gigasize.com, wetransfer.com, etc) in .zip or .rar format. Please do not email file attachments.

“The essential difference between an electroacoustic composition that uses pre-recorded environmental sound as its source material, and a work that can be called a soundscape composition, is that in the former, the sound loses all or most of its environmental context. In fact, even its original identity is frequently lost through the extensive manipulation it has undergone, and the listener may not recognise the source unless so informed by the composer. In the soundscape composition, on the other hand, it is precisely the environmental context that is preserved, enhanced and exploited by the composer.” – Barry Truax

Ben Burtt's latest film as Sound Designer, "Super 8" - written and directed by J.J. Abrams - opens June 10th.

J.J. Abrams: “Ben Burtt did the sound design, and he brought with him one day a copy of this Super 8 film that he made when he was a teenager, that was about a train wreck, a WW2 film, and it was so much like what happened in this movie, it was uncanny. I was jealous, wishing I had a train wreck to go to when I was a kid.”

Monday, May 30, 2011

Observation’s Pod is a new section on Synesthesia Recordings where we post most of our research output to the collective. This place works as a permanent laboratory where the product of our creative and experimental activity with sound is freely open to the public in its raw form.

These three small works are based on the improvisational exploration of a specific configuration of the modules of a Serge Modular synthesizer.

The synthesis model implemented is Complex Feedback Frequency Modulation, as shown in the artwork image: two oscillators recursively modulating each other, forming a dynamic non-linear system that exhibits chaotic behaviour.

In order to obtain a high timbral complexity, the waveforms generated by each oscillator are dynamically varied through the use of waveshaping modules.

All the material was created using only the patch described above, without any filter or other editing/mixing procedure.
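As a rough illustration of the patch topology described above, here is a minimal digital sketch of cross-coupled feedback FM with a waveshaping stage. All function names, parameter names and values are hypothetical and chosen for clarity; they are not taken from the original Serge patch.

```python
import math

SR = 48000  # sample rate in Hz (illustrative)

def feedback_fm(f1=200.0, f2=301.0, index1=2.5, index2=1.7,
                shape=1.0, n=SR):
    """Two sinusoidal oscillators, each phase-modulated by the other's
    previous output sample (one-sample feedback delay), with a tanh
    waveshaping stage that dynamically enriches each waveform."""
    y1, y2 = 0.0, 0.0          # one-sample feedback memories
    p1, p2 = 0.0, 0.0          # oscillator phases
    out = []
    for _ in range(n):
        p1 = (p1 + 2 * math.pi * f1 / SR) % (2 * math.pi)
        p2 = (p2 + 2 * math.pi * f2 / SR) % (2 * math.pi)
        # cross-coupled phase modulation: each oscillator reads the other
        s1 = math.sin(p1 + index1 * y2)
        s2 = math.sin(p2 + index2 * y1)
        # waveshaping stage: drive into tanh varies the spectrum
        y1 = math.tanh(shape * s1)
        y2 = math.tanh(shape * s2)
        out.append(0.5 * (y1 + y2))
    return out
```

With non-harmonically related frequencies and high modulation indices, the recursion quickly drifts into the chaotic regime the post describes.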

The three short works were created in order to intuitively explore a dynamic system, combining its output with three well-defined poetic abstractions.

Tuesday, May 17, 2011

GRM Tools is the result of more than 50 years of cutting-edge research and experimentation at the Groupe de Recherches Musicales de l'Institut National de l'Audiovisuel in Paris.

These plug-ins were realized by a succession of hardware and software engineers, who formulated the algorithms for the original GRM Tools in the 1990s. Over the years the GRM has focused on developing a range of innovative tools to treat and represent sound.

The new GRM Tools Evolution is the latest powerful and imaginative bundle of algorithms for sound processing. Three new instruments are available: Evolution, Fusion and Grinder. All work in the frequency domain and provide powerful ways to manipulate audio in real time. I had the privilege of interviewing Emmanuel Favreau, software developer at INA-GRM. Here we go!

Matteo Milani: How many people are part of the GRM development team at INA?

Emmanuel Favreau: We are two people working full-time. Adrien Lefevre handles the Acousmographe; I’m on GRM Tools. We regularly welcome students.

MM: Can you tell us a brief history of the GRM Tools from the origin until now?

EF: The first version of the GRM Tools was created by Hugues Vinet, who is now scientific director of IRCAM in Paris. This stand-alone version offered a couple of algorithms, using the Digidesign SoundAccelerator/Audiomedia III card, and its user interface was made with HyperCard. When I arrived at the GRM in 1994, we took the decision to convert the processing available in the stand-alone version of GRM Tools into TDM plug-ins for Digidesign Pro Tools III. Treatments were rearranged, some modified, others abandoned. The original GRM Tools Classic bundle dates from this era. Later, the evolution of the treatments closely followed the technological evolution: when processors became powerful enough for real-time processing, Steinberg introduced the VST architecture and Digidesign the RTAS format for Pro Tools. And finally, we developed the ST version - Spectral Transform - when computer processing power allowed us to calculate several simultaneous FFTs in real time.

[...] Jean-Francois Allouis and Denis Valette pioneered the hardware development of SYTER (SYsteme TEmps Reel / Realtime System) with a series of prototypes produced during the late 1970s, leading in due course to the construction of a complete preproduction version in 1984. Commercial manufacture of this digital synthesizer commenced in 1985, and by the end of the decade a number of these systems had been sold to academic institutions.

Benedict Mailliard developed the original software for SYTER. By the end of the decade, however, it was becoming clear that the processing power of personal computers was escalating at such a rate that many of the SYTER functions could now be run in real time in a purely software-driven environment. As a result, a selection of these were modified by Hugues Vinet to create a suite of stand-alone signal processing programs. Finally, in 1993, the commercial version of this software, GRM Tools, was released for use with the Apple Macintosh.

The prototypes for SYTER accommodated both synthesis and signal processing facilities, and additive synthesis facilities were retained for the hardware production versions of the system. The aims and objectives of GRM, however, were geared very much toward the processing of naturally generated source material. As a consequence, particular attention was paid to the development of signal processing tools, not only in terms of conventional filtering and reverberation facilities but also more novel techniques such as pitch shifting and time stretching.

MM: About GUI - 2DController. What is the origin of this pioneering, intuitive, but simple performer-instrument "link"?

EF: This type of interface was widely used in the days of SYTER, during the ’80s. It allowed us to regain "analog" access to a digital instrument. Indeed, even manipulating a slider with a mouse requires some attention (clicking in the right place, moving vertically or horizontally without a mechanical guide, etc.). With the 2D interface, the entire surface of the screen becomes a controller. You obtain a result as soon as you click; precision of movement becomes necessary only when you want to fine-tune.

MM: The mapping of parameters on multi-touch control surfaces frees us from the use of a mouse and gives us an expressiveness never achieved before. What do you think of this new generation of controllers?

EF: Of course, these interfaces allow an overall, "analog" control which is not possible with the mouse (although the 2D knob mode or "elastic" controls are possible ways to overcome the single-pointer limitation). Since the design of SYTER we have proposed a system of "interpolator balls" to interpolate between different sets of parameters arranged in a two-dimensional space. Multi-point control of such a device is natural: we need both hands to shape and transform the space.
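One plausible reading of such "interpolator balls" is inverse-distance weighting between parameter presets placed in a 2-D space. The sketch below illustrates that idea only; the exact formula used by SYTER and GRM Tools is not documented here, and all names are hypothetical.

```python
import math

def interpolate_presets(cursor, balls):
    """Blend parameter presets ('balls') placed in a 2-D space by
    inverse-squared-distance weighting relative to the cursor.
    cursor: (x, y); balls: list of ((x, y), {param: value}) pairs."""
    cx, cy = cursor
    weighted, total = [], 0.0
    for (bx, by), preset in balls:
        d = math.hypot(cx - bx, cy - by)
        if d == 0.0:                  # cursor exactly on a ball: snap to it
            return dict(preset)
        w = 1.0 / (d * d)
        weighted.append((w, preset))
        total += w
    keys = weighted[0][1].keys()
    return {k: sum(w * p[k] for w, p in weighted) / total for k in keys}
```

Moving the cursor (or several fingers, on a multi-touch surface) then morphs continuously between the stored settings.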

MM: Is the original SYTER still in working order?

EF: No, SYTER no longer works. It was composed of several elements (a PDP-11, large hard drives, a vector graphics terminal) which cannot be maintained today.

MM: Host-based tools vs. custom DSP engines: will there be a winner or they will continue to peacefully coexist in the business?

EF: For the type of tool that we develop, the winner is clearly host-based processing. For very large sessions with dozens of tracks and hundreds of plug-ins, DSP systems are still the best choice, but they could disappear with the spread of multi-core processors.

MM: How long did the Classic Bundle take to get ported from TDM to RTAS?

EF: It's hard to say, because it was not done directly. I first made the VST version, and then adapted it to RTAS. The algorithmic part posed no particular problems; the difficulties lay rather in the interface between the various plug-ins and hosts.

MM: How much research was needed to create the Spectral Transform bundle?

EF: The prototypes of the Spectral Transform were quick to achieve. The basic algorithm is the phase vocoder, which has been well known for a long time. What took time was the interface design, the choice of parameters and their mutual consistency, and overall stability and robustness (i.e. avoiding audio clicks and the saturation of some parameter values).
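Since Favreau names the phase vocoder as the basic algorithm, here is a minimal sketch of its core analysis step: estimating a bin's true frequency from the phase advance between two STFT frames. It uses a naive DFT for self-containment (a real implementation would use an FFT), and all names and parameter values are illustrative.

```python
import cmath
import math

def stft_frame(x, win_size, hop, frame_idx):
    """Hann-windowed naive DFT of one frame; returns bins 0..win_size/2."""
    start = frame_idx * hop
    frame = [x[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / win_size))
             for n in range(win_size)]
    return [sum(frame[n] * cmath.exp(-2j * math.pi * k * n / win_size)
                for n in range(win_size))
            for k in range(win_size // 2 + 1)]

def true_bin_freq(phase_prev, phase_cur, k, win_size, hop, sr):
    """Phase-vocoder frequency estimate for bin k: unwrap the measured
    phase advance against the expected advance at the bin's center
    frequency, then convert the deviation back to Hz."""
    expected = 2 * math.pi * k * hop / win_size
    delta = phase_cur - phase_prev - expected
    delta -= 2 * math.pi * round(delta / (2 * math.pi))  # wrap to (-pi, pi]
    return (2 * math.pi * k / win_size + delta / hop) * sr / (2 * math.pi)
```

Time-stretching or pitch-shifting then amounts to resynthesizing each bin with its estimated true frequency while advancing the output phases at a different rate.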

MM: What's the technology behind the bundles?

EF: If we leave aside TDM - where the processing code is written in 56000 assembly language - all plug-ins are written in C++. The processing code is fully compatible between Mac and PC. In addition, the portability of the user interface is guaranteed by Juce. All development is done on Mac; PC adaptation is virtually automatic and requires minimal work.

MM: A description of version 3 and its new features: what goals have you achieved during this long period of software development?

EF: Having redesigned the interface and rewritten all the code allowed us to add some new features: resizing the window, MIDI control with automatic learning, agitation mode.

Agitation is a generalization of Randomize: random variations, with controllable amplitude and frequency, can now be applied to all parameters. All the GRM Tools are now also available as standalone applications. This makes it easy to process individual sounds, run quick tests and become familiar with the treatments without having to use a host DAW or sequencer.

MM: How do you manage feedback from musicians and sound designers to improve sound quality and the graphical interface?

EF: User feedback comes from various forums and from discussions with users and composers here at the GRM. In response to suggestions, plug-ins get changed, features get added (but always in small numbers, to ensure compatibility), or a new treatment is created that may ultimately prove quite different from the original application. This is what happened with Evolution, which grew out of improving the freeze effect achievable with FreqWarp.

MM: What are the most effective methods of protecting applications against piracy?

EF: There is none. Whatever the method, it will be bypassed one day or another. We must find a solution that is not too burdensome for users while providing a minimum of protection. We chose the PACE iLok system because it is very common in musical applications. The recently announced changes should make it more flexible to use.

Thanks for your time Emmanuel, keep up the good work!

[...] Any transformation, no matter how powerful, will never equal or surpass synthesis, if it fails to maintain a causal relationship between the sound resulting from the transformation and the source sound. The practice of sound transformation is not to create a new sound of some type by a fortunate or haphazard modification of a source, but to generate families of correlated sounds, revealing persistent strings of properties, and to compare them with the altered or disappeared properties.

In synthesis, the formalisation of the devices and the resulting memorisable abstraction offer a stable set of references which can be easily transposed from one environment to another. In sound transformation, no abstraction of the available results is possible, and neither is generalisation. The result of an experiment is always the product of an operation and the particular sound to which this operation is applied. The composer must be able to add to the sum of knowledge by reproducing a previously proven experiment.

What makes the wealth and functionality of a system is the assembly and convergence of the whole, its ability at any moment to answer the questions imagined. Specific tools built for a single experiment, no matter how prestigious, are sterile if they cannot be applied to other purposes. - Yann Geslin

Monday, May 09, 2011

Between 1967 and 1969 Gottfried Michael Koenig devoted himself to composing electronic music, producing a series of works entitled Funktionen.

The instrument that inspired and made possible these compositions was the Variable Function Generator, designed by Stan Tempelaars at the Institute of Sonology in Utrecht.

Koenig used the VFG not only to produce the basic sounds (waveforms), but also employed it as a modulator and control instrument, dynamically steering the elaboration processes carried out on the materials (ring modulation, volume curves, filtering and reverberation).

The idea behind the experiment was to produce the sound material and its structural implementation entirely with the VFG (this led to the creation of Funktion Grün, Funktion Gelb, Funktion Orange and Funktion Rot).

For a detailed analysis of Gottfried Michael Koenig’s Funktionen, please see the document on his official website:

The works presented in U.S.O. Project’s Functions explicitly refer to the series of works that Koenig, with extraordinary vision, realized in those years.

The main challenge was both philological and aesthetic. The idea was to create an automated composition by exploiting the computing power of modern computers, using software sufficiently widespread and flexible to allow the original algorithms to be re-programmed.

The patches used in prototyping the generative software environment were assembled by a specially written program that specified, in text format and using serial procedures, how the various modules should be combined with each other, i.e.:

In order to manage all the modules in parallel, plus the sends to the reverberation units and so forth, we constructed a matrix that automatically reconfigures itself according to strict procedures based on serial techniques:
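One way to picture such a serially driven routing matrix is as a permutation matrix regenerated at each step by rotating a tone-row-like series. This is purely an illustrative reconstruction under that assumption, not U.S.O. Project's actual program; all names are hypothetical.

```python
def serial_routing(series, n_steps):
    """Reconfigure a module-routing matrix step by step using a serial
    procedure: at each step the series is rotated, and entry i of the
    rotated series selects which module's output feeds input i."""
    n = len(series)
    matrices = []
    for step in range(n_steps):
        rotated = series[step % n:] + series[:step % n]
        # routing matrix: row = destination input, column = source module
        m = [[1 if rotated[i] == j else 0 for j in range(n)]
             for i in range(n)]
        matrices.append(m)
    return matrices
```

Because each rotation of the series is still a permutation, every step yields a valid one-to-one patching: each input reads exactly one module and each module feeds exactly one input.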

The implemented automata procedures have in fact "created" the composition itself.
In the end, the final multichannel master was obtained with Kyma/Pacarana’s surround Objects.

As you can deduce from a comparative listening of Koenig’s works and U.S.O. Project’s, there are many differences, both in the aesthetic and the formal domain.

It was clear to us from the beginning that we didn’t want to repeat Koenig’s compositional experiment in every detail, but to build - and then understand - something new, produced using the same modus operandi that led him to make those works. At the same time we wanted to preserve a historical link with those works (something that is easily recognizable especially in the first piece). It was also interesting for us to empirically verify the effectiveness and efficiency of the serial approach in terms of timbre and formal development.

The currently distributed version is rendered using U.S.O. Project’s custom binaural techniques, for headphone listening only.

Beyond any reference to Koenig’s original works, Functions is a spontaneous self-reflection about the different states of sound matter and the exploitation of its possible configurations, shaped and imagined through a dialogical process between the machine and its operator.