Friday, September 25, 2009

A composer and computer scientist, Marco Stroppa is an artist for whom musical invention is inseparable from the exploration of new scientific and technological arenas. His series of works for solo instrument and chamber electronics offers the opportunity to explore novel methods for projecting sound in the concert hall and to renew the computer paradigms that regulate the relationship between the worlds of instrumental and synthesized sound.

The seventh short film in the "Images of a Work" series endeavors to understand his work through excerpts of rehearsals and interviews with the artists in the IRCAM studios.

Air features four ‘Conduct’ modes, which let the user control the composition by tapping different areas on the display, and three ‘Listen’ modes, which provide a choice of arrangement. For those fortunate enough to have access to multiple iPhones and speakers, an option has been provided to spread the composition over several players.

"Air is like Music for Airports made endless, which is how I always wanted it to be." - Brian Eno

“About 20 years ago or more I became interested in processes that could produce music which you hadn’t specifically designed. The earliest example of that is wind chimes. If you make a set of wind chimes, you define the envelope within which the music can happen, but you don’t precisely define the way the music works out over time. It’s a way of making music that’s not completely deterministic.” - Brian Eno

Peter Chilvers: I happened across the phrase 'A Marble Calm' on holiday a few years ago, thought it sounded like an interesting band name, then started thinking about the type of band that might be. The more I thought about it, the more it seemed to tie up a number of ideas that were interesting to me: drifting textural ambient pieces, improvisation and song. By making it a loose collective, it's enabled me to bring in other vocalists and musicians I've enjoyed working with on other projects - vocalists Sandra O'Neill (who also worked with me on 'Air' for the iPhone) and Tim Bowness, marimba player Jon Hart and flautist Theo Travis.

MM: When did you start working with generative music?

PC: In the '90s I worked as a software developer on the 'Creatures' series of games. When we started on Creatures 2, I was given the opportunity to take over the whole soundtrack. The game wasn't remotely linear - you spent arbitrary amounts of time in different locations around an artificial world, so I wanted to create a soundtrack that acted more as a landscape. I ended up developing a set of 'virtual improvisers', constantly generating an ambient soundscape in the background - it was quite involved actually, with its own simple programming language, although little of that was visible to the user.

[...] Peter chose to use his background in improvised music to create an array of "virtual musicians" that would play along to the action on screen. Each composition in Creatures contains a set of "players", each with their own set of instructions for responding to the mood of the norns on screen.

Peter was able to generate much more interesting effects using recorded instruments rather than the General MIDI sounds generated by a soundcard, which can often be quite restrictive. This meant that he could take advantage of the many different ways that a note on a "live" instrument can be played - for example, on a guitar the sound changes greatly depending on the part of the finger used to strike a string, and on a piano, when one note is played, all the other strings vibrate too. Also, by altering the stereo effects, he could fatten the sound at certain times.

He also made use of feedback loops within the soundtrack. Feedback loops were first experimented with in the 1970s; Brian Eno, for one, composed much of his music of that period using the method. The idea is that you play a track and record it into RAM (onto tape back in the 1970s). After a short delay (around 8 seconds in Creatures 2), the loop starts and the original sounds are played back, so the composer carries on creating sounds in response to what's gone before.
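The tape-loop idea described above can be sketched as a ring buffer that mixes each new sound with what was recorded one loop-length earlier. This is only an illustrative sketch, not the Creatures 2 engine; the toy sample rate, loop length, and feedback gain are invented for the example.

```python
# Minimal feedback-delay sketch: each input sample is mixed with the
# (attenuated) sample recorded one loop-length ago, and the mix is
# re-recorded, so the output keeps feeding back on itself.
SAMPLE_RATE = 4        # toy rate for readability; real audio uses 44100+
DELAY_SECONDS = 2      # Creatures 2 reportedly used around 8 seconds
DELAY_SAMPLES = SAMPLE_RATE * DELAY_SECONDS
FEEDBACK = 0.5         # how much of the old loop survives each pass

def feedback_loop(input_samples, delay=DELAY_SAMPLES, feedback=FEEDBACK):
    """Run samples through a tape-style feedback loop and return the mix."""
    buffer = [0.0] * delay           # the "tape": the last `delay` samples
    output = []
    for i, x in enumerate(input_samples):
        pos = i % delay              # combined record/playback head position
        y = x + feedback * buffer[pos]   # live sound + delayed loop
        buffer[pos] = y                  # re-record the mix onto the tape
        output.append(y)
    return output

# A single impulse echoes every `delay` samples, fading by `feedback`:
print(feedback_loop([1.0] + [0.0] * 16))
# → [1.0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0.25]
```

The same structure works at any sample rate; only the buffer length changes.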

Behind the scenes, scripts control the music engine and set the volume, panning and interval between notes as the mood and threat change.
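The scripts themselves aren't shown here, but the mapping they describe (game mood and threat driving volume, panning and note spacing) might look something like this sketch. The function name, parameter ranges and curves are all invented for illustration; the real Creatures 2 scripting is not published in this post.

```python
def music_params(mood, threat):
    """Map game state (both values in 0..1) to playback parameters.
    Hypothetical mapping: danger raises volume and tightens note spacing,
    while mood nudges the stereo position."""
    assert 0.0 <= mood <= 1.0 and 0.0 <= threat <= 1.0
    volume = 0.3 + 0.7 * threat        # danger brings the music forward
    interval = 4.0 - 3.0 * threat      # seconds between notes: calm = sparse
    pan = (mood - 0.5) * 2.0           # -1 (left) .. +1 (right)
    return {"volume": round(volume, 2),
            "interval": round(interval, 2),
            "pan": round(pan, 2)}

print(music_params(mood=0.5, threat=0.0))
# → {'volume': 0.3, 'interval': 4.0, 'pan': 0.0}  (calm: quiet, sparse, centred)
```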

MM: Why did you choose the Apple platform to develop the applications?

PC: I've been a huge fan of Apple products for a long time, and their timing in releasing the iPhone couldn't have been better. Bloom actually existed in some form before the iPhone SDK was announced - possibly before even the iPhone itself was announced. From the second we tried running the prototype, it was obvious that it really suited a touch screen. And Apple provided one!

The difficulty developers have faced with generative music to date has been the platform. Generative music typically requires a computer, and it's just not that enjoyable to sit at a computer and listen to music. The iPhone changed that - it was portable, powerful and designed to play music.

MM: Who designed the visualizations of Bloom? Eno himself?

PC: It was something of a two-way process. I came up with the effect of circles expanding and disappearing as part of a technology experiment - Brian saw it and stopped me making it more complex! Much of the iPhone development has worked that way - one of us would suggest something and the other would filter it, and this process repeats until we end up with something neither of us imagined. Trope, our new iPhone application, went through a huge number of iterations, both sonically and visually, before we were happy with it.

MM: What kind of algorithms define Bloom's musical structure? Are they specifically based on Brian's requests or just an abstraction based on his previous works?

PC: Again, this is something that went back and forth between us a number of times. As you can see, anything you play is repeated back at you after a delay. But the length of that delay varies in subtle but complex ways, which keeps the music interesting and eccentric. It's actually deliberately 'wrong': you can't play exactly in time with something you've already played, and a few people have mistaken this for a bug. Actually, it was a bug at one point, but Brian liked the effect and we ended up emphasising it. "Honour thy error as a hidden intention" is something of a recurring theme in Brian's work.
A forthcoming update to Bloom adds two new 'operation modes', one of which was designed specifically to work with the way Brian prefers playing Bloom.
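The deliberately 'wrong' repeat described above can be approximated by jittering the delay before each echo, so a note's repeats slowly drift away from where you expect them. This is a toy sketch under assumed numbers: the base delay, repeat count and jitter range are invented, and Bloom's actual scheduling is more involved.

```python
import random

def echo_times(note_time, base_delay=4.0, repeats=4, jitter=0.3, seed=1):
    """Return the times (seconds) at which a note played at `note_time`
    is repeated. Each repeat drifts by a small random amount, so the
    echoes never line up exactly with what was already played."""
    rng = random.Random(seed)          # fixed seed: reproducible for the demo
    times, t = [], note_time
    for _ in range(repeats):
        t += base_delay + rng.uniform(-jitter, jitter)
        times.append(round(t, 2))
    return times

print(echo_times(0.0))   # four echoes, each roughly (but never exactly) 4 s apart
```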

MM: Does the graphics and audio engine use standard audio and video libraries, or did you write your own classes?

PC: I've built up my own sound engine, which I'm constantly refining and use across all the applications. It went through several fairly substantial rewrites before I found something reliable and reusable.

MM: Is all the code in 'Objective C' or did you use any external application?

PC: It's all Objective-C. I hadn't used the language before, although I'd worked extensively in C++ in the past. It's an odd language to get used to, but I really like it now.

MM: Is Bloom sample based? What is the music engine actually controlling (e.g. triggering, volume, panning, effects)? What about the algorithmic side of the music engine?

PC: Bloom is entirely sample based. Brian has a huge library of sounds he's created, which I was curating while we were working on the Spore soundtrack and other projects. It's funny, but the ones I picked were just the first I came across that I thought would suit Bloom. We later went through a large number of alternatives, but those remained the best choices.

The version of Bloom that's currently live uses fixed stereo samples, but an update we're releasing soon applies some panning to the sounds depending on the position of each 'bloom' on screen. It's a subtle effect, but it works rather well.
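One common way to derive stereo gains from a horizontal screen position is an equal-power pan law, where the two channel gains follow a cosine/sine curve so perceived loudness stays constant across the sweep. The update's actual curve isn't described in the interview, so this sketch is an assumption; the 320-pixel width is simply the original iPhone screen, used as an example.

```python
import math

def pan_gains(x, screen_width=320.0):
    """Equal-power stereo gains for a tap at horizontal pixel `x`.
    x = 0 is hard left, x = screen_width is hard right."""
    pos = max(0.0, min(1.0, x / screen_width))   # normalise to 0..1
    angle = pos * math.pi / 2                    # sweep 0..90 degrees
    left, right = math.cos(angle), math.sin(angle)
    return round(left, 3), round(right, 3)

print(pan_gains(160))   # centre tap: both channels at ~0.707
# → (0.707, 0.707)
```

Because cos² + sin² = 1, the summed power is the same wherever the 'bloom' appears, which keeps the effect subtle rather than lurching between speakers.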

MM: Would you like to describe your current and upcoming projects?

PC: I've been involved in two new applications for the iPhone: Trope and Air. Both apps were intended to be released simultaneously. Trope is my second collaboration with Brian Eno, and takes some of the ideas from Bloom in a slightly different, slightly darker direction. Instead of tapping on the screen, you trace shapes to produce constantly evolving abstract soundscapes.

Air is a collaboration with Irish vocalist Sandra O'Neill, and is quite different to Bloom. It's a generative work centred around Sandra's vocal textures and a slowly changing image. It draws heavily on techniques that Brian has evolved over his many years working on ambient music and installations, as well as a number of the generative ideas we've developed more recently.

I have just had some interesting news: Trope has been approved and is now available in the App Store!

"Trope is a different emotional experience - more introspective, more atmospheric. It shows that generative music, as one of the newest forms of sonema, can draw on a broad palette of moods." - Brian Eno

"[...] I had realised three or four years ago that I wasn't going to be able to do generative music properly – in the sense of giving people generative music systems that they could use themselves – without involving computers. And it kind of stymied me: I hate things on computers and I hate the idea that people have to sit there with a mouse to get a piece of music to work. So then when the iPhone came out I thought: oh good, it's a computer that people carry in their pockets and use their fingers on, so suddenly that was interesting again."- Brian Eno[via timeoutsydney.com.au]

Thursday, September 17, 2009

The Hollywood Post Alliance has announced that Ben Burtt will receive the organization’s Charles S. Swartz Award for Outstanding Contribution in the Field of Post Production, recognizing his powerful artistic impact on the industry. The award will be bestowed on Mr. Burtt on November 12th during the Hollywood Post Alliance Awards gala at the Skirball Center in Los Angeles.

HPA Awards co-founder and committee chair Carolyn Giardina said, “We are thrilled to recognize Ben Burtt with the Charles S. Swartz Award, an honor that represents everything that the HPA stands for: creativity, technical excellence, and limitless thinking. From R2-D2 to WALL-E, Ben Burtt has helped create some of the most unforgettable characters of our generation. We are honored to present this award to him.”

The Charles S. Swartz Award was created to honor individuals who have made outstanding contributions to the field of post production, an industry with an expanding creative palette and in dynamic transition as a result of digital technologies and societal changes. The award was named in honor of the late Charles Swartz, who led the Entertainment Technology Center at the University of Southern California from 2002 until 2006 and helped to build it into the industry’s premier test bed for new digital cinema technologies. In addition to a long and successful career as producer, educator and consultant, Mr. Swartz served on the Board of Directors of the HPA. Leon Silverman, President of the HPA, noted that “Ben Burtt’s career and accomplishments speak to the true spirit of this award, which recognizes impactful contributions. Ben Burtt’s impact on the art and craft of post production and on our cultural legacy should be celebrated.”

Saturday, September 05, 2009

The First International Kyma Symposium is scheduled for 8-10 October 2009 in the vibrant Poble Nou neighborhood of Barcelona during the annual LEM festival. The preliminary program includes master classes presented by the creators of Kyma, papers and demos presented by Kyma practitioners, and a program of concerts, including live improvised silent-film scores.

Symbolic Sound and Station 55 invite you to attend the First Kyma Symposium and share your ideas, experiences and art with your fellow Kyma practitioners!

We'll be live microblogging from Barcelona (via Twitter) and we'll write a daily report here on Unidentified Sound Object so that even those who cannot attend can still benefit from the symposium. Photos of the event will be available via Flickr.

If you cannot attend...

Not everyone can make it to Barcelona, but your Kyma Sounds can! Cristian Vogel invites you to submit AIFF or WAV files created exclusively with Kyma for an automated "DJ Pacarana" set, which will provide background ambience for the 'Meet and Greet' reception on Thursday evening. Textures, sound design sketches, loops, and ambiences ranging in duration from 1 second to 5 minutes can be submitted in 16-bit, 44.1 kHz WAV or AIFF and will be combined, spatialized, and layered according to a random number generator seeded with "0.8102009". In the spirit of 'Meet and Greet', please include your name and your city as part of the sound (spoken, sung, or otherwise encrypted) so we can meet and greet you virtually! Please send your files to his drop box.
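Seeding the generator with a fixed value like "0.8102009" means the 'random' set is actually reproducible: the same submissions always yield the same sequence. A minimal sketch of that idea, with an invented layering rule (overlapping pairs) since the actual DJ Pacarana Sound is not described here:

```python
import random

def dj_pacarana_order(submissions, seed="0.8102009"):
    """Shuffle submitted clips with a fixed seed, then layer consecutive
    clips in overlapping pairs. Random but reproducible: rerunning with
    the same seed and submissions gives the same set."""
    rng = random.Random(seed)        # fixed seed, as in the announcement
    order = submissions[:]           # don't mutate the caller's list
    rng.shuffle(order)
    return [(order[i], order[i + 1]) for i in range(len(order) - 1)]

# Hypothetical file names, following the "name and city" submission rule:
clips = ["alice_berlin.wav", "bob_tokyo.aiff", "carla_madrid.wav"]
print(dj_pacarana_order(clips))
```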

---------------------------------------------
Kyma Symposium in Barcelona 8-10 October 2009
Preliminary Program (as of 31 August 2009)
---------------------------------------------