Events

Tannhäuser is the PD compiler project by Joe White and Martin Roth. Joe demonstrated how a Pure Data patch is parsed and converted into fresh, standalone C++ code, ready for use in systems such as audio middleware (Wwise plugins) and hardware effects units (the OWL pedal). From a game audio angle, sound designers could develop synthesis or effects patches in Pure Data and quickly convert them into Wwise plugins for immediate integration into a game project. This could revolutionise how game audio designers approach their work, and we’re looking forward to Joe’s next visit (hopefully with a Wwise plugin demo!) in the coming months.

Later in the evening we discussed (and got very excited about) the synthesis tools revealed in Alastair MacGregor’s recent GDC talk; if you haven’t seen it, I recommend you check it out now!

Joe White is a software developer at ROLI in London. You can find out more about Tannhäuser here.

An audio recording of the presentation and discussion is coming soon.

Presentation slides

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.

We were very pleased to welcome Guillaume Le Nost, co-founder of AudioGaming, into the PANow discussions. AudioGaming’s tools have been used most notably on Quentin Tarantino’s Django Unchained, and their client list includes studios such as Soundelux, Lucasfilm and Ubisoft, as well as award-winning sound designers.

Guillaume took us through some of the history of AudioGaming and described their approach to procedural audio as a mix of real-time synthesis and samples, a ‘best of both worlds’ solution that lets them adopt whichever technique works best for a given problem. A physical modelling approach, although perhaps more analytically accurate, won’t always achieve the best results, so Guillaume proposes the use of ‘physically informed models’ such as the AudioWind plugin, which owes its highly realistic temporal behaviour to wind data gathered from the French national meteorological service.

As the discussion moved into more in-depth technical areas, we were lucky enough to have lead developer Chungsin Yeh on the other end of a Skype connection from Paris. Chungsin fielded questions on spectral granulation, methods for synthesising transients, and approaches to analysing physics engine data for the AudioMotors plugin.
Thanks to Guillaume and Chungsin for a fascinating presentation!


Rob Hamilton’s UDKOSC project takes game data from UDK and uses it to control external audio engines such as SuperCollider, ChucK or PD. The implications for dynamic music and audio for games are profound and throughout his talk and demonstrations Rob showed how we can parameterise any actor that can generate data in a game engine, from the very big (herds of elephant-like creatures) to the very small (individual bones in a bird’s skeleton).
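Under the hood, this kind of bridge between a game engine and SuperCollider, ChucK or PD typically travels as OSC messages over UDP. As a rough illustration (the address pattern and values below are invented, and a real project would normally use an OSC library), here is a minimal encoder for an OSC message carrying float arguments:

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with float32 arguments.
    Sketch only: real OSC also supports ints, strings, blobs and bundles."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary
        b += b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    packet = pad(address.encode("ascii"))
    packet += pad(("," + "f" * len(floats)).encode("ascii"))  # type tags
    for f in floats:
        packet += struct.pack(">f", f)  # big-endian float32
    return packet

# e.g. a hypothetical avatar position, ready to send over UDP to SuperCollider
msg = osc_message("/actor/creature01/pos", 120.5, 3.0, -42.25)
```

From there it is one `socket.sendto()` call to whichever audio engine is listening, which is what makes mapping any data-generating actor to sound so direct.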

Central to Rob’s presentation was the UDK-built Echo::Canyon project, a multi-user virtual environment in which performers at locations around the world can move avatars through a purpose-built landscape, interacting with the terrain and its carefully positioned landmarks to create a rich and evolving soundscape.

Robert Hamilton is a Ph.D. candidate in Computer-based Music Theory and Acoustics at CCRMA, Department of Music, Stanford University.

A recording of the presentation (given via Skype) and Q&A is at the top of this post (or you can download it here).


Christian Heinrichs proposes new, expressive ways for sound designers to control procedural audio models. During his presentation he used a touchpad to generate x/y position, speed, touch-area and pressure data for a model built in PD, playing back a wide variety of creaking-door sounds. Mirroring the way film Foley artists work, game audio designers might use a controller like this to perform procedural audio Foley during gameplay sessions, generating sound effects for specific instances of events. The performance data could then be analysed by an AI system to generate a control layer, which would perform in-game Foley as expressively as the original; thus bringing forth a kind of virtual Foley artist…

Please have a look at the slides below. You can listen to the presentation using the player at the top of this post.
