Aaron McLeran and Dan Reynolds joined us over Skype to talk about the new procedural audio features in Unreal Engine 4.16. Aaron began with an introduction to UE4’s visual scripting system, Blueprint, picking out topics such as execution order, data structures and messaging, and drawing useful comparisons with the way similar features are implemented in Max/MSP.

After a demo video of his procedural music system, Dan Reynolds opened up his Blueprint graphs for a deep dive into the logic flows controlling several pre-built synthesisers, each engineered in Blueprint using data structures and each running in real time in the engine. Video of the presentation and Q&A is below!

Download Unreal Engine 4.16 here and start making stuff! We hope to run a UE4 procedural audio workshop + show and tell at PANow later in the year.

It was great to welcome Leonard Paul back to PANow to give a presentation on his procedural music composition for the movie ‘Beep: A Documentary History of Game Sound’, a screening of which preceded his talk. Leonard gave us a rundown of his work to date with Pure Data, and took us through some of the patches he created while working on the score, revealing their constituent parts and how they fit together, before diving in for some live performance.

We were very pleased to have procedural audio veteran and PANow regular, Paul Weir, speaking at the May meetup.

Paul has been involved in some groundbreaking generative music projects for games and retail spaces over the past 20 years, and he began his talk with an overview of his work to date – discussing the pros and cons of PA, some of the challenges you might meet, and how the technology has progressed over time. Following on from that we had a listen to audio examples of the works and took a look at some of the custom software used.

His work for retail spaces is often driven by the client’s need to stimulate a specific mood in the customer. Paul spoke about how he approaches each brief: weaving together location recording, sound design, system design, and finally on-site installation of the standalone music systems, in places as diverse as banks, airports, outdoor public spaces, and high-end department stores.

Video of the presentation and Q&A is below. (The video framing is off-centre for the first couple of minutes).

Paul Weir is an Audio Director, Composer and Sound Designer, currently working with Microsoft and on Hello Games’ procedural sci-fi game, No Man’s Sky.

Presentation

Presentation slides

This event and any presentations, handouts, software, documentation or materials made available at this event (‘Materials’) are not provided by and do not represent the opinion of Microsoft Corporation or its affiliates. Microsoft Corporation and its affiliates assume no responsibility for the content, accuracy or completeness of any Materials.

Anthony Prechtl was at April’s event to talk about his current research at the Open University. He is developing generative music software for games which uses run-time data to change a variety of musical features.

Anthony’s Unity demo, Escape Point, is a first-person puzzle game in which the player wanders around a 3D maze also inhabited by an AI enemy. He demonstrated the game first without music, then with a static score, and finally with a dynamic score (see video below, far right). For the dynamic score, the intensity of the music increased as the distance between player and enemy decreased: distortion was applied to the synth parts, harmony shifted towards minor scales, and tempo and volume rose. The net result was a feeling of tension that ebbed and flowed with the player’s distance from the enemy.
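
As a rough sketch of how such a mapping might work (the function and parameter values below are illustrative, not Anthony’s actual implementation), the distance can be normalised into a single ‘proximity’ value that drives each musical feature:

```python
def music_params(distance, max_distance=50.0):
    """Map player-enemy distance to musical control values (all illustrative).

    Returns normalised intensities: a closer enemy means more distortion,
    a stronger pull toward minor harmony, faster tempo, and a louder mix.
    """
    # Clamp and invert: 0 at max distance, 1 when the enemy is on top of you.
    proximity = 1.0 - min(max(distance, 0.0), max_distance) / max_distance
    return {
        "distortion": proximity,           # drive on the synth parts
        "minor_bias": proximity,           # probability of choosing minor chords
        "tempo_bpm": 90 + 60 * proximity,  # 90 BPM calm, up to 150 BPM panic
        "volume": 0.5 + 0.5 * proximity,   # never fully silent
    }

# A distant enemy barely affects the score...
print(music_params(45.0))
# ...while a nearby one pushes everything up.
print(music_params(5.0))
```

Evaluating this once per frame (or per musical bar) and smoothing the results is what gives the ebb and flow described above.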

Video of the presentation, and audio from the discussion that followed, can be found below.

It was a pleasure to welcome procedural audio pioneer, Leonard Paul, to February’s meetup! Leonard shared some of the highlights of his procedural audio career to date, and talked in depth about his early ambitions to develop tools for videogame integrations, using Pure Data to build prototypes that might one day emerge in a AAA title. He went on to discuss the current state of play, looking at the movers and shakers in the field and where they could be heading next, before leading us into a Q&A session and some open discussion.

During a demo of his music and SFX systems for the educational game, Sim Cell, he explained how he used oscillators, variable delay lines and granular synthesis patches in Pure Data to generate a dynamic soundtrack that could react to different game states, as well as produce the more routine sound effects used for GUI navigation and spacecraft propulsion.
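
Leonard’s patches are built in Pure Data, but the core idea of granular synthesis (scattering short, windowed slices of a source sound into an output buffer) can be sketched in a few lines of Python. Everything here is illustrative rather than taken from his Sim Cell patches:

```python
import math
import random

def granulate(source, n_grains=200, grain_len=441, out_len=44100, seed=0):
    """Tiny granular synthesis sketch: overlap-add Hann-windowed grains
    read from random positions in `source` to random positions in the output."""
    rng = random.Random(seed)
    out = [0.0] * out_len
    # Precompute a Hann window so each grain fades in and out smoothly.
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
              for i in range(grain_len)]
    for _ in range(n_grains):
        src = rng.randrange(0, len(source) - grain_len)  # where to read
        dst = rng.randrange(0, out_len - grain_len)      # where to write
        for i in range(grain_len):
            out[dst + i] += source[src + i] * window[i]
    return out

# Granulate one second of a 220 Hz sine "recording" into a grain cloud.
sr = 44100
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(sr)]
cloud = granulate(tone)
```

Driving the grain count, length, or read position from game state is what turns a texture like this into a dynamic soundtrack element.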

Videos of each section of the talk can be found below, and if you want to try out any of Leonard’s Pd patches for yourself, you can download them here! [right click+save as]

Jorge Garcia is a PANow regular and we were very happy to have him at the front of the room for November’s meetup! His presentation looked at some of the current challenges and opportunities when implementing and controlling procedural audio models in games. He showed how the Open Sound Control (OSC) protocol can be used to establish communication between game engines and audio patching environments like Pure Data, and went on to discuss his own open source implementation of OSC for Unity3D. Since 2011 UnityOSC has been used to build dozens of community-driven prototypes, demos and projects, many of which can be seen in the slides below.
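
UnityOSC handles the encoding for you, but the OSC wire format itself is simple enough to sketch by hand. The following Python (not UnityOSC code) builds the raw bytes of a message as laid out in the OSC 1.0 specification: NUL-terminated, 4-byte-aligned strings, a type-tag string, then big-endian arguments:

```python
import struct

def osc_message(address, *args):
    """Encode a minimal OSC message (float, int and string arguments)
    into the raw bytes that would be sent over UDP."""
    def pad_str(s):
        b = s.encode("ascii") + b"\x00"     # NUL-terminated...
        return b + b"\x00" * (-len(b) % 4)  # ...padded to a 4-byte boundary
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)  # big-endian float32
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # big-endian int32
        else:
            tags += "s"
            payload += pad_str(str(a))
    return pad_str(address) + pad_str(tags) + payload

# e.g. tell a Pd patch listening on UDP port 9000 to set a frequency:
packet = osc_message("/synth/freq", 440.0)
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 9000))
```

The address pattern (`/synth/freq` here is just an example) is what lets a Pd patch or game script route incoming values to the right parameter.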

Jorge Garcia is an Audio R&D Programmer with FreeStyleGames/Activision.

Presentation slides

Heavy: A Procedural Audio Development Workflow: generating DSP code from Pure Data for integration into a Wwise/Unreal environment

Martin Roth, Joe White and Andy Farnell presented different aspects of their new procedural audio workflow, Heavy. Addressing many of the common concerns held by audio designers and programmers, they unveiled a workflow that can generate highly optimized C code from (but not limited to) Pure Data synthesis patches, and seamlessly implement them in a UE4 game environment as Wwise plugins.

You can watch a video of the presentation below!

The Pd patch, Wwise plugin and Wwise project files used in the fire demonstration can be downloaded here.

Presentation slides

Thanks to Paul Weir for hosting this event.

Ignacio Pecino gave a demo of some recent research in procedural audio, and talked specifically about his use of spatial data in simulated physical systems such as cellular automata (‘Life’) to drive sonification models in SuperCollider.
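
As an illustration of the general idea (the mapping below is a made-up example, not Ignacio’s model), a Game of Life grid can be stepped and its state reduced to a couple of synth control values, such as density for amplitude and vertical centroid for pitch:

```python
from collections import Counter

def life_step(cells, width, height):
    """Advance one generation of Conway's Game of Life; `cells` is the set
    of live (x, y) coordinates on a bounded width x height grid."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3; clip to the grid.
    return {c for c, n in counts.items()
            if (n == 3 or (n == 2 and c in cells))
            and 0 <= c[0] < width and 0 <= c[1] < height}

def sonify(cells, width, height):
    """Reduce the grid to two control values: cell density drives amplitude,
    the vertical centroid of the live cells drives pitch."""
    if not cells:
        return {"amp": 0.0, "freq": 0.0}
    density = len(cells) / (width * height)
    centroid_y = sum(y for _, y in cells) / len(cells)
    return {"amp": density, "freq": 220 + 660 * centroid_y / height}

# A glider wandering across a 16x16 grid, sonified each generation:
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = life_step(cells, 16, 16)
    print(sonify(cells, 16, 16))
```

In practice these control values would be sent on to a SuperCollider synth each generation rather than printed.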

In his Apollonian Gasket simulation, he takes data gathered from the motion of the component discs when they are made to spin like coins, and uses it to dynamically modify parameters in a SuperCollider SynthDef.

Using flocking behaviour algorithms in another simulation, ‘Boids’, he generates complex, evolving soundscapes that have a highly satisfying correlation to the movement of a flock of birds in a 3D virtual environment.

We look forward to hearing more sonifications on his next visit!

Ignacio Pecino is a PhD candidate in Electroacoustic Music Composition at NOVARS. The paper on which this presentation is based can be viewed here.

Presentation slides

Tannhäuser is the Pd compiler project by Joe White and Martin Roth. Joe demonstrated how a Pd patch is parsed and converted into clean, freshly generated C++ code, ready for use in systems such as audio middleware (Wwise plugins) and hardware effects units (the OWL pedal). From a game audio angle, sound designers could develop synthesis or effects patches in Pure Data and quickly convert them to Wwise plugins for immediate integration into a game project. This could transform how game audio designers approach their work, and we’re looking forward to Joe’s next visit (hopefully with a Wwise plugin demo!) in the coming months.

Later in the evening we discussed (and got very excited about) the synthesis tools revealed in Alastair MacGregor’s recent GDC talk; if you haven’t seen it, I recommend you check it out now!

Joe White is a software developer at ROLI in London. You can find out more about Tannhäuser here.

An audio recording of the presentation and discussion is coming soon.

Presentation slides

We were very pleased to welcome AudioGaming co-founder Guillaume Le Nost into the PANow discussions. AudioGaming’s tools have most notably been used on Quentin Tarantino’s Django Unchained, and their client list includes studios such as Soundelux, Lucasfilm and Ubisoft, as well as award-winning sound designers.

Guillaume took us through some of the history of AudioGaming and described their approach to procedural audio as a mix of real-time synthesis and samples, a ‘best of both worlds’ solution that allows them to adopt whichever technique works best for a given problem. A physical modelling approach, although perhaps more analytically accurate, won’t always achieve the best results, and Guillaume proposes the use of ‘physically informed models’ such as the AudioWind plugin, which owes its highly realistic temporal behaviour to wind data gathered from the French national meteorological service.
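
AudioWind’s internals aren’t public, but a minimal ‘physically informed’ wind sketch might look like this in Python: filtered noise whose loudness follows a slowly wandering gust envelope, which is exactly the hook where recorded wind-speed data could be substituted for the random walk used here:

```python
import math
import random

def wind(duration_s=2.0, sr=8000, gust_rate_hz=0.5, seed=1):
    """Illustrative wind texture: lowpassed noise shaped by a slow gust
    envelope. Replacing `gust` with real wind-speed measurements is the
    'physically informed' step; everything else here is an assumption."""
    rng = random.Random(seed)
    n = int(duration_s * sr)
    out = []
    gust, lp = 0.0, 0.0
    # One-pole smoothing coefficient so the gust envelope moves slowly.
    a = math.exp(-2 * math.pi * gust_rate_hz / sr)
    for _ in range(n):
        gust = a * gust + (1 - a) * rng.uniform(0.0, 1.0)  # slow random gusts
        lp = 0.95 * lp + 0.05 * rng.uniform(-1.0, 1.0)     # darken the noise
        out.append(gust * lp)                              # louder when gusty
    return out

samples = wind()
```

Even this crude version captures the key point of the talk: the realism lives mostly in the control signal’s temporal behaviour, not in the synthesis itself.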

As the discussion moved into more in-depth technical areas, we were lucky enough to have lead developer Chungsin Yeh at the end of a Skype connection in Paris. Chungsin fielded questions on spectral granulation, methods for synthesising transients, and approaches to analysing physical engine data for the AudioMotors plugin.
Thanks to Guillaume and Chungsin for a fascinating presentation!
