Creating Sound
How to create sound effects and where to find the tools to do it
Interview with Simon Ashby (Wed, 19 Feb 2014)

Hi Simon, to get things started, can you tell us who you are, where you are from, and what you had for breakfast?

I’m from Montréal, Québec, and I’m one of the guys behind Wwise. I’m also the father of three great kids who occupy most of my ‘free’ time. They love cereal in the morning, but I’m more of a cheddar-and-toast kind of guy; maybe some old English genes from a couple of centuries ago…

Do you have any hobbies outside of the audio discipline?

Audiokinetic and my wife and kids take up most of the hours of my days. My free time is usually late at night, when I enjoy reading graphic novels. Otherwise, I like woodworking in general, and one of my long-term goals is to eventually take up fine lutherie: making guitars, since I play a bit, but also crafting violins, which seems like the ultimate refinement in woodworking. It’s not simple furniture you put your clothes in (although some pieces are notable masterpieces); it’s something you can make music with, and that is magic!

Let’s talk about next-gen consoles. What do you think is the most important advancement in next-gen consoles in terms of audio?

If we keep making games like the last generation, with pre-recorded audio files, there are not a lot of differences, with the exception that you can fit more assets of higher quality and insert more runtime DSP effects. Taking this route, there is no reason not to deliver a good game experience apart from managerial ones (not enough resources, budget, time, etc.).

Teams that take a procedural approach to sound are the ones that will advance next-generation console audio. But there’s a risk here, and the R&D investment on that end is not a hundredth of what we see for graphics. I remain confident that some studios will come up with pretty convincing audio experiences in the next few years, but I’m not sure, even though I hope so, that the game industry will see a major and generalized leap forward in audio. I believe we need more investment in R&D for the next revolution in game audio to happen.

What do you think is the “next big thing” in terms of audio?

For AAA games, I see a need for two “next big things”. First, we need more procedural audio just to cope with the sheer quantity of assets to be produced and integrated, and the interaction of all those objects between them and with the environment. Then, we need to improve on the mix and ‘audio focus’ side of things to make these worlds crisp, precise and non-annoying over time.

For indie games, the “next big thing” remains giving more love and budget to audio. Most indie developers now realize that audio is important; that certainly improved during the last decade. But still, just a fraction of them are willing to invest the real amount of time and money required to produce a great game soundscape.

If you could change one thing about how the game industry handles audio, what would it be?

I’m certainly biased here, but realizing that middleware developers are as passionate as game developers about what they do is a change that would benefit everyone. If game developers paid the right value for the products they license, they would get products that are even more creative, optimized and productive than they are now. It would definitely pay for itself, and end users would get better-quality products. I truly believe in this virtuous circle.

What do you want to see game audio become? What types of experiences do you personally want to have?

As a ‘consumer’, I’m looking for a coherent and seamless experience. I expect transparent game state transitions and loading times. I don’t want to notice anything exposing the mechanisms of the game or, worse, of the technology used by the games I play. The game design and game engine are mostly responsible for those moments that underline with broad strokes that the player is playing a piece of software and not necessarily a meaningful story.

Here’s a bold statement: I believe audio people, at least the musicians, are among the few trained professionals in the videogame industry who have spent all their school and professional years refining the art of delivering transitions, modulations and nuances with the most appropriate contextual expression. Audio people should stand up and get more involved with the game mechanics to deliver more fluid experiences.

Many people know you from Audiokinetic, but you were a game sound designer back in the day. What was your favorite project that you worked on in a sound design role?

I liked the three Playmobil games I did because they were the first games for everyone at Ubisoft Montreal back in 1997-1999. Those were two years of crazy (and underpaid!) hours, but I learned a lot during that time, on all aspects of life.

‘Jungle Book – Rhythm & Groove’, a dance game à la ‘Dance Dance Revolution’, is also one of the most enjoyable projects I’ve worked on. Creating this game almost from scratch in nine months, while I was both a game and sound designer, was a particularly fun and vibrant challenge. I also ended up designing the game editor in which we synchronized everything together (music transitions, arrow sequences for all difficulty levels, power-ups, dialogue, camera positions and movements, animations, etc.). That was my first experience designing a user-interface solution that fit the game’s needs while making sure the team was fully productive with it.

When you picture the audio designer of the future, let’s say five years from now, what do you think we will need to know how to do? How do you think our jobs will be different from what we do today?

A bit like we’ve seen the middle class shrink in the West over the last decades, I think we’ll see more and more one-man bands – ultra-generalists creating and integrating most of the audio content – and, on the other end, ultra-specialized designers – people assigned to highly specific aspects of the sound design. There’s no right or wrong here, just an expression of the business and the games being developed these days.

When I give lectures or courses to students, I always emphasise how important it is for them to learn as much as they can about all aspects of audio, from music theory, sound design and post-production to acoustics, physics and computer science. We use it all during our careers, and many of us on a daily basis.

Thank you for your time, Simon! Enjoy your toast!

Simon Ashby is co-founder of Audiokinetic, where he is responsible for the product development of Wwise®, now used by more than 500 games. Prior to Audiokinetic, Ashby worked as a senior sound and game designer on several games.

With his vast industry experience, Ashby is a frequent lecturer and panellist on the role of sound production and integration in the overall experience of video games. In 2011, Ashby was honoured with the inaugural Canadian Game Development Talent Award as “Audio Professional of the Year”.

DSP Audio Programming Series: Part 2 (Mon, 17 Feb 2014)

It’s been quite some time since part 1 of this series (all the way back in June of last year), where we looked at and implemented a basic delay effect. In this part I will be introducing filters, another staple of the audio effect and processing toolkit. Audio filtering is a very large and complex topic, spanning many different types and designs, so to keep things simple and manageable we’ll be looking at a fairly basic resonant low-pass and high-pass filter.

Like part 1, the code uses Portaudio for cross-platform audio input/output, and it will again be made available on Github for both Mac and Windows to do with as you please. We will also stick with a command-line application, which unfortunately makes it less practical to adjust parameters in real time. This wasn’t a big problem for a simple delay effect, but having real-time control over the parameters of a filter is far more convenient. Fortunately, it’s fairly simple to set up a basic GUI application that connects controls to the parameters of the filter (the README in the Github projects contains some tips on setting up a GUI in both OS X and Windows).

Ok, let’s get to it!

Digital filters can be categorized into two basic types: finite impulse response (FIR) and infinite impulse response (IIR). The former works by mixing the samples of the original signal with delayed copies of the input, which is essentially a feed-forward network, whereas the latter combines the original signal with delayed copies of both the input and the output, making it a feedback network. IIR filters have the advantage of requiring fewer delay elements than FIR filters to achieve comparable results, and are usually less computationally expensive, but they introduce a non-linear phase shift in the signal, in contrast with FIR filters, which can be designed with linear phase.

Here we will be looking at IIR filters because they are more commonly used for real-time audio processing, though FIR filters have many important applications in digital audio as well (such as anti-aliasing filters and noise reduction).

The most basic implementation of an IIR filter is the biquad direct form I, with the equation:

y(n) = a0x(n) + a1x(n-1) + a2x(n-2) - b1y(n-1) - b2y(n-2)

where x is the input and y is the output. Thus, x(n) is the current input sample, x(n-1) is the 1-sample input delay, and x(n-2) is the 2-sample input delay. It follows then that y(n-1) is the previous output and y(n-2) is the 2-sample output delay. The a and b terms of the equation are the filter coefficients, which determine the response and characteristics of the filter (which includes its type, such as low-pass, high-pass, etc.).

This topology is illustrated below.

Direct form I filter topology.

The corresponding code in CSFilter that calculates the outgoing sample according to the equation above is straightforward:
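The original listing didn’t survive the archived page, but a minimal sketch of such a per-sample method, with illustrative member names rather than the actual CSFilter source, might look like this:

```cpp
// Hedged sketch of a direct form I biquad processing method.
// Coefficient and state member names are illustrative assumptions,
// not the actual CSFilter code.
struct Biquad {
    // Filter coefficients (a* feed-forward, b* feedback, matching
    // the equation above).
    double a0 = 1.0, a1 = 0.0, a2 = 0.0;
    double b1 = 0.0, b2 = 0.0;

    // Filter state: the two most recent input and output samples.
    double x1 = 0.0, x2 = 0.0;   // x(n-1), x(n-2)
    double y1 = 0.0, y2 = 0.0;   // y(n-1), y(n-2)

    double process(double x) {
        // y(n) = a0*x(n) + a1*x(n-1) + a2*x(n-2)
        //        - b1*y(n-1) - b2*y(n-2)
        double y = a0 * x + a1 * x1 + a2 * x2 - b1 * y1 - b2 * y2;

        // Update the delay state before returning the filtered sample.
        x2 = x1;  x1 = x;
        y2 = y1;  y1 = y;
        return y;
    }
};
```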

The last block of code in the method above updates the filter state delay variables before returning the filtered sample. This is similar to dealing with buffers in the delay effect from part 1, except here we have two delay buffers (one for input and one for output) of only two samples each.

Now that we’ve covered the filter equation, let’s look at calculating the filter coefficients.

In the coefficient formulas, fc is the cutoff frequency, fs is the sampling rate, and Q is the quality factor. With Q set to a value of 1 / sqrt(2), there will be no resonant peaking in the filter, while greater values of Q introduce a resonant peak at the cutoff frequency.

It is very helpful to plot the frequency response of a filter in order to visualize how it will affect the audio signal, so below we see plots of both the low-pass and high-pass filters at a cutoff frequency of 2000Hz and varying Q (the frequency axis is logarithmic).

LPF with cutoff frequency at 2000Hz and Q = 0.707.

LPF with cutoff frequency at 2000Hz and Q = 2.3.

HPF with cutoff frequency at 2000Hz and Q = 0.707.

HPF with cutoff frequency at 2000Hz and Q = 2.3.

The cutoff frequency for low-pass and high-pass filters is defined as the frequency at which attenuation is -3dB, and we can see with a magnified view of the plots that this is the case with this particular filter.

Attenuation is -3dB at the 2000Hz cutoff frequency.

Attenuation is -3dB at the 2000Hz cutoff frequency.

We can now proceed to the implementation by filling in the callback function we supply to the Portaudio engine. This is where processing happens; Portaudio provides us with a buffer of audio data (input) and a place to store our output that is sent to the audio hardware. In order to apply the filter effect to the incoming audio, we pass in the CSFilter instance to the userData parameter, and this gives us access to the class methods that implement the filter.
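As a sketch of what that callback boils down to (with Portaudio’s timing and status parameters trimmed away, and an illustrative stand-in type for CSFilter), the body simply recovers the filter from userData and runs each incoming sample through it:

```cpp
// Illustrative stand-in for CSFilter: any type with a per-sample
// process() method works here. This one just scales the input so
// the example stays self-contained.
struct Filter {
    float gain = 0.5f;
    float process(float x) { return gain * x; }
};

// The heart of a Portaudio-style callback (the real signature also
// carries timing info and status flags). Portaudio hands us an input
// buffer, an output buffer, and the opaque userData pointer we
// registered when opening the stream -- here, our filter instance.
int filterCallback(const void *input, void *output,
                   unsigned long frameCount, void *userData) {
    const float *in  = static_cast<const float *>(input);
    float       *out = static_cast<float *>(output);
    Filter      *f   = static_cast<Filter *>(userData);

    // Run every incoming sample through the filter (mono stream).
    for (unsigned long i = 0; i < frameCount; ++i)
        out[i] = f->process(in[i]);

    return 0; // paContinue in real Portaudio code
}
```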

To finish off, let’s briefly examine how the filter equation and coefficients work to produce the signal that they do. As we know from the Fourier theorem, any complex signal is made up of a number of frequency components (an infinite number, at least in the continuous, analog world) and can be decomposed into its individual frequencies, i.e. sine waves. So let’s see what happens when we send a single sine wave through the filter, with its samples passing through the direct form I network illustrated above; first at 200Hz, then at 2000Hz. The filter has the same parameters as in the above plots — 2000Hz cutoff, and a Q of 0.707.

Top: original 200Hz sine wave. Bottom: filtered 200 Hz sine wave.

We can see from comparing the two that aside from a modest phase shift, the sine wave remained unaffected by the filter. When we pass a 2000Hz sine wave through, however, it’s a different story.

Top: original 2000Hz sine wave. Bottom: filtered 2000Hz sine wave.

The phase shift is obviously more pronounced and the amplitude of the wave has been reduced from 1.0 to 0.7, which is equal to -3dB, exactly what we would expect at the 2000Hz cutoff frequency. Extrapolating from this, it’s clear that higher frequency sine waves will be attenuated more and more, resulting in the low-pass filter effect. The high-pass filter works the same way of course, just reversed.

There’s really no limit to the use of filters in audio processing, from the practical to the artful. I hope this has been an interesting introduction to digital filters. Here are the links to the Xcode and Visual Studio projects on Github if you’re interested in exploring more on what we’ve built here.

4. The ever-lingering fear that my skills and abilities will deplete if I’m not creatively productive with my time

IN THREES

1) Name three sounds that make you glad to have ears.

1. Thunder

2. Underneath an overpass

3. My wife’s voice

2) Name three sounds that cause you physical discomfort.

1. Nails on chalkboard

2. Diesel motor pass bys at close range

3. Balloons popping – while it no longer brings me physical discomfort, I was terrified of this sound as a child. I would always have to leave the room at the end of birthday parties when everyone popped the balloons. Such immense fear and anxiety over that sound back then.

3) Address one way to change three of those sounds or three ways to change one of those sounds.

Diesel motor changes:

1. Nano/bio-tech ear implants that have built-in compressors, gates, and limiters. They could sense the approaching noise and duck it as soon as it hit a certain threshold.

IN TWOS

1) List and describe two projects on which you’re currently working.

At Volition I’m currently working on an awesome project for next gen which I can in no way, shape, or form describe or talk about.

In May 2013 I started my “sound design a day” project in order to improve my sound design skills. Each day I record a brand new sound and then use that sound to create a short sound design and/or music piece.

2) And how are they both going?

The new Volition project is going really well. We’re making immense progress. It’s always awesome to see a brand new game come to life after it started merely as an idea stated in a couple of sentences.

The everyday project continues to march on. Some days it’s incredibly rewarding and I’m really happy with what I’ve come up with – other days I roll my eyes at the crap I managed to poop out.

3) How do you feel they are challenging your current skill set?

We’re going to be using a brand new set of development tools at Volition on the new game, so very soon I will be diving into a brand new world of making games. It’s always incredibly exciting, and also slightly terrifying. But one thing is for sure – I’ll be coming out the other end with tons of new skills.

The everyday project is continuing to improve my speed and ear as a sound designer. I’m able to do things in a matter of minutes or hours that would have taken many hours or even days just six months ago. The skill-set improvements are immeasurable. It’s a perfect platform to experiment and try new things: new workflows, new techniques, new plugins, new tools… every day is a blank slate ready to be filled with whatever you want.

IN ONES

1) Name one environmental element of the creative process that you find essential.

Lighting. I find it much easier and more soothing to work under multiple dim, colored lights. I personally can’t stand fluorescent and bright white incandescent lighting. In my home office I have blue, green, and pink lighting, and at my office at Volition I have blue, green, and yellow lighting.

2) What is one area in which you hope to improve your work?

There are a lot of different things I’m trying to do to improve my work – but if I had to pick one, it would be continuing to improve and refine my ability to look at something completely abstract, perhaps a moving graphic design or a piece of artwork, and ask: what can I record in the real world, and how can I manipulate it to sound like this abstract thing that has no basis or reference in reality?

3) What is one thing you would like people to know when listening to your work?

I always try to create emotional and immersive experiences with my sound design.

About the 3×5 Interview

The “3×5” is a non-traditional interview series that encourages creative and personal responses from its participants. While the core structure remains intact, I occasionally update the sets of questions to keep interviewees and readers engaged. Although the resultant replies of the participating audiophiles may be informative or instructive, my hope is that the interview will encourage conversation and a sense of camaraderie within the sound design community.

What does it mean to be “next gen”?

As I write this, the first round of excited customers are opening their new Xbox One and PS4 consoles. We’re here! The next generation is upon us! So, as with any console generation transition, we are all being asked what we can do to be “next gen.”

The 8 gigabytes of memory that the new consoles offer is certainly a generous improvement over the previous generation’s 512 MB. But will that memory increase lead to an improvement in audio format quality that in and of itself would be impressive enough to declare next gen status?

The next hallmark in audio fidelity that could be considered “next gen” will likely be the abandonment of compression altogether. However, it’s unlikely that even with 8 gigabytes of memory, we will see audio memory budgets large enough to allow uncompressed audio to become the new normal. My guess is that we’ll have to wait out this generation before uncompressed audio becomes a new standard.

If that is true, then we’ll need to look to different areas to find our next gen bona fides.

Perhaps those bona fides lie within the pursuit of a true mixing AI? This has always been at the top of my list of dream projects, and if successfully done, is probably one of the single biggest tools we would have to begin to approach the quality of a film mix. But budgets being what they are, this is probably an effort beyond any but the most lavishly funded audio departments.
However, that doesn’t mean we can’t start building the foundation now!

After all, a mixing AI is a search for context; a frame to frame real-time contextual understanding of what the player is experiencing and matching that experience with resulting changes in our games. To that end, I believe that building emotional awareness will be the true hallmark of a “next gen” video game.

The games we remember best are the games that succeeded in establishing strong emotional connections with us. Titles that succeeded in that effort are the real gems of our industry and it is within those emotional connections that we will find the greatness of our art.

Music as the first battleground for context

My first forays into contextually-aware systems have always involved dynamic music systems. Not only is music one of the most easily identifiable elements of any game, it is also one of the strongest tools we have to tug on people’s emotional strings. It’s hard to think of other methods that are as universally accessible and have the same power to improve an emotional connection or evoke an emotional response as music.

The games that do dynamic music the best are the ones that are able to get away from a 1:1 relationship with basic game state changes. We are all familiar with combat music systems that are married to game state changes so literally that they become enemy telegraph systems. This level of implementation meets the minimum need for a change in tone during combat, but when a systemic music change is guaranteed, it becomes a redundant layer of messaging that impoverishes its emotional power. The player doesn’t need the musical cue to understand that they are in a fight and the utterly predictable nature of the music change ends up ruining what emotional impact it could have otherwise had.

Nuance is often treated as a luxury in game development, but without nuance, a game will never reach its true emotional potential. Without nuance, we’re left with a detached and repetitive mental exercise. While this can sometimes satisfy the needs for many projects and players alike, our most ambitious titles will have to explore greater domains than the purely intellectual.

So how do we begin to bridge that emotional gap? How do we account for the myriad of situations that can develop unpredictably, and reject the meaningless for the meaningful?

Building the Foundation of a Contextualized System

Identifying your context

To use Borderlands 1 and 2 as an example, I tried to approach the problem of redundant combat music by focusing on the context of the player and the level of intensity they were experiencing at any given moment. For other games, this could be any subjective value or critical theme that makes the game unique. For Borderlands, the quality that seemed most important to key off of was intensity.

Developing a system that was aware of when the player felt the game was most intense necessitated a way to reject the meaningless and unimportant fights for the more challenging and meaningful engagements.

In order to get that balance right, a lot of different variables needed to be tracked and interpreted. As those variables became identified, edge cases emerged that needed to be tamed with additional controls and constraints.

In Borderlands, this resulted in an experience where players that are over-leveled and unchallenged will generally not hear combat music until they reach a point in the game at which power levels between the player and enemies begin to even out and the player is once again introduced to threatening situations.

Establishing a Context Pool

Making a determination about context first requires that the system be capable of recording multiple key values over time and distilling these values down to a single variable. I’ll refer to this single variable as our “context pool.” By monitoring a context pool over time, we can establish a delta which informs us about the rate of change in key values. Once we understand the rate of change, we can then use that delta to form some conclusions about the behavior of the context pool.

In other words, you can use the rate of change to understand sudden spikes of the context pool vs slower increases over time. This opens up possibilities of reacting differently to slow vs. fast changes in your key values, providing some differentiation we would have lacked without the delta.

To give another Borderlands example where I was focused on intensity…

Three new monsters spawning and then attacking the player over 20 seconds is a very slow ramp up in intensity (or possibly none at all).

Three new monsters spawning and then attacking the player over 3 seconds is a much more aggressive ramp up in intensity.

If I were only keying into the state change from [no combat] -> [combat], these two situations would appear to be identical scenarios – but they are actually quite different situations and are best served with different approaches.
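A minimal sketch of that idea (all names and numbers here are illustrative, not the shipped Borderlands code) just tracks how much the pool changed since the last fixed-interval sample:

```cpp
// Hedged sketch of a context pool with rate-of-change tracking.
// Names and values are illustrative, not Gearbox's implementation.
struct ContextPool {
    double value = 0.0;     // accumulated threat
    double previous = 0.0;  // pool value at the last sample point

    void addThreat(double threat) { value += threat; }

    // Called at a fixed interval; returns the change (delta) since
    // the previous sample. A large delta means a sudden spike in
    // threat, a series of small deltas means a slow ramp.
    double sampleDelta() {
        double delta = value - previous;
        previous = value;
        return delta;
    }
};
```

Three enemies’ worth of threat landing inside a single sampling interval shows up as one large delta (a spike), while the same threat spread across several intervals produces a series of small deltas.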

An Alternative to Using the Delta

If the rate of change itself becomes a problematic control for the context pool, another way to approach interpretation of your key values is to consider your context pool to be an upwards pressure on the system. If a constant downward pressure is then applied that operates at a fixed rate, you can use that downward pressure to institute a control against a slow increase in the context pool. For Borderlands, this approach became ideal for us because we wanted to generally ignore ramps in intensity over long periods of time.

To display that in flow-chart format, here is how our high-level logic worked in Borderlands 1 & 2 for determining when to play combat music:

Flow chart of the high-level combat music logic.

Building the Music System for Borderlands 1 & 2

Now that we have an overview of an approach to understanding context, let’s explore how these ideas were applied to the Borderlands universe and some of the logic or thinking behind the decisions that led to the music system in both titles.

Establishing key values

Key values had to be chosen that inform us of intensity. Since intensity revolved solely around combat in Borderlands, the bulk of our key values came from the enemies themselves. Some of these values were:

Enemy level vs. player level

Inherent threat/challenge of enemy type

The badass rank of an enemy (Badasses = Elite Monsters)

Number of enemies

Player health

Shield health

Time since last combat.

Setting an upwards threshold

In Borderlands 1, I found that on average, five basic enemies of equal level (skags in this case) was a point at which an average player would begin to experience some pressure. The sum value of five skags became my initial reference point for what constitutes a dangerous situation. This value then became the upwards threshold and was tweaked slightly over time.

Deciding against keying into acceleration of threat

Originally, I thought that it would be really cool to have different changes in music based on how fast the context pool changed. If we keyed into the acceleration of threat, we could accent a slow intensity ramp differently vs. a fast build. But when considering budget and time constraints, it became apparent that we didn’t realistically have the resources to support that level of detail. Additionally, it would have complicated the writing process. So to simplify things, we used the raw threat value in the context pool instead of the delta of threat.

The importance of the decay rate

When imagining how the system would work, it became apparent that once an enemy added its threat value to the context pool, we couldn’t just remove it when they died. Even though technically dead enemies are no longer threats, if we removed them, the context pool could spike or plummet so often that it would render much of our data unusable.

The decay rate also served as a control against sudden loss of threat within a group of enemies. This meant that in a situation where 3/4 of an enemy group died within a small window of time (not uncommon in 4-player Co-op), we would not lose the combat music.

The converse situation where enemies gradually add threat over time was also offset by the decay rate. If the rate of incoming enemies couldn’t exceed the decay rate for an entire combat encounter, then the player was experiencing a relatively relaxed fight and didn’t hear combat music.

Use of directional thresholds

Since a meter provides bi-directional information, we set thresholds that only responded to a specific direction of movement. The threshold at which combat music was activated would only respond to an increasing value. It ignored decreasing values. This helped solve for situations that can produce a “ping pong” effect when the context pool hovers above and below that combat music threshold. Had a single threshold value been used, we would have had rapid messages sent to turn the music on or off. This seemed like the simplest way to bypass that problem without having to rely on timers.

Soft exits

Another benefit of having directional thresholds is that it allowed us to create a soft exit from the combat state. Because we could set the exit threshold lower than the trigger threshold, we created a zone where we could prepare for an exit without having to actually trigger an exit. We used that zone to start fading the music to a lower volume level. Then, if we hovered in that zone long enough or exited downward, we would fade out the combat music more rapidly.

Conversely, if the context pool increased from new threats enough to exit upward, we surged the music volume back up (avoiding awkwardness that can arise when triggering music too soon after exiting a combat state).
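Pulling the pieces above together (constant downward pressure, an upward-only entry threshold, and a lower exit threshold with a soft-exit zone between them), a hedged sketch of the gate logic might look like this; the thresholds, decay rate and names are all illustrative:

```cpp
// Hedged sketch of the directional-threshold logic described above.
// Thresholds, decay rate, and state names are illustrative, not the
// shipped Borderlands values.
enum class MusicState { Ambient, CombatSoft, Combat };

struct CombatMusicGate {
    double pool = 0.0;             // the context pool
    double decayPerTick = 1.0;     // constant downward pressure
    double enterThreshold = 10.0;  // only reacts to upward crossings
    double exitThreshold  = 6.0;   // only reacts to downward crossings
    MusicState state = MusicState::Ambient;

    void tick(double incomingThreat) {
        pool += incomingThreat;
        pool -= decayPerTick;
        if (pool < 0.0) pool = 0.0;

        switch (state) {
        case MusicState::Ambient:
            // Upward crossing only: start combat music.
            if (pool >= enterThreshold) state = MusicState::Combat;
            break;
        case MusicState::Combat:
            // Falling into the soft-exit zone: duck the music,
            // but don't stop it yet.
            if (pool < enterThreshold) state = MusicState::CombatSoft;
            break;
        case MusicState::CombatSoft:
            if (pool >= enterThreshold)
                state = MusicState::Combat;   // surge the volume back up
            else if (pool < exitThreshold)
                state = MusicState::Ambient;  // fade out fully
            break;
        }
    }
};
```

Because the entry and exit thresholds differ, the pool can hover between them without rapidly toggling the music on and off, which is the “ping pong” problem the directional thresholds were built to avoid.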

The necessity for overrides

Because Borderlands has plenty of boss and mini-boss battles, we had to design a way to set overrides where we forced the system into a combat or an ambient state and ignored the context pool entirely. This also created a requirement for returning the music system to an automated state after player death and other circumstances that could lead to the player exiting the context of a boss fight or scripted moment.

To help put this all together in a visual way, here is a diagram that represents how this would all look if we actually had a visual meter to represent the context pool and all the controls that were mentioned above.

Diagram of the context pool meter with the controls described above.

Truth be told, it would have been awesome to have this as a visual meter in the game, but the time to build a visual tool just wasn’t there and more pressing needs had to take priority. So we used debug text on screen to balance the system or reveal how it was behaving while testing the game.

A Story of Successful Contextualization

Before wrapping this article up, I have a story that I’d love to share. The experience I had highlights why I strongly feel that when we get context right, we provide some of the best experiences that modern games can offer.

So, we were a few weeks out from ship on Borderlands 2 and I had loaded up the Crater Lake level to run a balancing pass on some combat music parameters. I was balanced to the recommended level for this map with average gear and proceeded to wade into combat atop a long elevated path with a ravine of lava to my right and a high mountain wall to my left. I had a safe distance to travel before encountering any enemies and began to run down the path, deciding to just barrel right into them.

In my first wave of enemies, I fought some run of the mill bandits who did some damage, but really were nothing to worry about. I dispatched them easily – no combat music had played yet, good! It was working!

The second wave had spawned by now, and as I was killing off one of the last bandits from this group, more powerful enemies entered the fight before I could completely finish the wave.

While I was trying to get rid of a particularly obnoxious shock nomad, I accidentally shot the helmet off a Goliath, who then began to perform his rage transformation.

At this point, detecting the increased threat value from the now-raging Goliath, the music system kicked in and I started to get a little more pumped/worried. As the music surged, the Goliath began to charge towards me, bellowing insults. I started back-pedaling furiously, as I was now in real danger of dying. Not paying close enough attention, I emptied a clip into the Goliath and went to reload… only to hear an empty *click click* from my gun… to which my character suddenly spoke: “Oh no… not NOW!!!” as the Goliath closed in, roaring, and finished me off.

None of it was staged; all of it was emergent gameplay, and I had not expected any of it when I started walking towards the enemies. I didn’t even mind that I died, I just thought… “Man, that was awesome!”
That moment has stuck in my memory ever since.

In conclusion…

As artists, we almost never reach the point of perfection we desire before we have to release our work into the wild. This is definitely true for me when it comes to the music system for the Borderlands series, but I’m happy that even under aggressive timelines and other constraints, we were able to take a small step in a new direction. I’m excited to improve the system further in the future and to also experiment with applying this technique to non-music related areas.

It’s an exciting time for games and I’m eager to see how other teams tackle the challenge of building emotionally-aware systems over the next several years!

Thank you for reading!

-Raison Varner

Raison Varner is a composer and sound designer at Gearbox Software. He was the audio lead for Borderlands 1 and 2 and a contributor to the soundtracks of both games. Raison can be contacted via Twitter (@raisonvarner), and his music, including his contributions to the Borderlands soundtracks, can be heard on SoundCloud (www.soundcloud.com/rvarner).

A Few Things I’ve Learned in Six Years

For the past six years, I’ve been lucky enough to say my career is home in the video game industry. While I’m certainly not the oldest or youngest in my class of game developers, I still believe there are some powerful things I’ve learned in this short amount of time so far.

What I’ll leave underneath here is a collection of general “Truths” and nuggets of wisdom I’ve picked up in the time I’ve spent in this loco world of game development. Not all of these are audio-specific, actually. This advice is more about culture and efficiency, so it can be applied to any discipline, even outside of games.

Anyway, if you’re anything like me when it comes to list articles, you probably aren’t even reading this part, so here is the good stuff:

Relationships With Industry Peers Matter More Than I Used to Think

I was fortunate enough to jump into the industry at a relatively young age. At 21, I thought I had made it; my career dues finally paid. No longer would I have to clamor for the respect of my peers or spend countless hours locked away in my bedroom studio, shunning my social life while eating frozen pizzas and ramen on a tight weekly budget. The new career opportunity represented a giant feather in my cap. Part of me wasn’t wrong to think that I had put in my time, but little did my shamelessly expanding ego realize, I had barely begun to prove my worth.

For the first two or maybe even three years I spent in the industry, my social and working interaction with developers outside of my own department was kept to a minimum. In most ways, this didn’t exactly hurt me or my work, since I was considered a junior sound designer and the leads above me would track and manage most of my tasks. I was there to go in, create and implement as much quality content as I could, and go home. So why did relationships with any developers actually matter?

Unfortunately, back then I mostly wanted to meet people outside of my work, preferably other early-20-somethings sharing the same stage in life as me. I had struggled over the past few years to really connect with people my age while chasing the career I wanted, and at the time, I really felt like I was missing something because of it. I ended up focusing more on what was occurring outside of my life at the studio than on the inside.

So why does this matter? Because I came to find that this was an easy road to complacency. Subscribing to the false narrative that I had already somehow proved myself squandered some of my ability to grow, not only as an artist but as a true contributor to the development process. I wasn’t directly collaborating with people outside of my department enough to forge a more cohesive vision for the end product. Instead, my process involved me sizing up visual effects, animations, designs and user interface sounds on my own, without the outside perspective of the originator of the work. It was an assembly line of sorts, albeit an effective one sometimes, but nevertheless an assembly line as opposed to an involved collaboration.

When I arrived at Gearbox after two years at my first gig at Volition, I was “thrown to the wolves,” so to speak. I was expected not only to create the content, but also to gather information more on my own by meeting the who’s who in every department for the various levels, creatures and missions. Almost instantly connections began forming, and trust and higher levels of camaraderie were established.

By the time we finished up Borderlands 2, I felt more at one with my fellow teammates than I did in my Volition years, and there were fewer communication blunders that could have created inefficiencies in the project pipeline. Together, we put our blood, sweat and tears into something, and we could feel it working out with great inner reward.

This sort of relationship development even makes an event such as a release party feel infinitely more like a real celebration, because I could approach so many of the people I worked with, reminisce about that one bug or task we tackled together, and laugh off the challenges we overcame.

Finding the work myself and having a personal thirst to form the right relationships really paid off and brought out a better end product.

Don’t be the foolish, naive, self-centered person I was in the beginning. Think about it: if people have spent a few years working closely alongside one another, chances are they can count on each other to do good work or move things along the production pipeline to the right people. It may take two years to really build those close working relationships with your comrades, but it starts with you, on the individual level, taking that step on day one. Trust me, it makes doing what you do infinitely more satisfying.

Relax, We’re Making Games

I’ve seen this a million times before. Some guy has been crunching for three months and has already gained ten pounds from too much munching on savory late-night dinners. The guy’s had it. He’s curt to people in emails, taking longer than normal in the bathroom, and is visibly stressed everywhere he goes.

Many of us haven’t been the easiest to work with, and I’m not going to pretend I always have been. Let’s face it, game development isn’t quite as simple as those famous old Westwood or Collins College commercials made it out to be. Sometimes all the voice lines stop working right before certification, or someone accidentally removes the sound from the hundreds of animation files you’ve spent weeks on.

Besides the obvious “don’t be that guy” rant I could go on about, I’ll ask you to reflect upon the infamous line from one of those college commercials: “Can you believe we get paid to do this?” Yes, the adverts are hysterical in their inaccuracy, to the point that they’ve spawned enough memes to last us generations, but the line I just quoted is actually a mantra I’m still not afraid to live by.

No, I still can’t believe I get paid to do this. Yes, there are bad companies to work for. Yes, it does get stressful, but you’re trying to create the fun and give someone the happiness that you received when you played your favorite video game and enjoyed someone else’s passionate dedication. You’re not digging a ditch, you’re not serving waffle fries to grumpy, bigoted customers, you’re doing your best to expand upon an artform that’s still arguably in its infancy; an artform that’s changing the way people interact, hear stories, solve puzzles, learn, play together and enjoy one another’s company.

In the grand scheme of human history and our relationship with art as a species, it actually matters… and for most people, it is fulfilling, especially if they can channel the right energy into their passion with balance and stay on the quest to better themselves. No, that doesn’t mean I think you need to be happy about crunching all month; it’s just a pattern of mindful thinking that you can choose to subscribe to. It may not work for you, but during the harder times it has been a powerful ally that has kept me focused, kind to my colleagues, and eager to finish the job.

Working Hard and Crunching Are Not the Same Thing

Oh God, you’re probably thinking to yourself, this guy’s going to write literally the millionth article on crunching in the business. I know… and I could go on about how awful it is for a person’s health and how there’s a point of diminishing returns with work efficiency, and I will touch on those things, but still, here’s my take.

I’ve always avoided crunch as much as possible. Not to say that I haven’t been up at the studio until 2am sometimes, but for the most part, I try to take the most efficient path to victory, and it has taken me a long time to get better at that. My obsession with the quality of my work in the earlier years often got me stuck in long iteration cycles. While I had good intentions to make the game sound as awesome as possible, it would often take me far too long to do a sound design pass on even a simple scripted moment.

When work began ramping up on Borderlands 2, my task load grew to the point where my long iteration cycles became noticeably unsustainable. I began to set firmer, tighter goals and to establish clearer priorities about what to get done first each week. By doing so, I found myself actually improving my sound design by facing the added challenge of a self-imposed looming deadline. That mini-pressure became a creativity brain-hack in some ways, allowing me to get better results faster. Now, that doesn’t mean everything I created couldn’t have been done better with more time, of course not, but this is game development and there really isn’t ever enough time.

With these new practices in play, I found myself working smarter and crunching less. I can no longer justify starting my day by setting the expectation that I’ll be at the studio until midnight. Adopting that mentality just made me spend more time on the same amount of work and exhausted me further as the practice repeated daily. At that point, the quality of the sound would begin to suffer, my efficiency would fade and, most importantly, my health would worsen from the extra stress and time spent sedentary throughout the day. Avoid it at all costs, even if your studio’s culture pressures you to do it. Of course, sometimes it’s unavoidable because deadlines loom and a game needs to ship. Be mindful of your health and your work.

Like I said, this wasn’t a 100% sound article, and I think it could help just about anyone in any field, not just sound or game development. Who knows, maybe you read it and didn’t learn a thing! In that case, you are way ahead of me, and Godspeed to you! Thank you for taking the time to read my few nuggets of wisdom. I myself have a long way to go before I’ve figured any of this out!

The Threshold of Pride and Shame
http://creatingsound.com/2013/09/the-threshold-of-pride-and-shame/ (Sun, 22 Sep 2013)

You’ve just entered production with a prioritized list of audio assets that you need to design.

The ones at the top of the list are the important ones. Those are the ones where several disciplines are putting their efforts to culminate in something awesome. You know you’re going to need lots of time to get the audio just right. If the audio falls flat, then everyone’s efforts will be diminished.

And then there are the assets at the bottom of the list. The low priority sounds. You tell yourself, those just have to be good enough. Not only will the customer not care as much about those assets, but even you don’t care as much about those assets, and can acknowledge that they need only be serviceable so that there will be enough time to put real effort into the important sounds.

But then the time comes to design those serviceable sounds, and you end up putting just as much heart and soul into those as you would the others, and you end up having to spend way more time than you’d originally bargained for to finish everything. And even with the extra time spent, you still wish you’d had more time to spend on the important sounds.

We know when our work has reached a quality level that is to our satisfaction. And for most of us, that bar is exceptionally high, since creative people tend to be overly critical of their own work. There is a moment when our work has surpassed even that high bar of self-criticism, a moment where we feel a true sense of pride. For me personally, this moment also comes with a weird sense that I’m listening to someone else’s work rather than my own. I call this the Threshold of Pride.

There’s no telling how long it will take to cross that threshold. Could take hours. Could take months. And we’ll take that time, because when we pass that threshold and get that reward of being proud of our work, it is so glorious that it can carry us all the way to the next time we achieve it.

The problem is that until we get that feeling that we’re truly satisfied with what we’ve created, there is a lingering element of shame. Pretty much everything beneath the Threshold of Pride is shame and embarrassment and it’s filled with caveats and disclaimers and explanations about the work that remains to make it truly satisfactory. Here’s a visualization!

I tried to make it to scale.

And yet spending less effort on some assets is exactly what we sign up for when we decide to work on those sounds at the bottom of the prioritized list.

I know that not everyone has an issue with this, but for the people out there like me who do struggle with this, I want to say that making something that is merely good enough is a skill that can be learned with practice. It is possible to override our feelings and force ourselves to spend less time and effort on the less important sounds. But you have to actually practice it.

One approach that I found useful was to limit the amount of time available to work on any given asset. I would spend no more than an hour or two on any given sound design task before moving on to implementing it into the game, spotting it to the video, etc. I’d loosely plan my approach to get it out there in the wild within the time I’d allotted myself, and when it made its way into the wild, I would try to resist telling people that it was a work in progress and that there was more work that I already had in mind to make it better. It worked pretty well for me.

I know there are plenty of other people out there that struggle with this. I’m curious if anyone else has come up with other approaches. If you have, please leave a comment for the rest of us.

Wait Your Turn (VO Session Tips)
http://creatingsound.com/2013/09/wait-your-turn-vo-session-tips/ (Wed, 11 Sep 2013)

In light of the recent SUPER UPDATE TIME, the group has talked about doing shorter blog posts. After all, we’re not a large group, and writing long entries takes time, which can result in long gaps between posts. Additionally, I hope that these brief musings might serve as points of discussion within the audio community. Let’s jump right in!

So the fantastic VO artist DB Cooper has an entry on Designing Sound titled Sounding Real: Directing VO for Games. If you haven’t read it and you are new or newish to this VO direction thang, you should definitely give it your eyeballs. It’s informative, concise, and hits on some great points about helping a session go splendidly. If anything, you should see the picture of her in her hat. MAN, THAT HAT.

The first part of the article focuses on authenticity in delivery and it’s all great insight and advice. Again, please read it. However, the second half stood out to me, especially when I took in this piece of info:

During the actual session:

Interrupting an actor may seem like a time-saver, but what it really does is alarm the actor and undermine his confidence, which can lose you a strong performance, or it will irritate the actor and make him less likely to be receptive.

On the count of interrupting an actor during a line read, George Hufnagl has been found… GUILTY! Yes, when I first started VO direction/coaching, I made this mistake because, like DB says, I thought the actor needed to know what I had in mind right this second or it would be completely forgotten! WRONG. I admitted my wrongdoing on Facebook and we had a nice little exchange.

She’s totally right, too! No one likes to be interrupted while they’re talking in real-life conversation, and the same goes for a VO session. It throws a wrench in the works, the actor might get frustrated, and then the chi of the session is all off. I’ve since learned from my mistakes; I really just needed to take a beat with my thoughts. In addition, when you have two or more people monitoring the session, listening and waiting to respond is even more important to the rhythm of the process. It’s really about etiquette and being a good listener. The title of this post comes to mind.

Luckily, this is simple to fix! The easiest solution I’ve found is to format your script in such a way that you and the actor can write your thoughts down as needed. This is by no means definitive and I’m sure there are variations on this theme, but here is an example of what works for me.

LINE – the naming scheme used to differentiate lines of text. In addition to numbers, you might use some other system to help with naming your files during editing, whether the editor is you or someone else.

PHRASE – the text read by the actor

NOTES – this is where you jot down your thoughts instead of interrupting. Whether it’s to mark a line for which you’d like another take, a word you thought sounded odd or some other issue entirely, make a note here and wait for an opportunity to share your thoughts.
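
For what it’s worth, a script in this shape is also trivial to generate programmatically. Here is a small Python sketch; the character prefix and zero-padded numbering are just my own example of a naming scheme, not a prescribed standard:

```python
import csv
import io

def make_session_script(phrases, prefix="HERO"):
    """Build a VO session script as CSV rows with LINE / PHRASE / NOTES
    columns. LINE combines a character prefix with a zero-padded number
    so recorded takes can be matched to files during editing."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["LINE", "PHRASE", "NOTES"])
    for i, phrase in enumerate(phrases, start=1):
        # NOTES is left blank on purpose -- it gets filled in during the
        # session instead of interrupting the actor mid-read.
        writer.writerow([f"{prefix}_{i:03d}", phrase, ""])
    return buf.getvalue()

script = make_session_script(["Not NOW!", "Reloading!"])
print(script)
```

A CSV like this imports directly into a Google Drive spreadsheet, which fits the remote-session workflow described below.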

Additionally, when working with a digital copy, as might be the case in a remote recording session, I’ve found tracking information in Google Drive to be a fantastic solution. When recording remotely with a VO colleague in Australia, I created a spreadsheet in Google Drive containing all the information you see above. The actor also had access to this doc during the session. It allowed us to see notes as they were written down and comment on the fly. Super great time saver! Just be sure you mute your keyboard while they read, of course.

It’s a very simple solution, but to keep the wheels moving forward, fluid communication is key. After all, we just want everything to go smoothly and on time. Happy recording!

If you’d like a copy of the sample document above, feel free to download it here (Microsoft Word Doc).

(feature image by Kristine Dinglasan)

]]>http://creatingsound.com/2013/09/wait-your-turn-vo-session-tips/feed/0Super Update Time!http://creatingsound.com/2013/09/super-update-time-3/
http://creatingsound.com/2013/09/super-update-time-3/#respondTue, 03 Sep 2013 17:34:03 +0000http://creatingsound.com/?p=4702Dear audio friends hither and yon, it has been a while since we last spoke.

We hope that you miss us as fondly as we miss you. But don’t fret! We have excuses reasons.

We thought it would be a good time to give you a packet of catch-you-up, or catch-up, or ketchup.

I started last time, so I’ll go last this time.

Bryan’s Ketchup

The past few months have been a real change-up in my normal schedule. In June, I walked at Commencement, ending my time at UC Irvine, save for two classes over the summer. I then moved home, while commuting to those classes. Now that summer is over, though, I’m done and have finished my B.S. in Computer Science!

The majority of my summer was spent on the two classes (Compilers and Cryptography…), and reaching out to others to learn how to do freelance sound design before I find an entry-level job. Outside of that hunt, I celebrated the one-year anniversary of Sonic Backgrounds, made my fourth music video (PVC pipes this time!), bought a 3DS to celebrate graduation, and became a certified bartender!

Overall, I’ve now entered into the “fresh college graduate” category, meaning it’s all about the job hunt, and taking all sorts of opportunities. Excited to see where this path takes me!

George’s Ketchup

The past few months have been a flurry of audio activity! In addition to working on several apps, games, and various side projects, one of the recent highlights has been working with fellow CS’er Christian on our app, Pocket Audio Tools. What started with a brief exchange of ideas on Twitter moved to full production soon after, and it was our first of hopefully many partnerships. We have plans to support this “Swiss Army Knife” of audio tools for the next year and are open to feedback from the community. You can learn more about the app and see it in action with my hands-on demo over at 148Apps.

Christian’s Ketchup

Over here in programming land, things have been full of activity and learning and fun (in no particular order). At the last Super Update, George and I had just begun work on an audio toolset app that got its start in a seemingly conventional Twitter conversation. The app is now completed and released on iOS (currently on version 1.1.2). Though most of the programming challenges involved UI and usability-related matters, I did get the chance to write a tone generator to include in the app, for which I interfaced directly with Audio Units, the lowest audio layer on the iPhone. It’s been a great learning and collaborative experience, and we both look forward to supporting and expanding the app further.
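
For readers curious what a tone generator boils down to, here is a minimal sketch of the underlying math in Python. This is my own illustration, not the app’s actual Audio Units code; in a real render callback these samples would be computed buffer by buffer rather than up front:

```python
import math

def sine_tone(freq_hz, duration_s, sample_rate=44100, amplitude=0.5):
    """Generate one channel of a sine tone as float samples in
    [-1.0, 1.0]. Each sample advances the phase by a fixed step
    determined by the frequency and sample rate."""
    n = int(duration_s * sample_rate)
    phase_step = 2.0 * math.pi * freq_hz / sample_rate
    return [amplitude * math.sin(phase_step * i) for i in range(n)]

tone = sine_tone(440.0, 0.01)  # 10 ms of A4 at 44.1 kHz
```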

In other audio programming news, I was contacted by a company to write reverb code that would be included in an A/D unit. Sadly, this occurred just as I was leaving on vacation for two weeks. Though I did manage to complete two versions of the reverb, both of which received positive feedback, I was unable to polish it up in the time that they needed. Instead, I will refashion it into a plug-in that I will offer for free on my audio programming blog some time in the near future.
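
As an aside for readers new to reverb DSP: one classic building block is the Schroeder feedback comb filter. The sketch below is my own minimal Python illustration of that idea, not the code described above; real designs combine several combs with allpass filters:

```python
def comb_reverb(dry, delay_samples=1051, feedback=0.5, mix=0.3):
    """Run a mono signal through a single feedback comb filter: each
    sample feeds a delayed, attenuated copy of itself back into the
    delay line, producing a decaying series of echoes."""
    buf = [0.0] * delay_samples  # circular delay line
    out = []
    idx = 0
    for x in dry:
        delayed = buf[idx]
        y = x + feedback * delayed   # feed the echo back into the line
        buf[idx] = y
        idx = (idx + 1) % delay_samples
        out.append((1.0 - mix) * x + mix * delayed)  # dry/wet mix
    return out

# An impulse through the comb yields echoes that halve in level each pass.
tail = comb_reverb([1.0] + [0.0] * 4000)
```

Sweeping the delay length and feedback is what tunes the apparent room size and decay time.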

Roel’s Ketchup

It has been great putting the finishing touches on Saints Row IV and getting it out the door. Reception has been excellent, and the team is proud, with spirits high. Although it has been a time of rest and relaxation, there has also been a shift to DLC and developing next-gen audio tools. The future looks bright, with many exciting things in the pipe.

Aside from work, I have been trying out different non-audio activities like horseback riding, hiking, and archery. Basically, skills that will set me up nicely for the zombie apocalypse.

Ariel’s Ketchup

Hey everyone! It’s great to talk to you again after such a lapse! I want you all to know that we haven’t completely dropped off. All of us at Creating Sound are always quacking behind the scenes to figure out what the best thing is to do with this web site. We’re still finding our way. We’re still figuring out how we can reduce the barrier of entry to getting content on this site for not just ourselves, but for future contributors like you. And we do have some ideas. Some are even good! Maybe.

Besides that, I’ve lined up a couple of cool articles and interviews from guest contributors, and I also have a couple articles of my own a-brewin’. I think you guys are going to like them.

And that’s it! Some ketchup for you guys to snack on.

Also, a friendly reminder that we’re always looking for people to contribute to the site. It doesn’t matter what flavor of audio you want to write about, we’re pretty open to all manner of subjects. Just contact any one of us and we’ll help you make it happen.

XOXOXO
The Creating Sound Crew!

Sonic Backgrounds – Year 1 Review
http://creatingsound.com/2013/07/sonic-backgrounds-year-1-review/ (Thu, 18 Jul 2013)

A year ago today, Sonic Backgrounds was released as an interview series focused on audio degree graduates and the curriculum they learned. Since then, I’ve had the opportunity to interview 15 audio friends from 11 different colleges around the world. From the Vancouver Film School in Canada to Leeds Metropolitan in England, or the University of Edinburgh in Scotland to Universidad Javeriana in Colombia, we’ve cast a nice wide net over the audio education world!

Throughout these interviews, I’ve asked questions to get a glimpse of what audio degree programs are currently offering, and to see what led the graduates to choose their degree. Below, I summarize six topics I’ve found interesting from this first year of research!

Degree Itself (What’s in a Name?)

Sound design, and audio work in general, falls into a fusion of arts and sciences, and even though many programs seemed similar in curriculum, there is a split on whether the degree is a Bachelor of Arts or a Bachelor of Science. Of the 15 degrees, 8 were BAs, 4 were BSs, 2 were vocational degrees, and 1 was a Bachelor of Music.

The name of the degree also ranged in variety, from “Music Technology” to “Sound for Visual Media”, or “Interactive Arts & Media: Video Game – Sound” to “Creative Music Production & Technology”. Whether the title was a mouthful or not, the audio skills gained were mostly similar across the board. The one difference is that if a title mentioned technology, or was a BS, it was more likely to include projects of the Max/MSP, Pure Data, and Processing variety.

It’s also worth pointing out that four of the interviewees completed their Master’s degrees in audio-related programs. It was in these programs that the most music technology experimentation was encouraged, as students pursued their own creative dissertations.

Age of Discovery – Age of Decision

One fun question I used to learn more about the graduates themselves was figuring out at what age they decided to go into sound as a career. The results were pleasantly entertaining!

Half of the graduates knew they wanted to pursue sound once they hit their teenage years. Or perhaps I should say they knew they wanted to be rockstars as teenagers! Whether drummers or guitarists, these rockers knew sound was in their future. This dream continued until the early college years, when it was time to choose a degree. It was here that many decided to pursue music recording, sound design, or music technology with an educational and career mindset. That didn’t mean laying down the axe, though! As audio people, we always seem to have our inner musician going: a creative outlet that needs to be fed occasionally!

The other half also decided to pursue audio as a career in the college era. Whether it was a casual interest or an attempt to try something new, these graduates discovered sound in school, realized it was something they loved, and hung on to it to the end!

Prior Degree

Since not all the graduates knew audio was what they wanted to do until a little later in life, some had the bonus of earning other college degrees first, an exciting opportunity to blend different fields into the world of audio and make each graduate stand out in a unique way. For example, Roel has a BS in Chemistry, Brendan a BA in Spanish, and Sean a BS in Psychology!

Class Size

One thing true across the board was that all of these audio degrees were small. Each degree had about 20-35 students on average, of whom anywhere from 4-8 were actually focused on game audio. Granted, game audio is a relatively new field, but it’s interesting to see how few of the students pursuing the larger general audio field find interest in the interactive side of things. It seems to be the technology/art, left-brain/right-brain line that separates the groups. One side focuses on music, production, and linear media, while the other focuses on programming, interactivity, and technological experimentation. It’s great to see that even in our own little audio world we have such diversity!

Tools Learned

Before we even get into this section, you can guess what tool was on every interviewee’s curriculum: Pro Tools. The “industry standard” stands tall in each university’s program, but not alone. Logic, Cubase, and Reason are the next most common DAWs to find at an institution, with Audition and Nuendo making appearances as well.

On the hardware side of things, a couple schools make sure to get students behind the console occasionally, and handle mixers, racks, and microphones as well. This is more likely to be true at a vocational school rather than a university.

Then, depending on how technical a program gets, students can be found digging into audio programming in Pure Data, Max/MSP, and Processing. And on the implementation side, colleges cover game engines such as Unity, UDK, and CryEngine, and audio middleware such as Wwise and FMOD.

Projects / Curriculum

The most interesting question I asked was “What kind of projects did you have in your classes?” I can’t adequately cover here all the detail that each of the graduates gave in their answers, so I encourage you to go check out what they themselves said! But in summary, here are my favorite discoveries!

The most common project is the sound replacement video. Whether the assignment focuses on sound design, mixing, or post-production in general, this seemed to be a staple project at any of the colleges.

The next most common project was teaming up with another field, whether for a film student’s short movie or a game design student’s level. It’s great to see a curriculum cross-pollinate the disciplines at a school and get the students working collaboratively, as that’s how things work in the real world!

When students worked solo, especially at the more technical schools or in the Master’s programs, projects included game mods, creative audio programming projects, and sometimes small games from scratch, where the audio student is also tasked with building out the level.

An interesting project that a couple graduates described was the full process of finding a local band, recording them, releasing their single, and also interviewing them for a podcast. One school even throws in a music video as well! This I found remarkable, as it not only gets the audio students practical experience, but it builds a sense of community with the local music groups, and gives them a little publicity too!

All of the colleges gave focus to recording sessions, whether in music or VO, and many included music history in the curriculum. And keeping with the classics, if it’s a performance degree, recitals are your projects.

Then, in comparing a university to a vocational school, one might have either algorithmic composition, or the repair of audio equipment and music business contract writing; both very interesting topics!

Shout It Out

This past year has been quite a bit of fun, and none of it would be possible without the wonderful graduates I interviewed. So, allow me to give a big thanks to each of them, and their schools!

I’m looking forward to continuing Sonic Backgrounds in its second year, covering new universities, new students, and possibly how current programs are evolving! So if you are an audio degree graduate, or even an audio teacher, feel free to contact me (bryan@creatingsound.com or @BryanPloof), and let’s make some interview magic happen!

I have two prior degrees. One in Psychology and the other in Television: Post-production Editing. While both of these previous degrees have had influence in getting me where I am today, I fell in love with sound design through editing video. While adding sound effects to various video projects, I realized the integral part sound plays in audience perception. For instance, when there is a door in a scene, the sound designer has a chance to tell the audience the weight, material, age, and significance of that door through the use of sound effects.

How old were you when you found out sound is what you wanted to do for a living?

28. I was doing freelance video work around Chicago. I knew that I liked sound, and had a passion for video games since I was a kid. For some reason I never took the idea of a career in games seriously until this point. I decided to pursue what I truly enjoyed and went back to school to get into game audio. It was one of the best decisions of my life.

Was a school degree the first thing on your mind, or did you do everything self-taught?

I don’t think school is the best route for everyone, but a degree was very important to me. College offered a structured curriculum and provided the tools to build a good foundation for learning my craft.

Bass Boost

What is your specialty/preference of the sound fields (sound design, music, recording, audio programming, implementation, etc)? What do you like most about it?

I love all aspects of sound design, specifically making sound effects. They are so important in creating something artistic while also conveying important information to the audience. I also really enjoy the implementation of sounds, field recording, and dialogue. Hearing your work come to life and being able to direct the way the audience experiences the content is one of the most important things in game audio.

What sound tools did you learn in your school curriculum?

Early on in the program, we were introduced to Adobe Audition and Sound Forge. However, the curriculum allowed us to use any tools or methods needed in order to get the work done. On any single project, students would use Pro Tools, Reaper, Cubase, etc. In more advanced sound classes we got our hands on the Unreal Engine, CryEngine, Wwise, and Pure Data to name a few.

What kind of projects did you have in your classes?

We had a lot of different projects. In lower-level courses we were tasked with creating sound replacements for game trailers and movies. Then we started modding games and replacing their sounds with our own. Eventually we got into the Unreal Engine, where we not only had to learn how to plug in sounds, but also how to create the map and environment. This was a very good way to learn which sounds fit where and how sounds are triggered within the game. Our senior capstone project, Water Aloft the Ridge, used Unity 3D in conjunction with Wwise.

Were your teachers audio professionals? Anybody the audience would know?

I believe it was a good mixture of both. Tom Dowd is probably the most well known in the game industry. He helped to create Shadowrun and was lead designer for MechAssault!

Plug-Ins

Did you do any side projects during school? If so, what were they like?

I collaborated with a small team of audio students on an interactive story book called Chocolate Attack by Apologue Entertainment. We then went on to complete other side projects for the company. These projects were mentioned in a previous Sonic Background by Roel Sanchez.

I’m not too sure if this would be considered a “side project,” but during my senior year I completed an internship at Robomodo. I was hired on to do some video work, which entailed capturing gameplay and shooting behind-the-scenes footage. I also got to add the player sounds, using Wwise and the UDK, to Tony Hawk’s Pro Skater HD, which was a lot of fun.

Towards the end of the internship, I had the opportunity to create sounds for a game prototype that Robomodo was seeking funding for. This eventually led to contract work on the unannounced title after graduation. After that, I completed an internship at Sony, doing sound design and implementation for the recently released Sly Cooper: Thieves in Time.

How many of your side projects were published? Any of them turn profitable?

I hope all the projects I’ve worked on turned out to be profitable!

Echo

How large was your graduating class? Were you all close?

I graduated with around 35 game design students. There was only a handful of game audio students and we were very close. We had many of the same classes year after year and worked together on audio projects up until graduation.

How often do you work with your old classmates today?

Unfortunately I haven’t had a chance to work with any of my old classmates since graduation. I recently moved out to the west coast which places me even farther away from my Chicago roots and my game audio friends. Maybe someday!

Any old classmates you want to mention? The more the merrier with the audio community!

Amplify

Do you feel more prepared for the sound industry than if you had not graduated from your program?

I definitely believe I am more prepared for the sound industry than I would have been without the program. If anything, the program pushed me to work on sound projects outside of my comfort zone and gave me the opportunity to learn strong fundamentals in the science of sound.

Do you have a website for your portfolio? How often do you blog on it?

I don’t have a blog set up as of now, but I hope to soon. You can find my website and portfolio featuring some of my work at www.SeanClouser.com.

Do you use social networking? How often, and what communities?

I think Twitter is pretty cool and the game audio community is unbelievably awesome. In fact, I found out about my internships at Robomodo and Sony through Twitter and have met some amazing people as well.

Fade Out

Any last words for future audio people looking to carve their education and career paths?

While my school assignments taught me a lot about sound, the best experiences came from working on outside projects and internships. It is never too soon to start gaining real-world experience. Also, I can’t say enough how important it is to join the discussions found on social networks. It is a great environment for opening a direct dialogue between you and those whose work you admire or find inspiring.

About Sonic Backgrounds

The sound industry is an ever-growing field, ranging from linear sound design in film and TV, to interactive audio in games, and from live theatrical sound design to field recording for the creation of custom libraries. It is only recently, however, that schools have begun to offer sound-specific degree programs. Graduates of these new programs are now entering the industry, which raises the interesting question of how these specialized programs are preparing individuals for the sound world, compared with the older paths of entry such as pure passion, musical talent, a film degree, or a computer science degree.

“Sonic Backgrounds” is an interview series focused on recent graduates of these educational sound programs around the globe, to see what exactly they provide, and how they are shaping the new “academic” sound artist.