UDK sound explorations

Implementing sound in UDK

With no streaming capability in UDK's audio system, all sound referenced by actors and Kismet sequences in a level is loaded into RAM on level load, and will remain there until the level is unloaded. The same applies to all other placeable assets of course, and level designers will usually use Unreal's Level Streaming features to load/unload areas of a map as required, in order to reduce memory usage and the performance load, and also to reduce initial load times. It's a good idea to manage audio in the same way.

The benefit of placing audio assets and Kismet sequences in streamed audio-only levels is that it gives you a very tight rein on which sounds are loaded into memory at any one time - thereby helping you to stay within budget and not exceed your maximum number of channels. It also keeps audio separate from the rest of the game, which allows for greater control over your assets/implementation, and means that the files you need to work on are less likely to be locked. This tutorial will show you how to put your audio assets in designated audio levels, and control the loading and unloading of these assets using level streaming volumes.

In a nutshell:

Make a new level under the Persistent level and populate it with sound actors and Kismet sequences

Place a streaming volume over the level

Attach streaming volume to level

Level is only loaded while player is inside volume

How does it work?

There are several ways to load and unload levels during gameplay (including Kismet based and ‘distance from viewer’) - I'm going to use Volume based streaming as it seems to be the most robust method, and it's simple to set up and make changes to as your project progresses.

The image above shows a basic map containing 3 distinct areas: Farm (brown), Forest (green) and Industrial (grey). I've made a new level for each area and populated it with some simple ambient sounds. The three coloured circles show the maximum attenuation range for the sounds, and the 3 red bands show the perimeter of the streaming volumes associated with each of the levels. When the player touches a streaming volume, the level associated with it is streamed in so that by the time the placed sounds contained within it need to become audible, they are already present in memory. This obviously means that your player must enter the volume in a place where the loading/unloading of sounds is not audible - i.e. the boundaries of the streaming volume must extend beyond the max attenuation radius of any placed sound actors.

(Theoretically you could place a sound into its own level with a cylindrical streaming volume placed just beyond the sound's max attenuation radius - effectively meaning that when that sound is out of earshot, it is unloaded. (This would be a very useful out-of-the-box feature btw, Epic!) Unfortunately this isn't very practical, because streaming volumes carry with them a small performance hit; this is something to keep in mind when planning your levels.)

Level load times will depend on how much audio is referenced in the level; you'll need to experiment with this to make sure you've left enough time for the level to load before any of its audio is needed.

Steps to set it up:

Add some levels under the Persistent level:

Go to the Levels tab in the Content Browser

In the Level menu, select 'New Level'

Give it a name relevant to the area (using a prefix like 'AUDIO_' is a good idea too)

Leave 'Kismet' selected for the streaming method

Repeat these steps for your other areas

Save all levels

Add your content:

Right click on one of your levels and select 'Make Current' (or double-click it)

Now anything you add in the viewport will be placed in that level, so just add some ambient sounds as you normally would

If you've already placed sounds in the Persistent level, you can select them and Cut+Paste them into the current level

Any Kismet sequences you make will also be placed in the 'current level'

Add the streaming volumes:

Always place your streaming volumes in the Persistent level

Using the builder brush, create a box (or cylinder/polygon etc. if it fits your level layout better - I'm using cylinders) around one of your areas, leaving enough extra space beyond the max attenuation radii of your sounds to allow the level to stream in before any of those sounds need to become audible

In the Levels tab in the Content Browser, right click on the level you want this volume to load, and select 'Add Streaming Volumes'. (You can add more streaming volumes in this way, or clear them all and add a new one using 'Set Streaming Volumes')

You can see which Streaming Volumes are attached to a level by right-clicking the level and selecting 'Edit Properties'

Do the same for the other streaming volumes

Monitoring the results

Unreal offers a number of detailed resource monitoring tools for audio; I'll be using the following:

Audio Memory Used - Total current memory allocation for soundwave data

Wave Instances - How many soundwaves are currently playing

Wave Instances Dropped - Number of soundwaves not currently playing as a result of exceeding the maximum number of channels

Audible Wave Instances Dropped - Number of soundwaves not currently playing as a result of exceeding the maximum number of channels, which would otherwise be audible

You need to display the editor log window for this console command to work. Add the -log switch to the Target path of your editor desktop shortcut:

e.g. G:\UDK\UDK-2012-05\Binaries\Win64\UDK.exe editor -log

When you execute this command you'll get a log printout of all soundwaves currently loaded into memory, with these headers:

NumBytes/MaxBytes - these seem to relate to the object container, hence very small sizes, so disregard

ResKBytes - Resource KBytes probably? Size of the soundwave data

TrueResKBytes - presumably this is ResKBytes - NumBytes

I made a quick demo to illustrate load/unload between 3 areas in a map - here's a video. (I blacked out half the screen so the memory stats would show up better).

I've been learning more UnrealScript recently, so I decided to try making a Kismet node that would open up the Continuous Modulator SoundCue node.

For the indie developer or student, the Continuous Modulator node can be something of a dark art. It holds the key to some powerful implementation resources, but it's relatively undocumented, and it's nigh on impossible to use without code support.

It contains a Parameter Name property which can be hooked up to a float generated in code, which can then be used to control the Pitch and Volume of a SoundNodeWave in a SoundCue. If you open up any of the engine loops in UDK (e.g. SoundCue'A_Vehicle_Manta_UT3g.SoundCues.A_Vehicle_Manta_EngineLoop') you can see a working example.

Normally a programmer would write code for a specific use of the CM node - this custom Kismet node will open it up a little and hopefully allow you to apply it on a more spontaneous basis.
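(If you're wondering what's inside a node like this: a custom Kismet action is just an UnrealScript class extending SequenceAction. The skeleton below is my own rough sketch of the general shape - it isn't the actual downloadable class, and the names are made up - but it shows the key call, AudioComponent.SetFloatParameter(), which is how a float value reaches any Continuous Modulator nodes listening for that Parameter Name.)

class SeqAct_ModulateSound extends SequenceAction;

var() SoundCue Sound;      // SoundCue containing a Continuous Modulator node
var() name ParamName;      // must match the Parameter Name set in that CM node
var() float InputValue;    // the value to send (velocity, distance or a plain float)

var AudioComponent AC;     // playback component created on first activation

event Activated()
{
    if (AC == none && Sound != none)
    {
        // create and start a component for the cue (spatialization omitted in this sketch)
        AC = class'WorldInfo'.static.GetWorldInfo().CreateAudioComponent(Sound, false);
        if (AC != none)
        {
            AC.Play();
        }
    }

    if (AC != none)
    {
        // push the value to every Continuous Modulator node listening for ParamName
        AC.SetFloatParameter(ParamName, InputValue);
    }
}

defaultproperties
{
    ObjName="Modulate Sound (sketch)"
    ObjCategory="Sound"
}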

Here's a list of the features:

• Spatialization (SoundCue can be located in 3D space and attenuated)
• SoundCue can be attached to an actor Source
• Use Source actor's velocity as input value for the CM node
• Use distance between 2 actors as input value for the CM node
• Use a Float as input value for the CM node

Here's the node and its properties:

Sound Cue: Drop your SoundCue in here

Param Name: This is a unique reference to link this node and the Continuous Modulator node together

Use Source Velocity: Check this box to use the velocity of an actor attached to the Source variable link for the Continuous Modulator input value

Use Distance Between A and B: Check this box to use the distance between two actors attached to the A/B variable links for the CM input value

Use Float: Check this box to use a Float variable for the CM input value

Enable Spatialization at Source: Locate the sound at the actor attached to the Source variable link

Fade In: Time in seconds to fade the sound in

Fade Out: Time in seconds to fade the sound out

Show Velocity in Log: This is useful for seeing the speed of your moving source, so that you can set the Min/Max levels in the Continuous Modulator accurately

Just drop it into your game's custom scripts folder - if you're not sure what that is, you can put it in: \Development\Src\Engine\Classes\

If you use it, please leave a comment below to say how and what for.

Here's a little demonstration video showing a few things I tried out:

Notes on the demo video:

1. Mover

Let's say you have an actor moving around in the game and you want to use its speed to alter the pitch and volume of its sound.

First off, set up your SoundCue. I'm using 4 different SoundNodeWaves so that as well as altering the pitches to reflect changes in speed, I can also use the volume param to fade up/down specific sounds depending on the speed. For example, I have an 'Idle' sound that I only want to hear when the speed is between 0 and 1000.

You can see that I've called the Parameter Name 'PodParam' - this can be named anything - it's just a unique reference that we'll be using in the Kismet node to send our Param data to the right Continuous Modulator nodes.

Next, set up the Kismet. In the image below, I'm attaching my SoundCue to a static mesh that's being moved around by a Matinee movement track. This is just one way of moving an actor of course - if you want to find out more about Matinee, here's a good tutorial.

2. Random Pitch Modulation

This demo shows how you can loop tiny waveforms (in this case a single triangle wave oscillation, 0.001s long) and alter their pitch using the Float variable. I'm using the Set Variable node to change the Float value to a random value between 0 and 10 (using the Random Float variable), every 0.1s.

3. Dynamic music

Having the ability to use game events to control the volume level and pitch of a SoundCue makes for a lot of options when planning a dynamic music system. In this example I've split a simple music track into 5 looped stems (3 synth parts and 2 drum parts) each 10 seconds long. In my level I've made 5 coloured bands on the ground, and placed a Target Point actor in the middle of each band. Each stem is assigned to one of the Target Points, and as the player approaches a Target Point, its assigned stem fades up. When the player reaches the Target Point, the volume is locked at full volume until s/he turns around and goes back toward the starting point. This effectively means that as you run along the game path, the music ramps up in intensity, and as you run back the music intensity drops down.

It works by using the 'Use Distance Between A and B' parameter while the player is approaching a Target Point; then when the player reaches the Target Point, 'A to B' is switched off using a Trigger Volume (so that the volume level doesn't drop again as the player runs past the Target Point) and the 'Use Float' parameter is switched on, to keep the volume level fully on.

Note that in order to keep stems playing while they're 'switched off' it's necessary to set the volume to 0.001 instead of zero. Setting the volume multiplier to zero will stop the SoundCue. A volume multiplier of 0.001 should be close enough to silence for most applications.

This is a very basic demo just to show how volume levels can be controlled - it's pretty robust as long as you stick to the middle of the coloured bands (essentially my game path), but I'd recommend you spend some time developing your own creative solution!

4. Micropolyphony for beginners...

Just a quick example of what can be achieved using tiny waveform loops, amounting in total to less than 8kb, good for projects with a tight memory budget! I'm using the Float variable to modulate the pitch of 6 (not 7 as it says in the video!) triangle waves. Each waveform is placed in its own SoundCue (one of which can be seen below right), and each Continuous Modulator node has a different min and max value for the pitch - also, the time taken for one complete modulation between these pitches is slightly different for each. This results in a nebulous web of unpredictable waveform interactions...

This is a little system I put together to orbit sound nodes around the player. I was thinking about ways to place ambient sound sources in order to achieve a wide and randomised spread of ambient sound.

The basic idea is to use trigonometry to move AmbientSoundMovable actors in elliptical orbits at different speeds and directions, and for these actors to play back soundcues containing arrays of randomized ambient sounds. Potential uses for this are survival horror (where you might want to increase tension using creepy off-screen sounds that originate from unexpected sources), open world adventure (e.g. a forest scenario filled with small animal movements, leaves dropping, branches brushing against each other etc.), and indeed any scenario that requires 360 degrees of moving sound.

First off, you need to gather your sounds. I've used 3 ambient sounds for this demo, which takes place in a dark cavernous space: little water droplets, monster sounds and critter movements and sounds. I set up the soundcues in roughly the same way for each - here's how I did the monsters:

There's a looping node in the cue so that the sounds play constantly, and I've added some delay nodes to put space between each sound. For the monsters, there's a random delay of 5-10 seconds between each sound playing. The critters trigger more frequently, and the water droplets most frequently. (The mixer's just there to tweak the levels on some of the sounds).

Once you've got your soundcues set up, add them to your level somewhere as AmbientSoundMovable actors so you can reference them in Kismet. It doesn't matter where you put them; in my demo I've placed them off the edge of my map, out of earshot, since I'm not setting up any stop/start sequences for now and the sounds will start playing on level load.

Once the soundcues are set up and the actors are in the level, it's time to look at the Kismet. The principle is fairly straightforward: I'm using the Set Actor Location node to place each sound actor at a point on the circumference of a circle, where the player is the centrepoint. I'm looping this routine, and each loop increments the angle between the player and the sound actors by a small amount, which effectively moves them round the player like stop-frame animation.

The maths to find a point (x, y) on the circumference of the circle (for placing the sound actors) is governed by this parametric equation:

x = cx + r * cos(a)
y = cy + r * sin(a)

Where cx and cy are the origin (player x,y co-ords in this case) and 'a' is the angle between player and sound actor.

There aren't any trigonometric functions exposed in Kismet, so I wrote some custom actions which you can download here. Put them in the /development/src/engine/classes directory in your UDK installation and they should show up in the Math section of your Kismet actions.
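(If you're curious, a minimal Kismet action exposing a single trig function can be sketched roughly as below. This is illustrative only - it isn't necessarily how the downloadable actions are written - and it follows the same pattern as the stock SeqAct_GetDistance class. Note that UnrealScript's Cos() expects radians.)

class SeqAct_Cosine extends SequenceAction;

var float Angle;     // filled in from the linked input Float (radians)
var float Result;    // written back out to the linked, writeable output Float

event Activated()
{
    Result = Cos(Angle);
}

defaultproperties
{
    ObjName="Cosine"
    ObjCategory="Math"
    VariableLinks(0)=(ExpectedType=class'SeqVar_Float',LinkDesc="Angle",PropertyName=Angle)
    VariableLinks(1)=(ExpectedType=class'SeqVar_Float',LinkDesc="Result",PropertyName=Result,bWriteable=true)
}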

To get cx and cy, use the GetActorLocation node to get the player location as x and y co-ordinates. You can't get the location of the player itself, so you need to make a very small DynamicTriggerVolume using the builder brush and place it over the PlayerStart actor in your level. You can then attach the DynamicTriggerVolume to the player in Kismet, and get its location using GetActorLocation, to effectively get the player location.

'r' will determine the radius of each circle - i.e. the distance between player and sounds.

The amount added to 'a' each loop determines the rate at which the actors move round the player - bigger increments = bigger angles = faster movement.
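(Putting those pieces together, each pass of the Kismet loop is effectively doing the following. This is an UnrealScript-style sketch with made-up names, just to make the maths concrete - the actual system does the same job with Kismet nodes.)

function UpdateOrbit(Actor SoundActor, vector PlayerLoc, float Radius, float Angle)
{
    local vector NewLoc;

    NewLoc.X = PlayerLoc.X + Radius * Cos(Angle);   // x = cx + r * cos(a)
    NewLoc.Y = PlayerLoc.Y + Radius * Sin(Angle);   // y = cy + r * sin(a)
    NewLoc.Z = PlayerLoc.Z;                         // keep the sound at the player's height
    SoundActor.SetLocation(NewLoc);                 // the Set Actor Location step
}

On each iteration the angle passed in is incremented by a small fixed step - the size of that step sets the orbit speed, and Radius sets how far out the sound circles.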

You can watch a video of the system in action here. The first half of the clip is a basic demo showing where the sound actors are when in orbit (without sound); the second half shows a potential use for the system (with sound).

Thanks for reading, post in the comments if you have any questions or tutorial requests!

Summary of the project:
There's an event in a project I'm working on where rocks (KActors in this case) fall from the top of a cliff and land in a small river, so this next foray into UDK sound is about water collisions. I thought it would be fairly simple to set up the UTWaterVolume with a Physical Material that had an impact sound assigned to it, but after hours of trying out different combinations of collision properties for the KActors and water volume, I decided to do something custom in Kismet instead. This would also allow me to trigger different sounds for large, medium and small impacts, as well as trigger exit splash sounds and lateral sloshing sounds.

My basic plan was to make sounds (6 variations of each) for large, medium and small occurrences of 3 splash types: water entry, water exit and in-water lateral movements. Then I would add these sounds to my level somewhere off the edge of the map so they're out of the way, and use trigger volumes in Kismet to move the appropriate actor to the splash location when needed.

Setting up the water and trigger volumes
For setting up the water, I recommend watching this tutorial by Raven. I also added some particle effects for the water splash. Once you've got that set up you need to add 2 trigger volumes, one for triggering entry and exit splashes (hereafter referred to as TriggerVolume_1) and the other for triggering the lateral movement splashes (TriggerVolume_2). TriggerVolume_1 should cover the entire water volume, overlapping as closely as possible. TriggerVolume_2 should be as wide and long as the water volume but only 16 units deep, and should be placed just below the surface of the water. The images below show the green trigger volumes in the viewport - I've separated them out a bit from the water and reverb volumes to make it easier to see what's going on.

Adding the actors
I added each of my sound waves as AmbientSoundToggleable actors and placed them just off the edge of the map. I used the barrel static mesh that comes with UDK for my KActors and placed them next to the water so I could toss them into the water with the physics gun. (To add assets to your map, select the asset in the editor, right click in the viewport and select Add Actor; the listed menu items should include some suggested actors for your asset.)

There's a setting in the KActor properties that needs to be changed in order to register a touch event in Kismet: select the KActor in the viewport, hit F4 and find the No Encroach Check property in the Collisions section - uncheck it.

Kismet
Here's an overview of the Kismet sequences I made for this project:

To get the Trigger Volumes to work, you need to set the Class Proximity type to whatever it is you want to make the splash (could be 'Actor' if you want to include everything) and untick Player Only. This is done in the Trigger Volume properties in Kismet:

Here's a rundown of what's happening in each sequence:

Entry Splashes

1. When TriggerVolume_1 is touched by the KActor, assign the KActor to an object variable, get the location of the KActor and store it to a vector variable.
2. Create a new vector variable by taking the co-ordinates of the first vector and adding 30 units to the Z co-ordinate - this is so when we move the sound actor to the splash location, the sound always plays above water, i.e. isn't affected by the reverb volume's low pass filter.
3. Get the KActor's speed and compare it against 3 ranges of values, for small, medium and high impacts (the screenshot below only shows the sequence for medium impacts; the range of values for small impacts is greater than 80 and less than or equal to 400, and the range for large impacts is greater than 700).
4. Next up is a random integer node for values 0 to 5, for our 6 medium splash sounds, followed by a repetition avoidance routine which stops the same sound playing twice. After that there are a couple of comparison nodes to determine the number chosen by the random integer node (there's a code sketch of this pick-and-move step after this list).
5. The next step in the sequence moves the selected sound actor to our pre-defined location and plays it once.
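(Expressed as code rather than Kismet nodes, steps 4 and 5 for the medium splashes boil down to something like the sketch below. It's a fragment rather than a complete class, and the array and variable names are hypothetical.)

var AmbientSoundToggleable MediumSplashActors[6];   // the six medium splash sounds parked off-map
var int LastMediumPick;                             // remembers the previous choice

function PlayMediumSplash(vector ImpactLoc)
{
    local int Pick;

    // random integer 0-5 for the six variations, re-rolled if it repeats the last one
    do
    {
        Pick = Rand(6);
    } until (Pick != LastMediumPick);
    LastMediumPick = Pick;

    // move the chosen sound actor to the impact point, raised 30 units as described above
    MediumSplashActors[Pick].SetLocation(ImpactLoc + vect(0,0,30));
    // ...then toggle it on once, as the Kismet graph does with the AmbientSoundToggleable
}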

Exit splashes
The exit splashes work in the same way but are triggered by the Untouched output on the Touch event node.

Lateral movements

1. When TriggerVolume_2 is touched by the KActor, capture its velocity and compare it to a very small threshold value to determine whether or not it's actually moving.
2. If the actor is moving, wait a fraction of a second (0.03s) and capture the velocity again.
3. Get the location of the KActor for use later on.
4. Subtract the second velocity reading from the first velocity reading to get a rudimentary idea of the acceleration (if I ever go back to this project I'll use a proper equation that includes distance and time! - see the note after this list).
5. Create a new vector variable by taking the co-ordinates of the first vector and adding 100 units to the Z co-ordinate - this is so when we move the sound actor to the slosh location, the sound always plays above water, i.e. isn't affected by the reverb volume's low pass filter.
6. Compare the acceleration value against 3 ranges of values as before to determine which set of slosh sounds to move on to: small, medium or large (again, the screenshot above only shows the sequence for medium sloshes).
7. Run through the random number generator as before, and move the selected sound actor to the predefined location.
8. There's an extra step in this sequence, which uses a simple bool routine to stop a currently playing sound from being triggered again. This prevents a sound being cut off prematurely when the same sound has previously been triggered and has just been stopped.
9. If the actor 'Untouches' the trigger volume and is below it, play a 'glug glug' sinking sound.
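(A quick note on step 4: with two speed readings taken 0.03s apart, a true average acceleration is just the change in speed divided by the sample interval:

a ≈ (v2 - v1) / 0.03

where v1 and v2 are the first and second readings. Subtracting one reading from the other, as above, gives a number proportional to this, which is good enough for sorting impacts into small, medium and large.)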

The Reverb volume
The reverb volume provides a low pass filter on sounds originating outside the reverb volume while the player is listening from inside it - this is applied to all splash and slosh sounds, since they are deliberately positioned just outside the reverb volume. It also adds a LPF to sounds originating inside the reverb volume while the player is listening from outside - in this case it gets applied to the glug sound played when an object is sinking. There's also some underwater reverb. Here are the ReverbVolume settings:

Summary of the project:
I wanted to see if I could use Kismet to make a pseudo Doppler effect. My initial plan was to use the Doppler effect equation to adjust the pitch of a sound emitting object as it passed the player on any axis. However, it soon transpired that this was beyond the default functions in Kismet (I didn't know at the time that I could write my own Kismet classes), so I decided to limit the direction of travel to the x-axis, and go for something that sounded more or less like the frequency shift you'd get with a real life Doppler. I hope to go back to this sometime to get it working on all axes, and use the equation in real-time to affect the pitch; it should be achievable using UDK with SuperCollider, for example.
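(For reference, the classical Doppler equation being referred to is:

f_observed = f_source * (c + v_listener) / (c + v_source)

where c is the speed of sound, v_listener is the listener's speed towards the source, and v_source is the source's speed away from the listener. The ratio between observed and source frequency is what you'd feed into a pitch multiplier if you were applying it in real time.)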

Setting up the moving object, sound and player:
I started off by adding my moving object and sound to the world. I found a suitable static mesh in the content browser and added it to the world as an InterpActor, then I added a siren sound as an AmbientSoundMovable, overlaying it on top of the InterpActor.

Here they are together in the viewport:

To get the sound to move wherever the object moves, I hooked them up using the Attach to Actor node in Kismet and linked that to a Player Spawn event node, so that they link when the player spawns. I then created a matinee sequence to move the object between two points so that it would fly back and forth over the world. Finally, I placed a DynamicTriggerVolume over the player spawn point and attached it to the player, so that I could use it to get player location co-ordinates in Kismet later on (I don't think it's possible to get co-ordinates directly from the player in Kismet).

Here are the Kismet sequences for the player spawn actions and matinee trigger:

Gathering location and direction data
For simplicity, I'm only applying the pitch shift when the object is moving away from the player, as this is when it is most noticeable. Also, the shift amount is fixed, i.e. I'm not taking into consideration the relative movement of the player. This project only aims to give the player an impression of a Doppler effect at this stage. To do it properly, here's a list of the scenarios you'd need to consider:

• The object is moving away from the stationary player
• The player is moving away from the stationary object
• The object is approaching the stationary player
• The player is approaching the stationary object
• The object and the player are moving towards one another
• The object and the player are both moving away from each other
• The object and the player are moving in the same direction at different speeds

To know when to apply the pitch shift, I needed to know where the object was in relation to the player and in which direction it was travelling. With that information to hand I'd be able to tell when the object was moving toward the player; at the exact moment it started moving away from the player again, the pitch adjustment could be applied. To get the location co-ordinates, I used the Get Location and Delay nodes in Kismet to capture the player and object locations every 0.02 seconds. I used the Get Distance node to deduce the direction of travel by chaining two nodes together, separated by a delay of 0.01 seconds, and comparing the two figures (the object was attached to input A of the nodes and the DynamicTriggerVolume was attached to input B).

Now all I had to do was set up four Gates in Kismet for my reduced set of four possible scenarios, and hook the inputs up to the outcome of the calculations above:

• Object moving toward its point of origin and player = reset the pitch to normal
• Object moving away from its point of origin and player = lower the pitch
• Object moving away from player and toward its point of origin = lower the pitch
• Object moving toward player and away from its point of origin = reset the pitch to normal

You'll notice a flaw in that after the pitch is lowered, there's nothing bringing the pitch up again until the object stops, either at its point of origin or at the opposite end of the matinee track - again, something for a future project... (The attenuation on the sound cue is cunningly set so that the object is out of earshot before any noticeable pitching up would need to occur anyway.)

Here's the Kismet sequence for getting the locations and direction:

Applying the pitch shift
I used the Modify Property node in Kismet to change the pitch multiplier of the siren sound cue. Ideally, you'd be able to calculate the amount of pitch adjustment required in real-time and set the property value in the Modify Property node dynamically - this last bit isn't possible though. So instead, I pre-calculated what the pitch adjustment would be and applied it by chaining together five Modify Property nodes, separated by short, gradually lengthening, delays.

The Modify Property node can be fiddly at first - it took me a little while to get the right info in the right fields. Find the sound cue (or whatever you're modifying) in the content browser, right click it and Copy Full Name to Clipboard - paste that into the target field. Then look at the properties of the sound cue, find the property you want to modify (e.g. Pitch Multiplier), right-click and Copy Selected Property to Clipboard - paste that into a text editor and it should be fairly obvious which bit you need to enter in the property name field - in this case it's PitchMultiplier (note there's no space).
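(As a concrete example - the asset path here is made up - the fields might end up looking like this:

Target: SoundCue'MyPackage.Cues.Siren_Cue'
Property Name: PitchMultiplier
Value: 0.8

with 0.8 being whatever pitch multiplier you've pre-calculated for that step of the chain.)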

Here's the Kismet sequence for adjusting the pitch:

Improvements
As mentioned, this is a fairly limited Doppler, but it sounds pretty realistic in-game, and I learnt a lot about Kismet in the process! I'd like to get it working on all 3 axes for any speed, and add the player movements into the equation (not that that would make much difference to the pitch shift amount, unless maybe the player is buzzing around in a vehicle...).

I set up this demo so that I could look at how loud music could be represented inside and outside of an enclosed space, using reverb volumes and strategically placed sound actors, and to attempt a method for volume ducking. The objectives were:

• Standing outside the building with the door closed, the player should hear the bass frequencies of the music, plus about 10% of the rest of the frequencies (this isn't a scientific representation of how sound travels through concrete and metal, but it sounds about right in-game). The player should also hear rattling from the metal panels on the building exterior, suggesting very loud bass-driven music inside.
• When the door is opened, the music from inside in the building is affected by the reverb volume and processes described in the previous post - ie. we hear it at full volume. Also, the volume of the rattling sound and bass frequencies we heard with the door shut are reduced to 20%.
• When dialogue cues are triggered inside the building, the music volume is ducked.

What I did:
I had the option of using the LPF properties of the reverb volume to exclude all but the bass frequencies while the player is outside the building, however I decided that the LPF attenuation would kill too much of the overall level. My intention was to intrigue the player with loud booming bass, so instead I made a bass-pumped version of the music cue in Pro Tools, and placed it just outside the reverb volume using an AmbientSoundSimpleToggleable actor. This version of the cue plays in sync with the original version so that when the player opens the door, all music frequencies appear to be coming from the same source.

To make the rattling sound for the metal panels, I made some recordings using a metal shelving unit and pitched them down a little. I then made a new track in Pro Tools alongside the original music track and placed the rattling sounds just after the bass drum beats until I had a track of rattling the same length as the music cue. I placed 1 of these rattling cues next to each metal panel. They are also played in sync with the original music cue.

Finally I placed the original music cue inside the reverb volume in the centre of the building.

Here's where I placed the sound actors:

Here are the reverb volume settings:

And here's the Kismet sequence for turning everything on and off:

Although I'm aware of the volume ducking features built in to UDK, to get this part of the demo to work I decided to explore a different route using the Set Actor Location node in Kismet and some carefully tweaked attenuation. This was because it appears that reverb volumes only act on sounds placed in the Ambient class, which then excludes them from the established method of volume ducking.

Whenever a dialogue cue is played, the AmbientSoundSimpleToggleable actor for the music cue is relocated 4500 units away along the x-axis from wherever the player is located. The attenuation node for the music soundcue is set to min-radius 2000 and max-radius 5000, so that when the music sound actor is 4500 units away, the level drops sufficiently to allow the dialogue to cut through. When the dialogue cue finishes or is stopped, the music cue actor returns to its original location. This means the player can run around during dialogue and he/she won't notice a change in volume while it is ducked. In my example, the cues are set to play one after another at 2 second intervals, but the same procedure could be applied in a more dynamic scenario where dialogue cues are triggered by the player or an event.
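(In code terms, the ducking step amounts to something like the sketch below; the Kismet graph does the same job with Get Location, a vector offset and Set Actor Location. The names are illustrative.)

function DuckMusic(Actor MusicActor, vector PlayerLoc)
{
    local vector DuckLoc;

    DuckLoc = PlayerLoc;
    DuckLoc.X += 4500;                // 4500 units away from the player, near the 5000 max attenuation radius
    MusicActor.SetLocation(DuckLoc);  // music level drops, dialogue cuts through
}

Un-ducking is just a SetLocation back to the actor's original spot when the dialogue cue finishes or is stopped.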

Here's the Kismet sequence:

It doesn't seem possible to get vector co-ordinates for the player object, so I had to first attach a DynamicTriggerVolume to the player on spawn, and then get the co-ordinates from that.

There's a video for this demo titled 'Reverb Volumes', in the UDK demo reel here.

About the sound effects:
The music track is something I made specifically for this project; I recorded the rattling using a metal shelving unit; the dialogue examples are taken from a reading of Philip K. Dick's short story "Beyond the Door" - read by Gregg Margarite. Available here in the public domain.

Summary of the project:
I wanted to implement sound effects and ambience for a sequence where the player opens a door, moves into a room and then closes the door:

• While standing outside the room with the door closed, the player should only hear the outside ambience (wind in this scenario)
• When the player comes within a certain distance of the door, the door should open and the player should hear a door open sound (with reverb from the room) and some room tone.
• When the player moves into the room, they should hear reverb on their footsteps, room tone, and wind from outside.
• When they move a certain distance beyond the door into the room, the door should close and the player should hear a door close sound and only room tone.

What I did:
I built a basic enclosed space and added a sliding door open/close sequence to one end. To activate the door open/close sequence and play the sounds at the right time, I made a Trigger volume the same width as the door and set the length so that it would trigger as the player approached. For the reverb, I built a Reverb volume with dimensions to match the space, and laid it over the room. In order for the player to hear the reverb effect as soon as the door opens, the door trigger mechanism and Reverb volume needed to be touched simultaneously, so I extruded part of the Reverb volume until it enclosed the external portion of the Trigger volume (leaving a small margin due to the difference in the way these two volumes seem to trigger). Finally, in Kismet I hooked the Trigger volume up to the Matinee door sequence. The screengrabs below show the enclosed space and Reverb/Trigger volumes.

(For a while I struggled with Reverb volumes because I wanted to be able to create a volume and subtract from it to create shapes more complex than the standard brushes. Then I discovered Geometry mode and found I could extrude to my heart's content! Here's a good tutorial.)

Now that I had the basic mechanics working, I recorded and edited sounds for the door open and close animation; grabbed some wind and room tone ambience from a library CD; and imported everything into UDK. For the door sounds, I created AmbientSoundSimpleToggleable nodes and assigned my door sound recordings as SoundNodeWaves. I intended to control the volume of the ambience sounds using the ModifyProperty action in Kismet, so I made SoundCues for the wind and room tone. (Unfortunately, since SoundNodeWaves have no VolumeMultiplier property, it seems that you can't control the volume of a toggleable sound in this way - ModifyProperty doesn't seem to be fully implemented in the public release of UDK.)

This screengrab shows where I placed the sound nodes:

Next I added some Toggle actions to my Kismet sequence for the door sounds. For some reason, to get the sounds to play properly I had to enable Looping Sound in the wave's properties. Since I didn't actually want the sounds to loop, I made some Delay actions and set them to the length of the sounds, then placed them between the Touched output on the trigger and Turn Off output on the toggle, so that the sound would be stopped after playing once.

There are 2 sounds for the door close event, one is located inside the Reverb volume for triggering when the player is inside the room, the other is located just outside the Reverb volume so that it does not have reverb applied (I also changed the end of this sound to give it a quicker release time). The Kismet sequence below determines which of these door close sounds to play, by getting the X coordinate of the player and comparing it to the X coordinate of the door.

The last thing was to attenuate and increase the levels of the ambience sounds depending on whether or not the player was inside or outside the room and whether or not the door was open. Again, I used the player location calculation to trigger a ModifyProperty action to change the VolumeMultiplier property of the relevant soundcue. Then, in an attempt to simulate a fade congruent with the speed and trajectory of the sliding door, I daisy-chained several ModifyProperty and Delay actions together.

Here's the Kismet sequence in full (the player co-ordinates are being pulled in from a separate sequence off-screen).

It works pretty well - it's not terribly elegant in places, but maybe I'll refine it as I learn more.

About the sound effects:
I recorded and processed the door open and close effects using various heavy metal objects I found in the garage, including an old metal sun lounger, an axle stand and a grinder. I used Reaper running on my netbook, with an Mbox and a Rode NTG-2 mic to record them, and Pro Tools for the editing. The wind and room tone came from a sound library.

I've started recording and editing sound for a game built in UDK, so in order to get up to speed on the implementation side of things, I've made a basic level which I can use like a sandbox environment. When I began putting sounds into the level I found it hard to get documentation on some of the finer points of this subject, so the aim of this blog is firstly to make a record of what I'm doing, and secondly to share the findings with other UDK sound beginners!