
This article gives you a solid understanding of working with audio in Adobe Soundbooth CS4 and Adobe Flash CS4 Professional, along with tutorials and sample files to get you up and running. It also covers the basic techniques for working with audio in ActionScript 3.

Adobe Flash has a long history of successfully deploying audio on the web. Flash 4 introduced support for the MP3 format, which opened up the door for using larger files with better performance. Flash 5 introduced the Sound object in ActionScript and the ability to control sound dynamically in runtime-based applications. Flash MX introduced the FLV format and MP3 metadata support for expanded options in synchronization and data management. Flash CS3 added new levels of audio support with ActionScript 3, capable of displaying sound spectrums and performing enhanced error handling.

Flash CS4 now adds support for the ASND format, which allows you to take a snapshot of the original audio so you can revert edits all the way back to the starting point. In addition, Flash CS4 FLA files (and FLV files) created with Adobe Media Encoder CS4 can now contain XMP metadata, which describes the audio related to the file along with other file information.

Flash audio primer

When you're developing audio for SWF applications, you need to be aware of the capabilities and requirements of audio in Adobe Flash Player. This section covers everything you need to know to get started.

Overview

The general workflow for audio production in Flash CS4 is the same as with other non-vector media. In most cases the audio asset will be acquired and edited externally from the application. When the audio file is prepared, it is then imported into the FLA file or loaded at runtime using ActionScript. This is the first big decision to be made in the process: do you embed the audio or use audio that is external to the Flash movie?

Using embedded audio allows you to import a range of audio formats and has the benefit of visual authoring in the Adobe Flash interface (you don't need to use any coding to implement it). Embedded audio also has the advantage of visual synchronization with graphic content. The disadvantages are a larger SWF file size and less flexibility for changes and runtime manipulation.

Using external audio is generally the way to go for more complex projects. External audio has the advantage of remaining flexible for edits and dynamic, playlist-driven content. It also has the advantage of excluding the audio's file size from the SWF file. The primary disadvantage is that it requires some ActionScript knowledge to implement. Some tasks, like voice synchronization with a character animation on the Timeline, work best if the audio is embedded directly on the Timeline.
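To make the external approach concrete, here is a minimal sketch of loading and playing an external MP3 at runtime with the Sound object. The file path "audio/track1.mp3" is a placeholder for your own file:

```actionscript
// Minimal sketch: load an external MP3 and start playback.
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.net.URLRequest;

var snd:Sound = new Sound();
snd.load(new URLRequest("audio/track1.mp3")); // placeholder path
var channel:SoundChannel = snd.play();        // playback begins as data arrives
```

The SoundChannel instance returned by play() is your handle for stopping the sound or reading its position later.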

Workflows

The following list is divided into the four facets of production that should be considered: file preparation, working with embedded audio, working with external audio, and editing your audio.

General steps in the preparation workflow include the following:

Record or acquire an audio source in a high resolution format (AIFF or WAV format for example).

Save a copy of the raw source file as a backup.

Save the source file in the Adobe Sound Document (ASND) format to retain lossless access to your original file and edit history.

Decide whether to attach the sound to a timeline at authortime or dynamically work with the sound using the Sound object at runtime.

If attaching sound to a timeline, select a keyframe on the target timeline and attach the audio using the Sound properties in the Property inspector.

If using ActionScript, activate the Export for ActionScript option on the sound in the Library, and then add ActionScript to the movie to manipulate the sound dynamically at runtime.
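With Export for ActionScript enabled, a Library sound becomes available as a class you can instantiate in code. A minimal sketch, assuming a hypothetical linkage class name of "ClickSound" set in the sound's properties:

```actionscript
// Play an embedded Library sound; "ClickSound" is a hypothetical
// linkage class name assigned via Export for ActionScript.
import flash.media.Sound;
import flash.media.SoundChannel;

var snd:Sound = new ClickSound();
var channel:SoundChannel = snd.play();
```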

General steps in the external audio workflow include the following:

Create a new file in Flash or open an existing FLA file.

Add ActionScript to the movie that loads the external audio file—either manually assign the audio file path in the ActionScript code or use an XML playlist to dynamically assign file paths using an external file.

Manipulate and manage the audio file at runtime using ActionScript.
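The steps above can be sketched in a frame script. This example loads a hypothetical playlist file named "playlist.xml" (the track/path node structure is illustrative, not a fixed format) and plays the first track:

```actionscript
// Sketch: read an external XML playlist, then load and play the first track.
import flash.events.Event;
import flash.media.Sound;
import flash.net.URLLoader;
import flash.net.URLRequest;

var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, onPlaylistLoaded);
loader.load(new URLRequest("playlist.xml")); // placeholder file name

function onPlaylistLoaded(event:Event):void {
    var playlist:XML = new XML(loader.data);
    // E4X syntax; the <track path="..."/> structure is hypothetical.
    var firstPath:String = playlist.track[0].@path;
    var snd:Sound = new Sound();
    snd.load(new URLRequest(firstPath));
    snd.play();
}
```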

General steps in the editing workflow include the following:

Open the audio file in Soundbooth CS4.

If the file is embedded in a FLA file, select the file in the Library and choose the Edit in Soundbooth option from the context menu.

If the file is external to the FLA file, open it directly in Soundbooth.

Edit the file as desired using the range of editing tools in Soundbooth.

Save the source file.

If the audio file was embedded in a FLA file, it automatically updates in the movie when saved in Soundbooth.

If the audio file was external to the FLA file, then export the updated source file in the format that the ActionScript code expects (MP3, FLV).

Before I go further into production techniques, let's take a look at the types of audio formats that Flash Player supports.

Flash Player capabilities

It is important to understand that Flash Player is designed to play audio in a few specific formats. By itself, Flash Player cannot record audio streams. However, the player does have access to the Microphone object on the end viewer's computer and can be combined with the Flash Media Server to record and store sound files on a server or stream the sound to other SWF application instances. The Flash Media Server greatly expands the possibilities of what can be done with Flash audio, but Flash Player by itself works well for streaming sounds for playback.

Bit rates and sample rates supported in Flash

Flash Player supports 8-bit or 16-bit audio at sample rates of 5.512, 11.025, 22.05, or 44.1 kHz. The audio can be converted to a lower sample rate during publishing, but it is recommended that you resample and edit the audio files in an audio editing application outside of Flash CS4. If you want to add effects to sounds in Flash, it's best to use 16-bit sounds. If you have limited RAM on your system, keep your sounds short and use 8-bit audio.

Tip: Your sound needs to be recorded in a format sampled at 44.1 kHz, or an even division of 44.1 kHz, to ensure proper playback in the SWF file. Working in an audio production tool built to produce "Flash-friendly" audio can save a lot of headaches dealing with subtle details. Soundbooth CS4 is a solution designed specifically for Adobe Flash audio production and integration with other Adobe tools.

Audio formats supported when embedding audio

Flash supports a range of audio formats for importing and embedding sounds in a SWF file. You'll need to have QuickTime 4 or higher installed on your computer to take full advantage of the supported formats during authoring (see Table 1).

Table 1. Supported audio formats when importing audio into Flash

Format | Type | Platform | QuickTime 4
AIFF | Audio Interchange File Format | Win/Mac | N/A
ASND | Adobe Soundbooth | Win/Mac | N/A
AU | Sun File Format | Win/Mac | Required
MOV | Sound Only QuickTime Movies | Win/Mac | Required
MP3 | MPEG Layer 3 | Win/Mac | N/A
SD2 | Sound Designer 2 | Mac | Required
WAV | Waveform Audio Format | Win/Mac | N/A

Note: The ASND format is a nondestructive audio file format native to Adobe Soundbooth. ASND files can contain audio data with effects, sound scores, and multitrack compositions that can be modified later.

When you embed the audio, Flash CS4 bundles the sound with the SWF file. In most cases the embedded sound will be compressed along with the rest of the assets in the file during publishing. So in the case of embedded audio, you also have to think about the exported audio format (see Table 2).

Notice that while QuickTime is required for importing the full range of supported audio formats, it is not needed to export them or play the published movie. Flash Player handles the playback of the four export formats. The default and most commonly used audio format is MP3. Also notice that all formats are supported on Mac and Windows regardless of whether the file required a Mac during the authoring process.

The export audio format can be set globally in the Publish Settings dialog box or set per sound file. To adjust audio settings globally, edit the event and streaming fields in the Publish Settings (File > Publish Settings). To adjust audio settings per sound, right-click the sound in the Library to launch the Sound Settings dialog box (see Figure 1).

Working with event audio and streaming audio

When you work with embedded audio that is attached to a timeline, you have to decide whether to handle each sound as "event" audio or "streaming" audio in Flash Player. This setting specifies how the audio relates to its timeline. Streaming audio signals Flash Player to synchronize the audio to the timeline to which it is attached, and to start playing the sound as it downloads. When you stream audio, you attach the sound to a timeline and the audio playback is directly synched to the length of that timeline. This approach is commonly used for synchronization with animated content and for streaming playback of larger content files.

Event audio signals Flash Player to handle the sound's playback without regard to its timeline. Even if its timeline contains a limited number of frames, as in a button, the sound can play from start to finish. Event audio has to download completely before it can play, and therefore is most commonly used for short sounds, such as button clicks.

Audio formats supported when playing external audio

Flash Player supports playback of external audio in MP3 and FLV format. The MP3 format has been a mainstay with Flash developers since the Flash 4 era, whereas the FLV format became an option in Flash MX (6), when the Flash Media Server implemented the format for Flash video and audio streaming.

Tip: Flash Player 9,0,115 and later supports the HE-AAC audio codec along with H.264 video.

The MP3 format is commonly used because it is familiar to developers from other areas of web production. MP3 formatted files are relatively easy to produce and easy to share with other web-based applications. While this may be the case, there are some advantages to working with the FLV format. FLV formatted files can hold metadata, such as cue points for synchronization. You can also manipulate FLV audio files using the FLVPlayback component, which allows you to load the sound and create a playback interface with little to no ActionScript knowledge. As audio editing tools such as Soundbooth now export source audio to FLV format, FLV has become a viable option for developers working outside of the Flash Media Server environment.
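As a sketch of how little code the FLVPlayback route requires, the following frame script plays an audio-only FLV. It assumes the FLVPlayback component has been dragged into the Library; the file name and skin name are placeholders:

```actionscript
// Sketch: play an audio-only FLV with the FLVPlayback component.
// Requires the FLVPlayback component in the FLA's Library.
import fl.video.FLVPlayback;

var player:FLVPlayback = new FLVPlayback();
player.source = "voiceover.flv";                   // placeholder file name
player.skin = "SkinUnderPlayStopSeekMuteVol.swf";  // illustrative skin name
addChild(player);
```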

Table 3. Supported audio formats when playing external audio in Flash

Format | Type | Platform | QuickTime 4
FLV | Flash Video | Win/Mac | N/A
MP3 | MPEG Layer 3 | Win/Mac | N/A

Working with audio metadata

Flash Player 6 and later supports metadata for both the MP3 and FLV formats. MP3 files support the ID3 v1.0 and v1.1 standard. FLV files support FLV video metadata parameters, including cue points for content synchronization and custom parameter entries. In both cases the metadata can be retrieved at runtime using event handler functions in ActionScript.
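For example, ID3 metadata in an MP3 can be read once Flash Player has parsed it, via the Sound object's Event.ID3 event. A minimal sketch (the file path is a placeholder):

```actionscript
// Sketch: read ID3 metadata from an MP3 at runtime.
import flash.events.Event;
import flash.media.Sound;
import flash.net.URLRequest;

var snd:Sound = new Sound(new URLRequest("audio/track1.mp3"));
snd.addEventListener(Event.ID3, onID3);

function onID3(event:Event):void {
    trace("Artist: " + snd.id3.artist);
    trace("Title: " + snd.id3.songName);
}
```

FLV metadata and cue points are delivered analogously through the onMetaData and onCuePoint callbacks of a NetStream client object, or through the FLVPlayback component's metadata events.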

Tip: Flash CS4 FLA files and FLV files created with Adobe Media Encoder CS4 can also contain XMP metadata, which describes the audio related to the file along with other file information. XMP metadata conforms to W3C standards and can be used by web-based search engines to return meaningful search results about the SWF file and its internal content.

A few words about audio in Adobe Flash Lite

Flash Lite is a runtime engine used to display SWF files on consumer electronics and mobile devices. This article focuses on implementing audio for playback in Flash Player; however it's interesting to note that Flash Lite supports the playback of device sounds such as MIDI and SMAF, among others. See Import and play a device sound in the Developing Flash Lite 2.x and 3.0 Applications online documentation for more information on working with Flash Lite and deploying audio in mobile devices.

Preparing audio in Soundbooth

Soundbooth CS4 is an audio editing tool that integrates with Flash CS4 and other software in Adobe Creative Suite 4. Soundbooth provides a range of editing and composition tools in a simple workflow that anyone can use. New to this version are the editable ASND format and the ability to create multitrack compositions, along with a handful of other upgrades.

This section focuses on the basic tasks you'll commonly work through when preparing audio for Flash in Soundbooth. For full documentation, see the Soundbooth CS4 online documentation.

Recording audio in Soundbooth

One of the most accessible and least expensive ways to obtain audio files is to record the sound yourself. The Soundbooth environment takes this into account and provides the means to record and clean up the artifacts commonly incurred when recording audio outside of a professional studio.

To prepare Soundbooth for audio recording, follow these steps:

Make sure you're working on a computer with a sound card and a microphone. Note that ASIO compatibility is preferred in Soundbooth but DirectX support will work and is most likely available on your sound card. (See the Readme file installed with Soundbooth for more information on system requirements.)

Open the audio hardware settings in Soundbooth's Preferences dialog box. Notice that the dialog box lists all the input and output devices available on your computer. In the Enable Devices section, choose your microphone. Click the Output tab and choose your output device if needed. Click OK to close the dialog box.

Adjust the Preferences settings as desired. Click OK to close the Preferences dialog box.

To record audio, follow these steps:

In Soundbooth, choose File > Record to launch the Record dialog box (see Figure 3).

Figure 3. Soundbooth Record dialog box prior to recording audio

Choose your device and port. In my case, I'm using the default device as I have already configured it in the Preferences dialog box. AK5370 is the name of the port my microphone is on. You should see the levels indicator on the right light up when the port is correctly selected and the microphone is on. Click the Settings button to set your sound properties, if needed.

Choose a sample rate. It is a best practice to record at the default sample rate to create a high-quality WAV or AIFF master file. After recording, you can export the high-quality file to the MP3 format or FLV format, keeping a copy of the master source file for edits and changes.

Choose the number of channels for the recording: mono or stereo. In general you should use the mono setting to ensure a smaller file size. You can play back the mono signal on one or both channels at runtime.

Add a filename and select a folder location for the recording. Soundbooth will automatically save your audio clips in WAV format as you record them. They appear in the Files panel in the Soundbooth interface. Click the Browse button to select a specific destination folder.

Test your setup by speaking into your microphone and watching the level meter light up. If you have speakers turned on in the room, uncheck the Monitor Input During Recording option; that way you won't hear an echo from the playback of the live recording. If you're using headphones to listen to the audio as it is recorded, then go ahead and check this option. If your recording levels are too high or too low, adjust the recording level on your sound card.

To follow along with the simple example in the sample files provided at the beginning of this article, try recording the numbers 1, 2, 3 as you count them out loud with a second or two pause between each number. You'll experiment with this file in different ways as we walk through the rest of the tutorial.

For more best practices on recording techniques, see the following resources:

Working with the scores feature

One of the unique features of Soundbooth is the ability to generate soundtracks by combining audio and video clips with prebuilt audio compositions called scores. Basically, you can create a soundtrack that adds your content on top of professional compositions. You can also use the score without additional content as a quick way to add background audio to your movie. Each score template provides a range of editable parameters that allow you to customize and export the score as a new audio file. You may also save customized score templates for reuse later.

For the purposes of this tutorial, I'll focus on the simple audio clip that you recorded in the previous section. To find more information on working with scores, see Customizing scores in the Using Soundbooth CS4 online documentation.

Tip: Soundbooth ships with two default scores available for immediate use. You can acquire more templates by launching the Resource Central panel from the Window menu.

Working with prerecorded audio and video

In addition to recording your own audio and creating your own scores, you can open prerecorded audio and video sources to work with as well. The process is fairly easy. You open a file and edit it using the range of features in Soundbooth. Or you open a handful of files and copy and paste portions of them to collage together a multitrack sound file. You may also open files to use as reference clips while working with a score template.

To open an audio or video file, follow these steps:

Choose File > Open in the Soundbooth menu. Browse for a file to open.

Notice that the file appears in the Files panel and can be managed in the workspace at that location.

Saving the source file in ASND format

New to Soundbooth CS4 is the ability to save an editable source file in ASND format. The ASND format allows you to take a snapshot of the original audio so you can revert edits all the way back to the starting point. You can save ASND files for future editing and embed them directly in Flash CS4 FLA files.

Before you make edits to your source audio, you should save the file in the ASND format:

Choose File > Save As.

In the Save As dialog box, choose the ASND format from the Type menu.

Click OK.

Clean up the audio source as needed

Soundbooth has some great clean up features built-in to help you create the best quality audio from an office environment. This is a fairly common situation for a web developer; you may be using professional equipment, but you're working in an office environment instead of an actual sound booth. In my studio I often deal with the hum of several large computers or an occasional dog bark that needs to be edited out of an otherwise great track.

To remove noise or clicks and pops from an audio file, follow these steps:

Select the portion of audio you want to clean up by dragging a selection along its waveform.

Choose the type of artifact that you wish to clean up (noise, clicks & pops, or rumble) by clicking on the appropriate button.

In the dialog box that appears, adjust settings as desired. In most cases it's easiest to experiment with the settings and compare the new sound to the original by clicking the Preview button in the dialog box. Click OK to apply the command.

To remove a sound from an audio file, follow these steps:

Select a portion of the sound to remove using the Time Selection, Marquee, or Lasso tool. Zoom in to see the area if needed.

Choose Edit > Auto Heal to remove the sound and blend the resulting areas for a smooth result.

Another editing best practice is to cut the high frequencies out of the signal before converting the file to a compressed format. Doing so will produce a compressed sound with less of the artifacts that are commonly heard in web audio.

To cut the high frequencies or hiss out of the audio before export, follow these steps:

Select the length of the audio by dragging a selection across the waveform.

In the Effects panel, choose the EQ: Reduce Harshness or Remove Hiss options from the Mono Rack Preset menu (see Figure 5). Notice that an EQ effect is automatically added to the effects list.

Figure 5. Preset list of enhancement and editing effects available in the Effects panel

Double-click the EQ: Parametric row to edit the specific parameters if desired.

Click the Apply to Selection button at the bottom of the Effects panel to apply the effect.

Edit the audio source as needed

I find that I run through a standard set of editing steps before moving on toward exporting the file to a compressed format. Usually that includes trimming the audio to the correct length, adding fades and effects, and normalizing the audio if the levels are too low.

To trim the audio file to a specific length, follow these steps:

You may either trim the current file or copy/paste a portion of it into a new file.

To trim the current file, grab the trim handle that appears on the left and right sides of the waveform view (see Figure 6).

Figure 6. Trimming a clip by moving the trim handles on the left or right side of the waveform

To apply an effect, such as the Voice Enhancer, follow these steps:

Select the length of the sound by dragging a selection through its waveform.

Choose Effects > Voice Enhancer from the menu.

In the Effects panel, choose the effect preset setting that you wish to apply. In my case I chose the Male setting to compensate for imperfections in that range of audio frequencies.

Click the Apply to Selection button at the bottom of the Effects panel to apply the effect.

To add a fade in or out of the sound, follow these steps:

To set the amount of fade in, click the Fade In button located just below the timeline above the left side of the waveform (see Figure 7).

Figure 7. Fade In button as it appears with a fade applied to the sound

Click and drag the button to extend the fade area. Move the cursor up and down while you drag to bend the shape of the fade curve.

To set the amount of fade out, click the Fade Out button located above the right side of the waveform.

Click and drag the fade curve into the desired location and curve shape.

To normalize the recording and boost the volume level, follow these steps:

Select the length of the sound by dragging a selection through its waveform.

Choose Processes > Make Louder. Notice that the waveform increases in height.

If you need more specific control over how much the audio is normalized, then you can skip the Make Louder command and simply click and drag the selected waveform. Dragging upward will increase the signal and dragging down will decrease it.

Saving timing cues

One of the benefits of this new generation of audio software is the ability to easily generate and export timing markers to use for content synchronization.

To create time markers in your audio file, follow these steps:

Open the sound file you wish to update with time markers.

Select the time to add the first marker by using the waveform and the timeline to visually identify the desired cue point. Move the current time indicator line to the desired time and choose Edit > Markers > Set Flash Cue Point. Notice that a marker icon appears along the timeline.

Add markers as desired. In my example, I'm adding a marker before each number in the count of one, two, three (see Figure 8).

To edit a marker, click the marker icon and edit the properties in the Markers panel. You can edit the name of the marker and adjust the time it occurs. You can also set the marker to correspond to an event or navigation cue point in an FLV file. Use the navigation option if you want to be able to seek exactly to that location in the file using ActionScript. The name and time are standard elements of FLV cue points, but you may also add as many name/value pairs (variables) as you like to each cue point.

Position your markers and edit the related information until you're happy with the layout. From here, you will export an XML file containing the information. The XML file can be combined with MP3 files or FLV files to produce synchronization effects using ActionScript.

To export the markers as an XML file, follow these steps:

Continue with the file from the last section or open a new file containing markers.

Choose File > Export > Markers to export the markers in XML format.

Tip: It's interesting to note that you can use the File > Import command to import a marker XML file. This feature allows you to create marker definitions for reuse across other media files or allows you to write markers in XML for output in FLV format. You can also import and export marker XML files in Adobe Media Encoder.
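Once exported, the marker XML can drive synchronization against an MP3 by polling the SoundChannel position. The sketch below assumes hypothetical file names and an illustrative CuePoint/Name/Time node structure; inspect your own exported file for the exact element names:

```actionscript
// Sketch: trigger actions at exported marker times during MP3 playback.
import flash.events.Event;
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.net.URLLoader;
import flash.net.URLRequest;

var markers:Array = [];   // {name, time} pairs parsed from the marker XML
var nextMarker:int = 0;
var snd:Sound = new Sound(new URLRequest("count.mp3")); // placeholder
var channel:SoundChannel;

var xmlLoader:URLLoader = new URLLoader();
xmlLoader.addEventListener(Event.COMPLETE, onMarkersLoaded);
xmlLoader.load(new URLRequest("count_markers.xml"));    // placeholder

function onMarkersLoaded(event:Event):void {
    var xml:XML = new XML(xmlLoader.data);
    // Node names below are illustrative; check your exported file.
    for each (var cue:XML in xml..CuePoint) {
        markers.push({name: String(cue.Name), time: Number(cue.Time)});
    }
    channel = snd.play();
    addEventListener(Event.ENTER_FRAME, checkMarkers);
}

function checkMarkers(event:Event):void {
    if (nextMarker < markers.length &&
        channel.position >= markers[nextMarker].time) {
        trace("Cue point reached: " + markers[nextMarker].name);
        nextMarker++;
    }
}
```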

Exporting audio to MP3 or FLV format

Last stop on the audio production side of things is to export the compressed audio from the source audio. Usually you'll be exporting to MP3 or FLV format, although you can work with ASND, WAV, or AIFF if you plan on embedding the audio in your FLA file.

To export an MP3 audio file, follow these steps:

Choose File > Save As.

In the Save As dialog box, choose the MP3 format from the Type menu. Click OK.

Whether you export your source files in MP3 or FLV format, you should end up with two sets of audio files: your master files (used for editing) and your exported files (served from the website).

Note: If you encode the FLV using Adobe Media Encoder outside of Soundbooth, you can embed the cue points directly in the FLV format. F4V video does not support cue points in the same way as the FLV format.

Managing audio in a Flash project

One topic that's often overlooked in web development is the topic of file management. To maintain a fast and easy workflow while handling multiple projects, your best bet is to use a process for organizing your FLA files, asset files, and website files. In addition, the structure of your FLA files, ActionScript coding, and the interface to the web environment should be simple and organized.

This section focuses on file management approaches in handling assets in the file system, while editing in Soundbooth, and implementing functionality in Flash CS4.

Managing files in Soundbooth

The primary means of file management while working in Soundbooth are the File, Scores, and Resource Central panels. You'll work with these panels while performing routine tasks and selecting files.

Adobe Bridge CS4 is used for audio file management across Soundbooth and all CS4 applications. Bridge is a simple, stand-alone application installed with the CS4 products. Included in its feature set is the ability to browse files, add metadata to files, group related assets, and perform automated tasks and batch commands.

Click on the Folders tab to start browsing your file system for audio files and related assets. You can use Bridge for simple processes like browsing and reviewing audio files or you can assign metadata and keywords for future searches for related content.

Managing files in Flash CS4

Throughout this article the topic of embedded audio vs. external audio has come up. In regard to file management, you have to think about this question before you start setting up your file structure and approach. If you're embedding the audio, then your file management will take place in Flash CS4, where you'll use the FLA file's document Library to create an organized collection of movie clip symbols, sound assets, and library folders. If you're using external audio, then you'll manage the collection of files using XML playlists, an organized folder structure in an application folder, and the new Project panel in Flash CS4.

Embedded audio organization

Consider the organization approach best used if you're planning on embedding your audio. In this case, the Library panel and movie clip symbols become your environment for organization. Embedded audio has to be imported through the File menu in Flash CS4 (File > Import). As you import sounds, they appear in the Library panel. It's important to organize your Library using folders. Otherwise, the view can quickly become a long scrolling list of assets that is unwieldy and impractical to navigate. It is a best practice to use a pattern and naming conventions for your Library folders that repeat across projects. This makes it easier for other developers (and even yourself at a later date) to understand where assets are located and where to edit your project (see Figure 12).

Figure 12. Well-organized Library with descriptively named subfolders

In addition to creating an easy-to-navigate series of folders for the Library assets, you have to think about how you will handle the sounds next. If you use the Sound object in ActionScript, then you can create a single reusable component or script to handle the audio in the Library. If you attach the sounds to a timeline using the Property inspector, then you'll either be separating sounds into button symbols, separating sounds into movie clip symbols, or staggering sounds along a single movie clip's timeline.
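One possible shape for such a reusable script is a helper that plays any exported Library sound by its linkage class name. This is a sketch; "ClickSound" is a hypothetical linkage name:

```actionscript
// Sketch: play any Library sound by its linkage class name.
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.utils.getDefinitionByName;

function playLibrarySound(className:String):SoundChannel {
    var SoundClass:Class = getDefinitionByName(className) as Class;
    var snd:Sound = new SoundClass() as Sound;
    return snd.play();
}

playLibrarySound("ClickSound"); // hypothetical linkage class name
```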

In the first sample file, three audio files are attached to timelines as event sounds in three separate button symbols. The collection of sound buttons is stored in a "Sound Player" movie clip that encapsulates the controls into a single object on the main Timeline. This type of organization within the FLA file works well for portability and general ease of navigation.

Note: Even though a sound in the Library is not a symbol type, the same concepts apply for reuse. That means you can create as many instances of the sound as you like without adding more file size to the FLA file.

External audio organization

For dynamic applications using sound, the audio files will usually be located external to the SWF file. These types of applications require a different organizational approach. In general, the Flash file should read in a playlist to remain flexible and reusable. Now that you're dealing with separate external files, you should create an organized project folder containing all your assets separated in corresponding folders.

The Project panel is similar to the Dreamweaver Site window, in that it allows you to view the file structure of the website while working on a document. Flash CS4 includes an update to the Project panel. You can add FLA files, ActionScript files, and folders to a project. I use the Project panel as a shortcut to quickly open FLA files, ActionScript files, and text files directly from Flash CS4. I also find it useful to create a list of all the files and folders in my "project" folder for visual reference while thinking about or discussing the site (see Figure 13).

Figure 13. Using the Project panel to recreate the organization of your project's files and keep track of all external files and assets

A few words about XML and file management

XML is a commonly used format for configuring playlists in SWF applications. For example, let's say that you build an MP3 jukebox application that loads a series of MP3 files and plays them in a simple interface. You probably wouldn't want to rebuild the application every time you wanted to switch the list of MP3 files. And what if you wanted an English version and a Spanish version?

Instead of embedding the audio files, you'll use external MP3 files and set up the jukebox application to read an external playlist. This external playlist is simply an XML formatted text file, which can be easily changed without changing the jukebox application. Furthermore, you could supply the path to the playlist file in the HTML parameters that embed the SWF file on the web page; this approach allows you to switch from English to Spanish file references dynamically or by using a separate HTML page for each language version.

The XML format is simple to learn in the context of Flash development. An XML document is made up of open and close tags (<myTag></myTag>) similar to HTML tags. The names of the tags, called XML nodes, are user-defined and are intended to describe the data in-between the open and close tags. The node names and the nesting of nodes create a hierarchy of information that can be parsed in any programming language.

A simple playlist document contains references to the audio file paths as well as track names and any other related information, as shown in the XML example below:
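The exact listing isn't essential; a minimal playlist along these lines would work, where the node names (playlist, track, label, data) and file paths are illustrative rather than required:

```xml
<playlist>
    <track>
        <label>Track One</label>
        <data>audio/mp3/track1.mp3</data>
    </track>
    <track>
        <label>Track Two</label>
        <data>audio/mp3/track2.mp3</data>
    </track>
</playlist>
```

Because each data node carries the full folder path, swapping audio files or folders means editing only this text file, not the SWF.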

Tip: It's a best practice to include the folder path in the XML data in addition to the filename. That way you can easily change files without necessarily having to change the filenames; for example, you could change the folder name to route between multiple languages.

An XML file can also be used to store timing markers for synchronization of content while the audio is playing. One of the useful things about working with XML is that you can define its granularity as needed. For example, you could create a playlist that stores audio file paths and timing markers for each file—all in one playlist. Alternatively, you could create a simple playlist with file references and then create individual XML files containing timing markers per audio file. In this case you would load the synchronization details as needed. When deciding which route to take, the decision usually is based upon the amount of data being described and when the data is needed. If you can get everything in one file, then you can load that file as the application launches and instantly have access to all the information as the movie plays. If the amount of information is so great that you cannot load it all up front, then you can split the data into separate files and load each file as needed.

The general concept of timing markers (more commonly called cue points in ActionScript) is to provide time-based notifications to the SWF application's interface that something is happening. This strategy can be used for synchronizing animation, signaling to the interface that it's time to do something, or for displaying text captions that visually support the audio. Cue points generally have name and time properties associated with them. However, you can combine caption text or anything else that you need when creating the XML file. Remember, the goal is to create an external playlist that is easy to change so that you can update the movie without editing it. Identify all the information you need to update in your project and add the corresponding data in each XML node.

In the example shown below, the playlist has been updated to add cue points and captions to each audio file:
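A sketch of such an expanded playlist, with hypothetical cuePoint nodes carrying name, time (in seconds), and caption attributes for each track:

```xml
<playlist>
    <track>
        <label>Track One</label>
        <data>audio/mp3/track1.mp3</data>
        <cuePoint name="one" time="2.5" caption="First caption text" />
        <cuePoint name="two" time="5.0" caption="Second caption text" />
    </track>
    <track>
        <label>Track Two</label>
        <data>audio/mp3/track2.mp3</data>
        <cuePoint name="one" time="1.0" caption="Opening caption" />
    </track>
</playlist>
```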

Implementing audio in a Flash user interface

Now that you understand the supported audio formats in Flash Player, have prepared files in an audio editing application, and have organized your project to achieve best results, you're ready to jump in and start implementing the audio. There are a number of approaches you can take that range from easy to intermediate in difficulty. This section lists a range of options available to you and supplies examples for each approach.

Attach embedded audio to the Timeline for animations and button sounds

The easiest strategy is also one of the oldest approaches for synchronizing sounds to graphic content in your FLA file. It involves attaching embedded audio to a keyframe on the Timeline using the Property inspector. The process is fairly easy and can be accomplished quickly without using code.

To embed an audio file and attach it to a button timeline, follow these steps:

Create a new ActionScript 3 FLA file in Flash CS4.

Choose File > Import > Import to Library to import your sound and place it in the Library.

Open the Library panel (Window > Library) and click the New Symbol button in the lower left of the panel.

In the Create New Symbol dialog box that appears, enter a name for the new button and choose the Button type. Click OK.

You're immediately presented with the editing area (timeline) of the button. Rename the default layer to skins and add graphics and text to the keyframe on frame 1 (the button's Up state).

Create a new layer and name it sound. Extend the timeline to frame 3 (the button's Down state) by selecting frame 3 across both layers and pressing F5 on the keyboard.

Select frame 3 of the sound layer and insert a blank keyframe by pressing F7.

With the new keyframe selected, use the Sound drop-down menu in the Property inspector to select your sound. A waveform appears attached to the target keyframe. Leave the Sync setting on Event, as you do not want this button sound to be synchronized to the timeline. If you wish to edit the channel information related to the sound instance, you can do so by clicking the Edit button in the Property inspector.

Return to the main Timeline and drag an instance of the button to the Stage.

Test the movie (Control > Test Movie) to test the button. You should hear the sound play when you click the button.

If you would rather have the sound play when the mouse rolls over the button, double-click the button instance to enter its timeline and move the audio keyframe from the Down to the Over state (frame 2).

If you are following along using the sample files provided on the first page of this article, open sample_1.fla and double-click the Button – One symbol in the Library to view its timeline (see Figure 14). You can experiment with this working example.

Figure 14. Timeline and Stage for the Button – One symbol as it appears in the sample_1.fla file

To embed an audio file and set it up for synchronized streaming with the timeline, follow these steps:

Choose File > Import > Import to Library to import your sound and place it in the Library.

Open the Library panel (Window > Library) and click the New Symbol button in the lower left of the panel.

In the Create New Symbol dialog box that appears, enter a name for the movie clip and choose the MovieClip type. Click OK.

You're immediately presented with the editing area (timeline) of the movie clip. Rename the default layer to animation and build a tween animation or a frame-by-frame animation along the layer.

Add a new layer and name it sound.

Click the keyframe on frame 1 of the sound layer and choose the sound in the Sound drop-down menu in the Property inspector. In this case, you'll set the sound's Sync setting to Stream so that the sound is synchronized directly to the movie clip's timeline. You can move the keyframe containing the attached sound to any frame in the timeline as long as the timeline is long enough to play the entire sound.

After the sound is attached to the timeline, you can extend the timeline to any number of frames needed by clicking further along the timeline and inserting frames (F5). As you extend the timeline, you can see the waveform of the attached sound. In most cases you'll keep extending the timeline until you see the end of the sound file's waveform.

Place a stop action at the end of the timeline to keep it from looping and repeating the sound.

Return to the main Timeline and drag an instance of the movie clip to the Stage.

Test the movie (Control > Test Movie) to test the movie clip. You should hear the sound play as the animation occurs. The streaming sound will play as the file downloads—unlike event audio which has to download completely first.

Use the Sound object to play embedded and external audio

The introduction of ActionScript 1.0 in Flash 5 brought with it the Sound object. The Sound object is an easy-to-use code feature that allows you to load a sound from the Library dynamically or load an MP3 file from an external location.

To load an embedded sound from the Library using a button symbol and ActionScript 3, follow these steps:

Create a new ActionScript 3 FLA file in Flash. Save it into the main folder of the sample files.

Choose File > Import > Import to Library to import your sound and place it in the Library.

In the Library panel, right-click the imported sound and choose Properties to open the Sound Properties dialog box. Select the Export for ActionScript option and enter sound1 in the Class field for the sake of this example. This step makes the sound dynamically available to ActionScript at runtime.

Create a button symbol the same as you did before; only this time, don't attach the sound to the timeline inside the button. Just include text and graphics.

Drag an instance of the button to the main Timeline and name the instance play_btn in the Property inspector.

Rename the default layer to assets.

Create a new layer and name it actions. The actions layer will hold the code that responds to the button click and loads the embedded sound.

Click the keyframe on frame 1 of the actions layer and open the Actions panel. Write the following code in the text area of the Actions panel (you can copy and paste it if you prefer). In ActionScript 3, you use the "new" keyword along with the linkage Class name to create a new instance of the sound:
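A minimal sketch of that code, assuming the linkage class sound1 and the button instance name play_btn from the earlier steps:

```actionscript
import flash.events.MouseEvent;

function playSound(event:MouseEvent):void
{
    // Create a new instance of the embedded sound via its linkage class
    var snd:sound1 = new sound1();
    snd.play();
}

// Play the embedded sound each time the button is clicked
play_btn.addEventListener(MouseEvent.CLICK, playSound);
```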

Test the movie (Control > Test Movie) to see the results. You should hear the sound play when the button is clicked.

To see a working example of this section, open the sample_2.fla file from the sample files folder. You can examine the project in detail by investigating the code and assets on each layer (see Figure 15).

Figure 15. Timeline, Library, and Stage as they appear in the sample_2.fla file

To load an MP3 file from an external location, follow these steps:

Create a new ActionScript 3 FLA file. Save it into the main folder of the supplied files.

Create a button symbol the same way you did in the previous section. Include text and graphics, but no embedded audio.

Drag an instance of the button to the main Timeline and name the instance play_btn in the Property inspector.

Rename the default layer to assets.

Create a new layer and name it actions. The actions layer will hold the code that responds to the button click and loads the external sound.

Click the keyframe on frame 1 of the actions layer and open the Actions panel. Write (or copy and paste) the following code in the Script window of the Actions panel. Note that this code expects an MP3 file at a path relative to the SWF file. You may use absolute or relative paths, but be aware that using absolute paths for cross-domain files will cause a security sandbox error.
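The code might look like the following sketch; the file path audio/mp3/sample.mp3 is a placeholder, and play_btn is the button instance named earlier:

```actionscript
import flash.events.MouseEvent;
import flash.media.Sound;
import flash.net.URLRequest;

function playSound(event:MouseEvent):void
{
    // Load and play an MP3 from a path relative to the SWF file
    var snd:Sound = new Sound();
    snd.load(new URLRequest("audio/mp3/sample.mp3"));
    snd.play();
}

play_btn.addEventListener(MouseEvent.CLICK, playSound);
```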

Test the movie (Control > Test Movie) to see the results. You should hear the sound play when the button is clicked.

To see a working example of this section, open the sample_3.fla file from the sample files folder. Examine the project by reviewing the code, which is updated to include three buttons and to play a different external sound as each button is clicked (see Figure 16).

Figure 16. Timeline, Library, and Stage as they appear in the sample_3.fla file

Use FLV audio to synchronize to the interface using cue points

There are two benefits to using FLV audio. First, FLV files can contain embedded cue point metadata that can easily be used for content synchronization without using XML data. Second, the FLV file can be loaded and manipulated using the FLVPlayback component with little to no coding.

Note: The FLV export feature in Soundbooth does not appear to embed cue points or totalTime metadata. You can use the cue point marker file saved from Soundbooth for synchronization, as seen in the next example. For best results, encode your WAV source file to FLV using the stand-alone Adobe Media Encoder utility.

To load an FLV audio file using the FLVPlayback component, follow these steps:

Create a new ActionScript 3 FLA file. Save it into the main folder of the supplied files.

Open the Components panel and drag an FLVPlayback component to the Stage on the main Timeline. Name the instance flvAudio in the Property inspector. With the component selected, edit the component parameters in the Component inspector (Window > Component Inspector).

In the Component inspector, click the source field, and then click the magnifying glass icon to launch the Content Path dialog box. Click the folder icon to the right of the field to browse to the supplied FLV file in the audio/flv/ folder of the sample files. You can set the skin parameter to choose one of the default skins, or set it to none, either to forego controls entirely or to build your own controls using the FLVPlayback custom user interface components. For more information on skinning the FLVPlayback components, read Skinning the ActionScript 3 FLVPlayback component.

Test the movie (Control > Test Movie) to see the results. You should hear the sound play automatically if you left the autoPlay parameter set to the default value of true.

To synchronize content to an FLV audio file using embedded cue point metadata, follow these steps:

Continuing to work in the same file, rename the default layer to assets.

Add a new layer and name it content.

Add a third layer at the top of the stack named actions.

Click the keyframe on frame 1 of the actions layer and add the following code in the Actions panel:
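A sketch of such code follows. It assumes the FLVPlayback instance flvAudio from the earlier steps; the marker file path settings/markers.xml and the XML node names (marker, @name, @time) are assumptions, so adjust them to match your own marker file:

```actionscript
import fl.video.FLVPlayback;
import fl.video.MetadataEvent;
import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLRequest;

function handleCuePoint(event:MetadataEvent):void
{
    // Trace each cue point's name and time as it fires
    trace(event.info.name, event.info.time);
}

function handleMarkersLoaded(event:Event):void
{
    var markers:XML = new XML(event.target.data);
    // Convert each XML marker into an ActionScript cue point on the component
    for each (var marker:XML in markers.marker)
    {
        flvAudio.addASCuePoint(Number(marker.@time), String(marker.@name));
    }
}

// Listen for cue points dispatched during playback
flvAudio.addEventListener(MetadataEvent.CUE_POINT, handleCuePoint);

// Load the cue point marker file saved from Soundbooth
var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, handleMarkersLoaded);
loader.load(new URLRequest("settings/markers.xml"));
```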

Tip: This code loads the markers file, converts the XML into ActionScript cue points, and listens to the cue point event from the video component to change the currently viewed frame and content.

Replace the path to the XML marker file in the last line of code if your XML file's name is different than the supplied sample.

Test the movie (Control > Test Movie) to see what happens. You should see the Output window trace the cue point names and times; in the case of the sample file movie, the cue point names are one, two, and three.

Close the SWF file and return to the FLA file. At this point you have a timing hook using the event handler function you added in the previous step. Now you know when the cue point occurs and you know its name. The key to synchronization is to use frame labels along the Timeline whose names match that of the cue points. When the event handler catches a particular cue point, you can tell the Timeline to go to and display the content at the frame label of the matching name.

Extend the Timeline to frame 40 by selecting the Timeline at frame 40 across all layers and inserting frames (F5).

Add a keyframe to the actions layer at frame 10 and add a frame label that matches the first cue point name. Add the frame label by selecting the keyframe on the Timeline and entering a name in the frame label field of the Property inspector.

Add a keyframe to the content layer at frame 10 and add the content that's related to the first cue point in the audio. For my example, I'll add some simple text to display along with the audio as a caption.

Repeat the previous two steps, creating a frame label with related caption text content for each cue point.

Update the top portion of the code located on frame 1 of the actions layer to look like this:
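The updated handler might look like this sketch, which sends the playhead to the frame label matching each cue point name:

```actionscript
import fl.video.MetadataEvent;

function handleCuePoint(event:MetadataEvent):void
{
    trace(event.info.name, event.info.time);
    // Jump to the frame label whose name matches the cue point name
    gotoAndStop(event.info.name);
}
```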

Test the movie (Control > Test Movie) to see the results. You should hear the audio play while text appears on the screen in synchronization.

Tip: If you're using Navigation or Event cue points already embedded in the FLV file, simply skip the XML loading parts of the code sample and assign the handleCuePoint event handler to the flvAudio instance.

To see a working example of this section, open the sample_4.fla file from the sample files folder. Examine the project by reviewing the code and the labeled frames that synchronize caption text to the audio's cue points (see Figure 17).

Figure 17. Timeline, Library, and Stage as they appear in the sample_4.fla file

Use XML files for playlists and synchronization with MP3 audio

While the use of FLV files and cue point metadata can be a powerful way to develop synchronized media, it may not always be an option if the audio files have to be usable outside of a SWF environment. In these cases, you can use MP3 files, XML playlists, and cue point lists to create synchronization between audio and other content.

To load an XML playlist file into ActionScript, follow these steps:

Create a new ActionScript 3 FLA file. Save it to the main directory of the sample files.

Rename the default layer to actions.

Click the keyframe on frame 1 and open the Actions panel to add some code.

Assuming that your XML playlist file is at the relative location settings/audio_playlist.xml, add the following code:
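A minimal sketch of that loading code, tracing the XML string once it arrives:

```actionscript
import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLRequest;

function handlePlaylistLoaded(event:Event):void
{
    // Convert the loaded text into an XML object and trace it
    var playlist:XML = new XML(event.target.data);
    trace(playlist);
}

var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, handlePlaylistLoaded);
loader.load(new URLRequest("settings/audio_playlist.xml"));
```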

Test the movie (Control > Test Movie) to see the results. You should see the trace actions fire to reveal the XML string. Notice that I'm using node names in the XML playlist that match the properties expected by the component's dataProvider: label and data. We'll load the playlist into a ComboBox component to allow users to select from multiple sounds.
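Extending the loading code, a self-contained sketch of that step might look like the following. It assumes a ComboBox instance on the Stage named audio_cb and playlist track nodes with label and data children, where data holds the MP3 path:

```actionscript
import flash.events.Event;
import flash.media.Sound;
import flash.net.URLLoader;
import flash.net.URLRequest;

function handlePlaylistLoaded(event:Event):void
{
    var playlist:XML = new XML(event.target.data);
    // Each track node supplies a label/data pair for the ComboBox
    for each (var track:XML in playlist.track)
    {
        audio_cb.addItem({label:String(track.label), data:String(track.data)});
    }
}

function handleTrackSelected(event:Event):void
{
    // The data property holds the MP3 file path from the playlist
    var snd:Sound = new Sound();
    snd.load(new URLRequest(event.target.selectedItem.data));
    snd.play();
}

audio_cb.addEventListener(Event.CHANGE, handleTrackSelected);

var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, handlePlaylistLoaded);
loader.load(new URLRequest("settings/audio_playlist.xml"));
```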

Tip: Working with events and event handler functions is the key to handling interactivity in ActionScript 3. Explore the Flash CS4 Help pages (F1) for more information on using audio events to work with metadata and error handling.

Test the movie (Control > Test Movie) to see the results. You should see the labels from the XML file appearing as labels in the ComboBox component. If you select an item in the list, you should hear the audio play.

To see a working example of this section, open the sample_5.fla file from the sample files folder (see Figure 18).

Figure 18. Timeline, Library, and Stage as they appear in the sample_5.fla file

Use the SoundChannel object to stop sounds and respond to the end of sounds

ActionScript 3 introduced the SoundChannel object, which allows you to stop a sound and respond to its completion without closing the stream. This is a handy trick to know when you want to load a sound once and play it multiple times without streaming it down each time.

To play a sound and control it with a SoundChannel object, follow these steps:

Create a new ActionScript 3 FLA file. Save it to the main directory of the sample files.

Rename the default layer buttons.

Create two button symbols. While frame 1 of the buttons layer is selected, place one instance of each button on the Stage.

Name one of the button instances play_btn and name the other stop_btn.

Add a new layer to the Timeline and name it actions.

Click the keyframe on frame 1 and open the Actions panel to add the following code:
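A sketch of that code follows, assuming the play_btn and stop_btn instances from the earlier steps; the MP3 path is a placeholder. The sound is loaded once, and each play() call returns a SoundChannel that can stop playback and report completion:

```actionscript
import flash.events.Event;
import flash.events.MouseEvent;
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.net.URLRequest;

// Load the sound once; subsequent plays reuse the loaded data
var snd:Sound = new Sound();
snd.load(new URLRequest("audio/mp3/sample.mp3"));

var channel:SoundChannel;

function playSound(event:MouseEvent):void
{
    // Stop any current playback first so sounds don't overlap
    if (channel != null)
    {
        channel.stop();
    }
    channel = snd.play();
    channel.addEventListener(Event.SOUND_COMPLETE, handleComplete);
}

function stopSound(event:MouseEvent):void
{
    if (channel != null)
    {
        channel.stop();
    }
}

function handleComplete(event:Event):void
{
    trace("sound finished playing");
}

play_btn.addEventListener(MouseEvent.CLICK, playSound);
stop_btn.addEventListener(MouseEvent.CLICK, stopSound);
```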

Test the movie (Control > Test Movie) to see the results. You should hear the sound play when you click the play button. The sound should stop playing when you click the stop button. Also notice that when you use the SoundChannel object in this way, it eliminates the overlap of two or more sounds during playback.

See sample_6.fla in the sample files for the working example of this section.

Where to go from here

ActionScript 3 provides a number of options for sound control beyond the basics discussed here. If you want to take your research further, the next steps will be to look deeper into the SoundChannel object and examine how to control volume, panning, and sound transformations.