This version of the specification is obsolete and has been replaced by the document at http://webaudio.github.io/web-audio-api/.
Do not attempt to implement this version of the specification.
Do not refer to this version except as a historical artifact.

Abstract

This specification describes a high-level JavaScript API for processing and
synthesizing audio in web applications. The primary paradigm is of an audio
routing graph, where a number of AudioNode objects are connected
together to define the overall audio rendering. The actual processing will
primarily take place in the underlying implementation (typically optimized
Assembly / C / C++ code), but direct
JavaScript processing and synthesis is also supported.

The introductory section covers the motivation
behind this specification.

This API is designed to be used in conjunction with other APIs and elements
on the web platform, notably: XMLHttpRequest
(using the responseType and response attributes). For
games and interactive applications, it is anticipated to be used with the
canvas 2D and WebGL 3D graphics APIs.

Status of this Document

This section describes the status of this document at the time of its
publication. Other documents may supersede this document. A list of current W3C
publications and the latest revision of this technical report can be found in
the W3C technical reports index at
http://www.w3.org/TR/.

This is the Editor's Draft of the Web Audio API
specification. It has been produced by the W3C Audio Working Group, which
is part of the W3C WebApps Activity.

Please send comments about this document to <public-audio@w3.org> (public archives of
the W3C audio mailing list). Web content and browser developers are encouraged
to review this draft.

Publication as a Working Draft does not imply endorsement by the W3C
Membership. This is a draft document and may be updated, replaced or obsoleted
by other documents at any time. It is inappropriate to cite this document as
other than work in progress.

1. Introduction

This section is informative.

Audio on the web has been fairly primitive up to this point and until very
recently has had to be delivered through plugins such as Flash and QuickTime.
The introduction of the audio element in HTML5 is very important,
allowing for basic streaming audio playback. But, it is not powerful enough to
handle more complex audio applications. For sophisticated web-based games or
interactive applications, another solution is required. It is a goal of this
specification to include the capabilities found in modern game audio engines as
well as some of the mixing, processing, and filtering tasks that are found in
modern desktop audio production applications.

The API has been designed with a wide variety of use cases in mind. Ideally, it should
be able to support any use case which could reasonably be implemented
with an optimized C++ engine controlled via JavaScript and run in a browser.
That said, modern desktop audio software can have very advanced capabilities,
some of which would be difficult or impossible to build with this system.
Apple's Logic Audio is one such application which has support for external MIDI
controllers, arbitrary plugin audio effects and synthesizers, highly optimized
direct-to-disk audio file reading/writing, tightly integrated time-stretching,
and so on. Nevertheless, the proposed system will be quite capable of
supporting a large range of reasonably complex games and interactive
applications, including musical ones. And it can be a very good complement to
the more advanced graphics features offered by WebGL. The API has been designed
so that more advanced capabilities can be added at a later time.

1.1. Features

Efficient biquad filters for lowpass, highpass, and other common filters.

A waveshaping effect for distortion and other non-linear effects.

Oscillators.

1.2. Modular Routing

Modular routing allows arbitrary connections between different AudioNode objects. Each node can
have inputs and/or outputs. A source node has no inputs
and a single output. A destination node has
one input and no outputs, the most common example being an AudioDestinationNode, the final destination to the audio
hardware. Other nodes such as filters can be placed between the source and destination nodes.
The developer doesn't have to worry about low-level stream format details
when two objects are connected together; the right
thing just happens. For example, if a mono audio stream is connected to a
stereo input it should just mix to left and right channels appropriately.
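
As an illustrative (non-normative) sketch of simple modular routing, assuming an AudioBuffer named buffer has already been loaded:

var context = new AudioContext();

function playSound(buffer) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    // The default BiquadFilterNode type is "lowpass".
    var lowpass = context.createBiquadFilter();
    // Route source -> filter -> destination; channel up/down-mixing is handled automatically.
    source.connect(lowpass);
    lowpass.connect(context.destination);
    source.start(0);
}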

1.3. API Overview

An AudioNode interface,
which represents audio sources, audio outputs, and intermediate processing
modules. AudioNodes can be dynamically connected together in a modular fashion. AudioNodes
exist in the context of an AudioContext.

An AudioDestinationNode interface, an
AudioNode subclass representing the final destination for all rendered
audio.

An AudioBuffer
interface, for working with memory-resident audio assets. These can
represent one-shot sounds, or longer audio clips.

A user agent is considered to be a conforming implementation if it
satisfies all of the MUST-, REQUIRED- and SHALL-level criteria in this specification that
apply to implementations.

4. The Audio API

4.1. The AudioContext Interface

This interface represents a set of AudioNode objects and their
connections. It allows for arbitrary routing of signals to the AudioDestinationNode
(what the user ultimately hears). Nodes are created from the context and are
then connected together. In most use
cases, only a single AudioContext is used per document.

4.1.1. Attributes

destination

An AudioDestinationNode
with a single input representing the final destination for all audio.
Usually this will represent the actual audio hardware.
All AudioNodes actively rendering
audio will directly or indirectly connect to destination.

sampleRate

The sample rate (in sample-frames per second) at which the
AudioContext handles audio. It is assumed that all AudioNodes in the
context run at this rate. In making this assumption, sample-rate
converters or "varispeed" processors are not supported in real-time
processing.

currentTime

This is a time in seconds which starts at zero when the context is
created and increases in real-time. All scheduled times are relative to
it. This is not a "transport" time which can be started, paused, and
re-positioned. It is always moving forward. A GarageBand-like timeline
transport system can be very easily built on top of this (in JavaScript).
This time corresponds to an ever-increasing hardware timestamp.

4.1.2. Methods and Parameters

The createBuffer method

Creates an AudioBuffer of the given size. The audio data in the
buffer will be zero-initialized (silent). A NOT_SUPPORTED_ERR exception will be thrown if
the numberOfChannels or sampleRate are out-of-bounds,
or if length is 0.

The numberOfChannels parameter
determines how many channels the buffer will have. An implementation must support at least 32 channels.

The length parameter determines the size of
the buffer in sample-frames.

The sampleRate parameter describes
the sample-rate of the linear PCM audio data in the buffer in
sample-frames per second. An implementation must support sample-rates in at least the range 22050 to 96000.
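
For example, a non-normative sketch of creating and filling a short stereo buffer (the tone frequency and duration here are arbitrary, and context is an assumed AudioContext):

// 2 channels, 1 second long, at the context's own sample-rate.
var buffer = context.createBuffer(2, context.sampleRate, context.sampleRate);
for (var channel = 0; channel < buffer.numberOfChannels; ++channel) {
    var data = buffer.getChannelData(channel);
    for (var i = 0; i < data.length; ++i) {
        // Fill with a 440Hz sine tone in the nominal range -1 -> +1.
        data[i] = Math.sin(2 * Math.PI * 440 * i / context.sampleRate);
    }
}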

The decodeAudioData method

Asynchronously decodes the audio file data contained in the
ArrayBuffer. The ArrayBuffer can, for example, be loaded from an XMLHttpRequest's
response attribute after setting the responseType to "arraybuffer".
Audio file data can be in any of the
formats supported by the audio element.

audioData is an ArrayBuffer containing
audio file data.

successCallback is a callback
function which will be invoked when the decoding is finished. The single
argument to this callback is an AudioBuffer representing the decoded PCM
audio data.

errorCallback is a callback function
which will be invoked if there is an error decoding the audio file
data.

The following steps must be performed:

1. Temporarily neuter the audioData ArrayBuffer in such a way that JavaScript code may not
access or modify the data.

2. Queue a decoding operation to be performed on another thread.

3. The decoding thread will attempt to decode the encoded audioData into linear PCM.
If a decoding error is encountered due to the audio format not being recognized or supported, or
because of corrupted/unexpected/inconsistent data, then the audioData neutered state
will be restored to normal, the errorCallback will be
scheduled to run on the main thread's event loop, and these steps will be terminated.

4. The decoding thread will take the result, representing the decoded linear PCM audio data,
and resample it to the sample-rate of the AudioContext if it is different from the sample-rate
of audioData. The final result (after possibly sample-rate converting) will be stored
in an AudioBuffer.

5. The audioData neutered state will be restored to normal.

6. The successCallback function will be scheduled to run on the main thread's event loop
given the AudioBuffer from step (4) as an argument.
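
A non-normative usage sketch, loading encoded audio with XMLHttpRequest as described above (the URL and the playSound helper are hypothetical, and context is an assumed AudioContext):

var request = new XMLHttpRequest();
request.open("GET", "sound.mp3", true);  // hypothetical URL
request.responseType = "arraybuffer";
request.onload = function() {
    context.decodeAudioData(request.response,
        function(decodedBuffer) {
            // decodedBuffer is an AudioBuffer at the AudioContext sample-rate.
            playSound(decodedBuffer);  // hypothetical helper
        },
        function() {
            console.log("error decoding audio data");
        });
};
request.send();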

The createMediaElementSource method

Creates a MediaElementAudioSourceNode given an HTMLMediaElement.
As a consequence of calling this method, audio playback from the HTMLMediaElement will be re-routed
into the processing graph of the AudioContext.

The createMediaStreamSource
method

Creates a MediaStreamAudioSourceNode given a MediaStream.
As a consequence of calling this method, audio playback from the MediaStream will be re-routed
into the processing graph of the AudioContext.

The createScriptProcessor method

Creates a ScriptProcessorNode for
direct audio processing using JavaScript. An INDEX_SIZE_ERR exception MUST be thrown if bufferSize or numberOfInputChannels or numberOfOutputChannels
are outside the valid range.

The bufferSize parameter determines the
buffer size in units of sample-frames. If it's not passed in, or if the
value is 0, then the implementation will choose the best buffer size for
the given environment, which will be a constant power of 2 throughout the lifetime
of the node. Otherwise if the author explicitly specifies the bufferSize,
it must be one of the following values: 256, 512, 1024, 2048, 4096, 8192,
16384. This value controls how
frequently the audioprocess event is dispatched and
how many sample-frames need to be processed each call. Lower values for
bufferSize will result in a lower (better) latency. Higher values will be necessary to
avoid audio breakup and glitches.
It is recommended for authors to not specify this buffer size and allow
the implementation to pick a good buffer size to balance between latency
and audio quality.

The numberOfInputChannels parameter (defaults to 2)
determines the number of channels for this node's input. Values of up to 32 must be supported.

The numberOfOutputChannels parameter (defaults to 2)
determines the number of channels for this node's output. Values of up to 32 must be supported.

It is invalid for both numberOfInputChannels and
numberOfOutputChannels to be zero.

The createDelay method

Creates a DelayNode
representing a variable delay line. The initial default delay time will
be 0 seconds.

The maxDelayTime parameter is
optional and specifies the maximum delay time in seconds allowed for the delay line. If specified, this value MUST be
greater than zero and less than three minutes or a NOT_SUPPORTED_ERR exception will be thrown.

The createBiquadFilter
method

Creates a BiquadFilterNode
representing a second order filter which can be configured as one of
several common filter types.

The createPeriodicWave method

Creates a PeriodicWave representing a waveform containing arbitrary harmonic content.
The real and imag parameters must be of type Float32Array of equal
lengths greater than zero and less than or equal to 4096 or an exception will be thrown.
These parameters specify the Fourier coefficients of a
Fourier series representing the partials of a periodic waveform.
The created PeriodicWave will be used with an OscillatorNode
and will represent a normalized time-domain waveform having maximum absolute peak value of 1.
Another way of saying this is that the generated waveform of an OscillatorNode
will have maximum peak value at 0dBFS. Conveniently, this corresponds to the full-range of the signal values used by the Web Audio API.
Because the PeriodicWave will be normalized on creation, the real and imag parameters
represent relative values.

The real parameter represents an array of cosine terms (traditionally the A terms).
In audio terminology, the first element (index 0) is the DC-offset of the periodic waveform and is usually set to zero.
The second element (index 1) represents the fundamental frequency. The third element represents the first overtone, and so on.

The imag parameter represents an array of sine terms (traditionally the B terms).
The first element (index 0) should be set to zero (and will be ignored) since this term does not exist in the Fourier series.
The second element (index 1) represents the fundamental frequency. The third element represents the first overtone, and so on.
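
As a non-normative sketch, a PeriodicWave containing only the fundamental and the first overtone could be created and used with an OscillatorNode as follows (the amplitudes are arbitrary, context is an assumed AudioContext, and the OscillatorNode's setPeriodicWave() method is assumed):

var real = new Float32Array([0, 0, 0]);    // cosine terms; index 0 is the DC offset
var imag = new Float32Array([0, 1, 0.5]);  // sine terms; index 1 is the fundamental
var wave = context.createPeriodicWave(real, imag);

var osc = context.createOscillator();
osc.setPeriodicWave(wave);
osc.connect(context.destination);
osc.start(0);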

4.1.3. Lifetime

This section is informative.

Once created, an AudioContext will continue to play sound until it has no more sound to play, or
the page goes away.

4.1b. The OfflineAudioContext Interface

OfflineAudioContext is a particular type of AudioContext for rendering/mixing-down (potentially) faster than real-time.
It does not render to the audio hardware, but instead renders as quickly as possible, calling a completion event handler
with the result provided as an AudioBuffer.

4.1b.1. Attributes

4.1b.2. Methods and Parameters

The startRendering
method

Given the current connections and scheduled changes, starts rendering audio. The
oncomplete handler will be called once the rendering has finished.
This method must only be called one time or an exception will be thrown.
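
A non-normative sketch of offline rendering, assuming the constructor takes numberOfChannels, length (in sample-frames), and sampleRate, and that someBuffer is a previously decoded AudioBuffer:

// Render 2 channels, 10 seconds at 44100Hz, potentially faster than real-time.
var offline = new OfflineAudioContext(2, 44100 * 10, 44100);

var source = offline.createBufferSource();
source.buffer = someBuffer;  // hypothetical, previously decoded AudioBuffer
source.connect(offline.destination);
source.start(0);

offline.oncomplete = function(event) {
    // event.renderedBuffer holds the mixed-down result as an AudioBuffer.
    var rendered = event.renderedBuffer;
};
offline.startRendering();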

4.1c. The OfflineAudioCompletionEvent Interface

4.1c.1. Attributes

renderedBuffer

An AudioBuffer containing the rendered audio data once an OfflineAudioContext has finished rendering.
It will have a number of channels equal to the numberOfChannels parameter
of the OfflineAudioContext constructor.

4.2. The AudioNode Interface

AudioNodes are the building blocks of an AudioContext. This interface
represents audio sources, the audio destination, and intermediate processing
modules. These modules can be connected together to form processing graphs for rendering audio to the
audio hardware. Each node can have inputs and/or outputs.
A source node has no inputs
and a single output. An AudioDestinationNode has
one input and no outputs and represents the final destination to the audio
hardware. Most processing nodes such as filters will have one input and one
output. Each type of AudioNode differs in the details of how it processes or synthesizes audio. But, in general, AudioNodes
will process its inputs (if it has any), and generate audio for its outputs (if it has any).

Each output has one or more channels. The exact number of channels depends on the details of the specific AudioNode.

An output may connect to one or more AudioNode inputs, thus fan-out is supported. An input initially has no connections,
but may be connected from one
or more AudioNode outputs, thus fan-in is supported. When the connect() method is called to connect
an output of an AudioNode to an input of an AudioNode, we call that a connection to the input.

Each AudioNode input has a specific number of channels at any given time. This number can change depending on the connection(s)
made to the input. If the input has no connections then it has one channel which is silent.

For performance reasons, practical implementations will need to use block processing, with each AudioNode processing a
fixed number of sample-frames of size block-size. In order to get uniform behavior across implementations, we will define this
value explicitly. block-size is defined to be 128 sample-frames which corresponds to roughly 3ms at a sample-rate of 44.1 kHz.

AudioNodes are EventTargets, as described in DOM[DOM]. This means that it is possible to dispatch events to AudioNodes the same
way that other EventTargets accept events.

4.2.1. Attributes

numberOfInputs

The number of inputs feeding into the AudioNode. For source nodes,
this will be 0.

numberOfOutputs

The number of outputs coming out of the AudioNode. This will be 0
for an AudioDestinationNode.

channelCount

The number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2
except for specific nodes where its value is specially determined.
This attribute has no effect for nodes with no inputs.
If this value is set to zero, the implementation MUST raise the
NOT_SUPPORTED_ERR exception.

4.2.2. Methods and Parameters

The connect to AudioNode method

The output parameter is an index
describing which output of the AudioNode to connect from. An
out-of-bound value throws an exception.

The input parameter is an index describing
which input of the destination AudioNode to connect to. An out-of-bound
value throws an exception.

It is possible to connect an AudioNode output to more than one input
with multiple calls to connect(). Thus, "fan-out" is supported.

It is possible to connect an AudioNode to another AudioNode which creates a cycle.
In other words, an AudioNode may connect to another AudioNode, which in turn connects back
to the first AudioNode. This is allowed only if there is at least one
DelayNode in the cycle or an exception will
be thrown.
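
For example, a feedback loop is legal because the cycle contains a DelayNode (a non-normative sketch; the gain and delay values are arbitrary, and source and context are assumed to exist):

var delay = context.createDelay(1.0);
delay.delayTime.value = 0.25;
var feedbackGain = context.createGain();
feedbackGain.gain.value = 0.5;

// source -> delay -> feedbackGain -> back into delay (the cycle contains a DelayNode).
source.connect(delay);
delay.connect(feedbackGain);
feedbackGain.connect(delay);
delay.connect(context.destination);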

There can only be one connection between a given output of one specific node and a given input of another specific node.
Multiple connections with the same termini are ignored. For example:

nodeA.connect(nodeB);
nodeA.connect(nodeB);
will have the same effect as
nodeA.connect(nodeB);

The connect to AudioParam method

Connects the AudioNode to an AudioParam, controlling the parameter
value with an audio-rate signal.

The destination parameter is the
AudioParam to connect to.

The output parameter is an index
describing which output of the AudioNode to connect from. An
out-of-bound value throws an exception.

It is possible to connect an AudioNode output to more than one AudioParam
with multiple calls to connect(). Thus, "fan-out" is supported.

It is possible to connect more than one AudioNode output to a single AudioParam
with multiple calls to connect(). Thus, "fan-in" is supported.

An AudioParam will take the rendered audio data from any AudioNode output connected to it and convert it to mono by down-mixing if it is not
already mono, then mix it together with other such outputs and finally will mix with the intrinsic
parameter value (the value the AudioParam would normally have without any audio connections), including any timeline changes
scheduled for the parameter.

There can only be one connection between a given output of one specific node and a specific AudioParam.
Multiple connections with the same termini are ignored. For example:

nodeA.connect(param);
nodeA.connect(param);
will have the same effect as
nodeA.connect(param);

The disconnect method

Disconnects an AudioNode's output.

The output parameter is an index
describing which output of the AudioNode to disconnect. An out-of-bound
value throws an exception.

4.2.3. Lifetime

This section is informative.

An implementation may choose any method to avoid unnecessary resource usage and unbounded memory growth of unused/finished
nodes. The following is a description to help guide the general expectation of how node lifetime would be managed.

An AudioNode will live as long as there are any references to it. There are several types of references:

A playing reference for both AudioBufferSourceNodes and OscillatorNodes.
These nodes maintain a playing
reference to themselves while they are currently playing.

A connection reference which occurs if another AudioNode is connected to it.

A tail-time reference which an AudioNode maintains on itself as long as it has
any internal processing state which has not yet been emitted. For example, a ConvolverNode has
a tail which continues to play even after receiving silent input (think about clapping your hands in a large concert
hall and continuing to hear the sound reverberate throughout the hall). Some AudioNodes have this
property. Please see details for specific nodes.

Any AudioNodes which are connected in a cycle and are directly or indirectly connected to the
AudioDestinationNode of the AudioContext will stay alive as long as the AudioContext is alive.

When an AudioNode has no references it will be deleted. But before it is deleted, it will disconnect itself
from any other AudioNodes which it is connected to. In this way it releases all connection references it has to other nodes.

Regardless of any of the above references, it can be assumed that the AudioNode will be deleted when its AudioContext is deleted.

4.4. The AudioDestinationNode Interface

This is an AudioNode
representing the final audio destination and is what the user will ultimately
hear. It can often be considered as an audio output device which is connected to
speakers. All rendered audio to be heard will be routed to this node, a
"terminal" node in the AudioContext's routing graph. There is only a single
AudioDestinationNode per AudioContext, provided through the
destination attribute of AudioContext.

4.4.1. Attributes

maxChannelCount

The maximum number of channels that the channelCount attribute can be set to.
An AudioDestinationNode representing the audio hardware end-point (the normal case) can potentially output more than
2 channels of audio if the audio hardware is multi-channel. maxChannelCount is the maximum number of channels that
this hardware is capable of supporting. If this value is 0, then this indicates that channelCount may not be
changed. This will be the case for an AudioDestinationNode in an OfflineAudioContext and also for
basic implementations with hardware support for stereo output only.

channelCount defaults to 2 for a destination in a normal AudioContext, and may be set to any non-zero value less than or equal
to maxChannelCount. An exception will be thrown if this value is not within the valid range. Giving a concrete example, if
the audio hardware supports 8-channel output, then we may set channelCount to 8, and render 8 channels of output.

For an AudioDestinationNode in an OfflineAudioContext, the channelCount is determined when the offline context is created and this value
may not be changed.

4.5. The AudioParam Interface

AudioParam controls an individual aspect of an AudioNode's functioning, such as
volume. The parameter can be set immediately to a particular value using the
value attribute. Or, value changes can be scheduled to happen at
very precise times (in the coordinate system of AudioContext.currentTime), for envelopes, volume fades, LFOs, filter sweeps, grain
windows, etc. In this way, arbitrary timeline-based automation curves can be
set on any AudioParam. Additionally, audio signals from the outputs of AudioNodes can be connected
to an AudioParam, summing with the intrinsic parameter value.

Some synthesis and processing AudioNodes have AudioParams as attributes whose values must
be taken into account on a per-audio-sample basis.
For other AudioParams, sample-accuracy is not important and the value changes can be sampled more coarsely.
Each individual AudioParam will specify that it is either an a-rate parameter
which means that its values must be taken into account on a per-audio-sample basis, or it is a k-rate parameter.

Implementations must use block processing, with each AudioNode
processing 128 sample-frames in each block.

For each 128 sample-frame block, the value of a k-rate parameter must
be sampled at the time of the very first sample-frame, and that value must be
used for the entire block. a-rate parameters must be sampled for each
sample-frame of the block.

4.5.1. Attributes

value

The parameter's floating-point value. This attribute is initialized to the
defaultValue. If value is set during a time when there are any automation events scheduled then
it will be ignored and no exception will be thrown.

defaultValue

Initial value for the value attribute

4.5.2. Methods and Parameters

An AudioParam maintains a time-ordered event list which is initially empty. The times are in
the time coordinate system of AudioContext.currentTime. The events define a mapping from time to value. The following methods
can change the event list by adding a new event into the list of a type specific to the method. Each event
has a time associated with it, and the events will always be kept in time-order in the list. These
methods will be called automation methods:

setValueAtTime() - SetValue

linearRampToValueAtTime() - LinearRampToValue

exponentialRampToValueAtTime() - ExponentialRampToValue

setTargetAtTime() - SetTarget

setValueCurveAtTime() - SetValueCurve

The following rules will apply when calling these methods:

If one of these events is added at a time where there is already an event of the exact same type, then the new event will replace the old
one.

If one of these events is added at a time where there is already one or more events of a different type, then it will be
placed in the list after them, but before events whose times are after the event.

If setValueCurveAtTime() is called for time T and duration D and there are any events having a time greater than T, but less than
T + D, then an exception will be thrown. In other words, it's not ok to schedule a value curve during a time period containing other events.

Similarly an exception will be thrown if any automation method is called at a time which is inside of the time interval
of a SetValueCurve event at time T and duration D.
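
As an illustrative (non-normative) example, the automation methods below schedule a simple attack/decay envelope on a GainNode's gain parameter (the times and values are arbitrary, and context is an assumed AudioContext):

var gainNode = context.createGain();
var t0 = context.currentTime;

gainNode.gain.setValueAtTime(0, t0);                 // start silent
gainNode.gain.linearRampToValueAtTime(1, t0 + 0.1);  // linear attack over 100ms
gainNode.gain.setTargetAtTime(0.3, t0 + 0.1, 0.2);   // exponential decay toward 0.3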

The setValueAtTime method

Schedules a parameter value change at the given time.

The value parameter is the value the
parameter will change to at the given time.

The startTime parameter is the time in the same time coordinate system as AudioContext.currentTime.

If there are no more events after this SetValue event, then for t >= startTime, v(t) = value. In other words, the value will remain constant.

If the next event (having time T1) after this SetValue event is not of type LinearRampToValue or ExponentialRampToValue,
then, for t: startTime <= t < T1, v(t) = value.
In other words, the value will remain constant during this time interval, allowing the creation of "step" functions.

If the next event after this SetValue event is of type LinearRampToValue or ExponentialRampToValue then please
see details below.

The linearRampToValueAtTime
method

Schedules a linear continuous change in parameter value from the
previous scheduled parameter value to the given value.

The value parameter is the value the
parameter will linearly ramp to at the given time.

The endTime parameter is the time in the same time coordinate system as AudioContext.currentTime.

The value during the time interval T0 <= t < T1 (where T0 is the time of the previous event and T1 is the endTime parameter passed into this method)
will be calculated as:

v(t) = V0 + (V1 - V0) * ((t - T0) / (T1 - T0))

Where V0 is the value at the time T0 and V1 is the value parameter passed into this method.

If there are no more events after this LinearRampToValue event then for t >= T1, v(t) = V1

The
exponentialRampToValueAtTime method

Schedules an exponential continuous change in parameter value from
the previous scheduled parameter value to the given value. Parameters
representing filter frequencies and playback rate are best changed
exponentially because of the way humans perceive sound.

The value parameter is the value the
parameter will exponentially ramp to at the given time. An exception will be thrown if this value is less than
or equal to 0, or if the value at the time of the previous event is less than or equal to 0.

The endTime parameter is the time in the same time coordinate system as AudioContext.currentTime.

The value during the time interval T0 <= t < T1 (where T0 is the time of the previous event and T1 is the endTime parameter passed into this method)
will be calculated as:

v(t) = V0 * (V1 / V0) ^ ((t - T0) / (T1 - T0))

Where V0 is the value at the time T0 and V1 is the value parameter passed into this method.

If there are no more events after this ExponentialRampToValue event then for t >= T1, v(t) = V1

The setTargetAtTime
method

Start exponentially approaching the target value at the given time
with a rate having the given time constant. Among other uses, this is
useful for implementing the "decay" and "release" portions of an ADSR
envelope. Please note that the parameter value does not immediately
change to the target value at the given time, but instead gradually
changes to the target value.

The target parameter is the value
the parameter will start changing to at the given time.

The startTime parameter is the time in the same time coordinate system as AudioContext.currentTime.

The timeConstant parameter is the
time-constant value of first-order filter (exponential) approach to the
target value. The larger this value is, the slower the transition will
be.

More precisely, timeConstant is the time it takes a first-order linear continuous time-invariant system
to reach the value 1 - 1/e (around 63.2%) given a step input response (transition from 0 to 1 value).

During the time interval: T0 <= t < T1, where T0 is the startTime parameter and T1 represents the time of the event following this
event (or infinity if there are no following events):

v(t) = V1 + (V0 - V1) * exp(-(t - T0) / timeConstant)

Where V0 is the initial value (the .value attribute) at T0 (the startTime parameter) and V1 is equal to the target
parameter.

The setValueCurveAtTime
method

Sets an array of arbitrary parameter values starting at the given
time for the given duration. The number of values will be scaled to fit
into the desired duration.

The values parameter is a Float32Array
representing a parameter value curve. These values will apply starting at
the given time and lasting for the given duration.

The startTime parameter is the time in the same time coordinate system as AudioContext.currentTime.

The duration parameter is the
amount of time in seconds (after the startTime parameter) during which values will be calculated according to the values parameter.

During the time interval startTime <= t < startTime + duration, the parameter value follows the values array, with the array's indices scaled linearly to span the duration.

After the end of the curve time interval (t >= startTime + duration), the value will remain constant at the final curve value,
until there is another automation event (if any).

The cancelScheduledValues
method

Cancels all scheduled parameter changes with times greater than or
equal to startTime.

The startTime parameter is the starting
time at and after which any previously scheduled parameter changes will
be cancelled. It is a time in the same time coordinate system as AudioContext.currentTime.

4.5.3. Computation of Value

computedValue is the final value controlling the audio DSP and is computed by the audio rendering thread during each rendering time quantum.
It must be internally computed as follows:

1. An intrinsic parameter value will be calculated at each time, which is either the value set directly to the value attribute,
or, if there are any scheduled parameter changes (automation events) with times before or at this time,
the value as calculated from these events. If the value attribute
is set after any automation events have been scheduled, then these events will be removed. When read, the value attribute
always returns the intrinsic value for the current time. If automation events are removed from a given time range, then the
intrinsic value will remain unchanged and stay at its previous value until either the value attribute is directly set, or automation events are added
for the time range.

2. An AudioParam will take the rendered audio data from any AudioNode output connected to it and convert it to mono by down-mixing if it is not
already mono, then mix it together with other such outputs. If there are no AudioNodes connected to it, then this value is 0, having no
effect on the computedValue.

3. The computedValue is the sum of the intrinsic value and the value calculated from (2).

4.7. The GainNode Interface

Changing the gain of an audio signal is a fundamental operation in audio
applications. The GainNode is one of the building blocks for creating mixers.
This interface is an AudioNode with a single input and single
output:

It multiplies the input audio signal by the (possibly time-varying) gain attribute, copying the result to the output.
By default, it will take the input and pass it through to the output unchanged, which represents a constant gain change
of 1.

As with other AudioParams, the gain parameter represents a mapping from time
(in the coordinate system of AudioContext.currentTime) to floating-point value.
Every PCM audio sample in the input is multiplied by the gain parameter's value for the specific time
corresponding to that audio sample. This multiplied value represents the PCM audio sample for the output.

The number of channels of the output will always equal the number of channels of the input, with each channel
of the input being multiplied by the gain values and being copied into the corresponding channel
of the output.

The implementation must make
gain changes to the audio stream smoothly, without introducing noticeable
clicks or glitches. This process is called "de-zippering".
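
A minimal (non-normative) sketch of a two-input mixer built from GainNodes, assuming source1, source2, and context already exist:

var gain1 = context.createGain();
var gain2 = context.createGain();
gain1.gain.value = 0.75;  // per-source level
gain2.gain.value = 0.25;

source1.connect(gain1);
source2.connect(gain2);
// Both gains fan in to the destination, where their outputs are summed.
gain1.connect(context.destination);
gain2.connect(context.destination);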

4.7.1. Attributes

gain

Represents the amount of gain to apply. Its
default value is 1 (no gain change). The nominal minValue is 0, but may be
set negative for phase inversion. The nominal maxValue is 1, but higher values are allowed (no
exception thrown). This parameter is a-rate.

4.8. The DelayNode Interface

A delay-line is a fundamental building block in audio applications. This
interface is an AudioNode with a single input and single output:

The number of channels of the output always equals the number of channels of the input.

It delays the incoming audio signal by a certain amount. Specifically, at
each time t, input signal input(t), delay time
delayTime(t) and output signal output(t), the output will be
output(t) = input(t - delayTime(t)). The default delayTime is 0
seconds (no delay). When the delay time is changed, the implementation must make
the transition smoothly, without introducing noticeable clicks or glitches to
the audio stream.

4.8.1. Attributes

delayTime

An AudioParam object representing the amount of delay (in seconds)
to apply. Its default value is 0 (no
delay). The minimum value is 0 and the maximum value is determined by the maxDelayTime
argument to the AudioContext method createDelay. This parameter is a-rate.

4.9. The AudioBuffer Interface

This interface represents a memory-resident audio asset (for one-shot sounds
and other short audio clips). Its format is non-interleaved IEEE 32-bit linear PCM with a
nominal range of -1 -> +1. It can contain one or more channels. Typically, it would be expected that the length
of the PCM data would be fairly short (usually somewhat less than a minute).
For longer sounds, such as music soundtracks, streaming should be used with the
audio element and MediaElementAudioSourceNode.

4.9.2. Methods and Parameters

The getChannelData method

Returns the Float32Array representing the PCM audio data for the specific channel.

The channel parameter is an index
representing the particular channel to get data for. An index value of 0 represents
the first channel. This index value MUST be less than numberOfChannels
or an exception will be thrown.

4.10. The AudioBufferSourceNode Interface

This interface represents an audio source from an in-memory audio asset in
an AudioBuffer. It is useful for playing short audio assets
which require a high degree of scheduling flexibility (can playback in
rhythmically perfect ways). The start() method is used to schedule when
sound playback will happen. The playback will stop automatically when
the buffer's audio data has been completely
played (if the loop attribute is false), or when the stop()
method has been called and the specified time has been reached. Please see more
details in the start() and stop() description. start() and stop() may not be issued
multiple times for a given
AudioBufferSourceNode.

numberOfInputs : 0
numberOfOutputs : 1

The number of channels of the output always equals the number of channels of the AudioBuffer
assigned to the .buffer attribute, or is one channel of silence if .buffer is NULL.

4.10.1. Attributes

playbackRate

The speed at which to render the audio stream. Its default
value is 1. This parameter is a-rate.

loop

Indicates if the audio data should play in a loop. The default value is false.

loopStart

An optional value in seconds where looping should begin if the loop attribute is true.
Its default value is 0, and it may usefully be set to any value between 0 and the duration of the buffer.

loopEnd

An optional value in seconds where looping should end if the loop attribute is true.
Its default value is 0, and it may usefully be set to any value between 0 and the duration of the buffer.

onended

A property used to set the EventHandler (described in HTML)
for the ended event that is dispatched to AudioBufferSourceNode
node types. When the playback of the buffer for an AudioBufferSourceNode
is finished, an event of type Event (described in HTML)
will be dispatched to the event handler.

4.10.2. Methods and
Parameters

The start method

Schedules a sound to playback at an exact time.

The when parameter describes at what time (in
seconds) the sound should start playing. It is in the same
time coordinate system as AudioContext.currentTime. If 0 is passed in for
this value or if the value is less than currentTime, then the
sound will start playing immediately. start may only be called one time
and must be called before stop is called or an exception will be thrown.

The offset parameter describes
the offset time in the buffer (in seconds) where playback will begin. If 0 is passed
in for this value, then playback will start from the beginning of the buffer.

The duration parameter
describes the duration of the portion (in seconds) to be played. If this parameter is not passed,
the duration will be equal to the total duration of the AudioBuffer minus the offset parameter.
Thus if neither offset nor duration are specified then the implied duration is
the total duration of the AudioBuffer.

The stop method

Schedules a sound to stop playback at an exact time.

The when parameter
describes at what time (in seconds) the sound should stop playing.
It is in the same time coordinate system as AudioContext.currentTime.
If 0 is passed in for this value or if the value is less than
currentTime, then the sound will stop playing immediately.
stop must only be called one time and only after a call to start,
or an exception will be thrown.
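
A non-normative scheduling sketch, assuming buffer is a previously decoded AudioBuffer and context is an AudioContext:

var source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);

var now = context.currentTime;
// Start 0.5 seconds from now, beginning 1 second into the buffer,
// and stop 2 seconds after playback begins.
source.start(now + 0.5, 1.0);
source.stop(now + 2.5);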

4.10.3. Looping

If the loop attribute is true when start() is called, then playback will continue indefinitely
until stop() is called and the stop time is reached. We'll call this "loop" mode. Playback always starts at the point in the buffer indicated
by the offset argument of start(), and in loop mode will continue playing until it reaches the actualLoopEnd position
in the buffer (or the end of the buffer), at which point it will wrap back around to the actualLoopStart position in the buffer, and continue
playing according to this pattern.

In loop mode then the actual loop points are calculated as follows from the loopStart and loopEnd attributes:
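
A rough, non-normative sketch of the intended behavior (falling back to the whole buffer when the loop points are unset or invalid):

// Non-normative sketch; buffer is the AudioBuffer assigned to the .buffer attribute.
var actualLoopStart, actualLoopEnd;
if (loopStart >= 0 && loopEnd > 0 && loopStart < loopEnd) {
    actualLoopStart = loopStart;
    actualLoopEnd = Math.min(loopEnd, buffer.duration);
} else {
    actualLoopStart = 0;
    actualLoopEnd = buffer.duration;
}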

Note that the default values for loopStart and loopEnd are both 0, which indicates that looping should occur from the very start
to the very end of the buffer.

Please note that as a low-level implementation detail, the AudioBuffer is at a specific sample-rate (usually the same as the AudioContext sample-rate), and
that the loop times (in seconds) must be converted to the appropriate sample-frame positions in the buffer according to this sample-rate.

4.11. The MediaElementAudioSourceNode
Interface

This interface represents an audio source from an audio or
video element.

numberOfInputs : 0
numberOfOutputs : 1

The number of channels of the output corresponds to the number of channels of the media referenced by the HTMLMediaElement.
Thus, changes to the media element's .src attribute can change the number of channels output by this node.
If the .src attribute is not set, then the number of channels output will be one silent channel.

The number of channels of the single output equals the number of channels of the audio referenced by
the HTMLMediaElement passed in as the argument to createMediaElementSource(), or is 1 if the HTMLMediaElement
has no audio.

The HTMLMediaElement must behave in an identical fashion after the MediaElementAudioSourceNode has
been created, except that the rendered audio will no longer be heard directly, but instead will be heard
as a consequence of the MediaElementAudioSourceNode being connected through the routing graph. Thus pausing, seeking,
volume, .src attribute changes, and other aspects of the HTMLMediaElement must behave as they normally would
if not used with a MediaElementAudioSourceNode.

4.12. The ScriptProcessorNode Interface

The ScriptProcessorNode is constructed with a bufferSize which
must be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384.
This value controls how frequently the audioprocess event is
dispatched and how many sample-frames need to be processed each call.
audioprocess events are only dispatched if the
ScriptProcessorNode
has at least one input or one output connected.
Lower numbers for bufferSize will result in a lower (better)
latency. Higher numbers will be necessary to avoid
audio breakup and glitches.
This value will be picked by the implementation if the bufferSize argument
to createScriptProcessor is not passed in, or is set to 0.

numberOfInputChannels and numberOfOutputChannels
determine the number of input and output channels. It is invalid for both
numberOfInputChannels and numberOfOutputChannels to
be zero.

4.12.1. Attributes

onaudioprocess

A property used to set the EventHandler (described in HTML)
for the audioprocess event that is dispatched to ScriptProcessorNode
node types. An event of type AudioProcessingEvent
will be dispatched to the event handler.

bufferSize

The size of the buffer (in sample-frames) which needs to be
processed each time onaudioprocess is called. Legal values
are (256, 512, 1024, 2048, 4096, 8192, 16384).

4.13. The AudioProcessingEvent Interface

The event handler processes audio from the input (if any) by accessing the
audio data from the inputBuffer attribute. The audio data which is
the result of the processing (or the synthesized data if there are no inputs)
is then placed into the outputBuffer.

4.13.1. Attributes

playbackTime

The time when the audio will be played in the same time coordinate system as AudioContext.currentTime.
playbackTime allows for very tight synchronization between
processing directly in JavaScript with the other events in the context's
rendering graph.

inputBuffer

An AudioBuffer containing the input audio data. It will have a number of channels equal to the numberOfInputChannels parameter
of the createScriptProcessor() method. This AudioBuffer is only valid while in the scope of the onaudioprocess
function. Its values will be meaningless outside of this scope.

outputBuffer

An AudioBuffer where the output audio data should be written. It will have a number of channels equal to the
numberOfOutputChannels parameter of the createScriptProcessor() method.
Script code within the scope of the onaudioprocess function is expected to modify the
Float32Array arrays representing channel data in this AudioBuffer.
Any script modifications to this AudioBuffer outside of this scope will not produce any audible effects.
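
A minimal (non-normative) onaudioprocess handler that simply copies the input to the output, assuming source and context already exist:

var processor = context.createScriptProcessor(4096, 2, 2);
processor.onaudioprocess = function(event) {
    var inputBuffer = event.inputBuffer;
    var outputBuffer = event.outputBuffer;
    for (var channel = 0; channel < outputBuffer.numberOfChannels; ++channel) {
        var input = inputBuffer.getChannelData(channel);
        var output = outputBuffer.getChannelData(channel);
        for (var i = 0; i < output.length; ++i) {
            output[i] = input[i];  // simple pass-through; real processing would go here
        }
    }
};
// Events are only dispatched while the node has an input or output connected.
source.connect(processor);
processor.connect(context.destination);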

An exponential distance model which calculates distanceGain according to:

pow(distance / refDistance, -rolloffFactor)

refDistance

A reference distance for reducing volume as the source moves further from
the listener. The default value is 1.

maxDistance

The maximum distance between source and listener, after which the
volume will not be reduced any further. The default value is 10000.

rolloffFactor

Describes how quickly the volume is reduced as the source moves away
from the listener. The default value is 1.

coneInnerAngle

A parameter for directional audio sources, this is an angle, inside
of which there will be no volume reduction. The default value is 360.

coneOuterAngle

A parameter for directional audio sources, this is an angle, outside
of which the volume will be reduced to a constant value of
coneOuterGain. The default value is 360.

coneOuterGain

A parameter for directional audio sources, this is the amount of
volume reduction outside of the coneOuterAngle. The default value is 0.

4.14.3. Methods and Parameters

The setPosition method

Sets the position of the audio source relative to the
listener attribute. A 3D cartesian coordinate system is used.

The x, y, z parameters represent the coordinates
in 3D space.

The default value is (0,0,0)

The setOrientation method

Describes which direction the audio source is pointing in the 3D
cartesian coordinate space. Depending on how directional the sound is
(controlled by the cone attributes), a sound pointing away from
the listener can be very quiet or completely silent.

The x, y, z parameters represent a direction
vector in 3D space.

The default value is (1,0,0)

The setVelocity method

Sets the velocity vector of the audio source. This vector controls
both the direction of travel and the speed in 3D space. This velocity
relative to the listener's velocity is used to determine how much doppler
shift (pitch change) to apply. The units used for this vector are meters / second
and are independent of the units used for position and orientation vectors.

4.15. The AudioListener Interface

This interface represents the position and orientation of the person
listening to the audio scene. All PannerNode objects
spatialize in relation to the AudioContext's listener. See this section for more details about
spatialization.

4.15.1. Attributes

dopplerFactor

A constant used to determine the amount of pitch shift to use when
rendering a doppler effect. The default value is 1.

speedOfSound

The speed of sound used for calculating doppler shift. The default
value is 343.3.

4.15.2. Methods and Parameters

The setPosition method

Sets the position of the listener in a 3D cartesian coordinate
space. PannerNode objects use this position relative to
individual audio sources for spatialization.

The x, y, z parameters represent
the coordinates in 3D space.

The default value is (0,0,0)

The setOrientation method

Describes which direction the listener is pointing in the 3D
cartesian coordinate space. Both a front vector and an up
vector are provided. In simple human terms, the front vector represents which
direction the person's nose is pointing. The up vector represents the
direction the top of a person's head is pointing. These values are expected to
be linearly independent (at right angles to each other). For normative requirements
of how these values are to be interpreted, see the
spatialization section.

The x, y, z parameters represent
a front direction vector in 3D space, with the default value being (0,0,-1)

The xUp, yUp, zUp parameters
represent an up direction vector in 3D space, with the default value being (0,1,0)

The setVelocity method

Sets the velocity vector of the listener. This vector controls both
the direction of travel and the speed in 3D space. This velocity relative to
an audio source's velocity is used to determine how much doppler shift
(pitch change) to apply. The units used for this vector are meters / second
and are independent of the units used for position and orientation vectors.

4.16.1. Attributes

buffer

A mono, stereo, or 4-channel AudioBuffer containing the (possibly multi-channel) impulse response
used by the ConvolverNode. This AudioBuffer must be of the same sample-rate as the AudioContext or an exception will
be thrown. At the time when this attribute is set, the buffer and the state of the normalize
attribute will be used to configure the ConvolverNode with this impulse response having the given normalization.
The initial value of this attribute is null.

normalize

Controls whether the impulse response from the buffer will be scaled
by an equal-power normalization when the buffer attribute
is set. Its default value is true in order to achieve a more
uniform output level from the convolver when loaded with diverse impulse
responses. If normalize is set to false, then
the convolution will be rendered with no pre-processing/scaling of the
impulse response. Changes to this value do not take effect until the next time
the buffer attribute is set.

If the normalize attribute is false when the buffer attribute is set then the
ConvolverNode will perform a linear convolution given the exact impulse response contained within the buffer.

Otherwise, if the normalize attribute is true when the buffer attribute is set then the
ConvolverNode will first perform a scaled RMS-power analysis of the audio data contained within buffer to calculate a
normalizationScale given this algorithm:
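
The following is a rough, non-normative sketch of such an equal-power calculation; the calibration constants here are assumptions for illustration, not values taken from this specification:

// Non-normative sketch of an equal-power normalization of an impulse response.
function calculateNormalizationScale(buffer) {
    var GainCalibration = 0.00125;  // assumed empirical calibration constant
    var MinPower = 0.000125;        // avoid dividing by very small numbers

    var numberOfChannels = buffer.numberOfChannels;
    var length = buffer.length;
    var power = 0;
    for (var channel = 0; channel < numberOfChannels; ++channel) {
        var data = buffer.getChannelData(channel);
        for (var i = 0; i < length; ++i)
            power += data[i] * data[i];
    }
    // RMS power across all channels, clamped to a minimum.
    power = Math.sqrt(power / (numberOfChannels * length));
    power = Math.max(power, MinPower);

    return GainCalibration / power;
}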

During processing, the ConvolverNode will then take this calculated normalizationScale value and multiply it by the result of the linear convolution
resulting from processing the input with the impulse response (represented by the buffer) to produce the
final output. Or any mathematically equivalent operation may be used, such as pre-multiplying the
input by normalizationScale, or pre-multiplying a version of the impulse-response by normalizationScale.

4.17. The AnalyserNode Interface

This interface represents a node which is able to provide real-time
frequency and time-domain analysis
information. The audio stream will be passed un-processed from input to output.

4.17.1. Attributes

fftSize

The size of the FFT used for frequency-domain analysis. This must be
a non-zero power of two in the range 32 to 2048, otherwise an INDEX_SIZE_ERR exception MUST be thrown.
The default value is 2048.

frequencyBinCount

Half the FFT size.

minDecibels

The minimum power value in the scaling range for the FFT analysis
data for conversion to unsigned byte values.
The default value is -100.
If the value of this attribute is set to a value greater than or equal to maxDecibels,
an INDEX_SIZE_ERR exception MUST be thrown.

maxDecibels

The maximum power value in the scaling range for the FFT analysis
data for conversion to unsigned byte values.
The default value is -30.
If the value of this attribute is set to a value less than or equal to minDecibels,
an INDEX_SIZE_ERR exception MUST be thrown.

smoothingTimeConstant

A value from 0 -> 1 where 0 represents no time averaging
with the last analysis frame.
The default value is 0.8.
If the value of this attribute is set to a value less than 0 or more than 1,
an INDEX_SIZE_ERR exception MUST be thrown.

4.17.2. Methods and Parameters

The getFloatFrequencyData
method

Copies the current frequency data into the passed floating-point
array. If the array has fewer elements than the frequencyBinCount, the
excess elements will be dropped. If the array has more elements than
the frequencyBinCount, the excess elements will be ignored.

The array parameter is where
frequency-domain analysis data will be copied.

The getByteFrequencyData
method

Copies the current frequency data into the passed unsigned byte
array. If the array has fewer elements than the frequencyBinCount, the
excess elements will be dropped. If the array has more elements than
the frequencyBinCount, the excess elements will be ignored.

The array parameter is where
frequency-domain analysis data will be copied.

The getByteTimeDomainData
method

Copies the current time-domain (waveform) data into the passed
unsigned byte array. If the array has fewer elements than the
fftSize, the excess elements will be dropped. If the array has more
elements than fftSize, the excess elements will be ignored.

The array parameter is where time-domain
analysis data will be copied.
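
A non-normative sketch of polling frequency data for visualization, assuming source and context already exist:

var analyser = context.createAnalyser();
analyser.fftSize = 2048;
source.connect(analyser);
analyser.connect(context.destination);

var freqData = new Uint8Array(analyser.frequencyBinCount);

function draw() {
    analyser.getByteFrequencyData(freqData);
    // freqData now holds one byte per frequency bin, scaled between minDecibels and maxDecibels.
    requestAnimationFrame(draw);
}
requestAnimationFrame(draw);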

4.18. The ChannelSplitterNode Interface

The ChannelSplitterNode is for use in more advanced
applications and would often be used in conjunction with ChannelMergerNode.

This interface represents an AudioNode for accessing the individual channels
of an audio stream in the routing graph. It has a single input, and a number of
"active" outputs which equals the number of channels in the input audio stream.
For example, if a stereo input is connected to a
ChannelSplitterNode then the number of active outputs will be two
(one from the left channel and one from the right). There are always a total
number of N outputs (determined by the numberOfOutputs parameter to the AudioContext method createChannelSplitter()),
where the default number is 6 if this value is not provided. Any outputs
which are not "active" will output silence and would typically not be connected
to anything.

Example:
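
A non-normative sketch of this kind of per-channel ("matrix") mixing over a stereo source, assuming source and context already exist:

var splitter = context.createChannelSplitter(2);
var merger = context.createChannelMerger(2);
var gainLeft = context.createGain();
var gainRight = context.createGain();

source.connect(splitter);
// Channel 0 of the input appears on splitter output 0, channel 1 on output 1.
splitter.connect(gainLeft, 0);
splitter.connect(gainRight, 1);
// Recombine the individually controlled channels into a single stereo stream.
gainLeft.connect(merger, 0, 0);
gainRight.connect(merger, 0, 1);
merger.connect(context.destination);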

Please note that in this example, the splitter does not interpret the channel identities (such as left, right, etc.), but
simply splits out channels in the order that they are input.

One application for ChannelSplitterNode is for doing "matrix
mixing" where individual gain control of each channel is desired.

Web IDL

interface ChannelSplitterNode : AudioNode {
};

4.19. The ChannelMergerNode Interface

The ChannelMergerNode is for use in more advanced applications
and would often be used in conjunction with ChannelSplitterNode.

numberOfInputs : Variable N (default to 6) // number of connected inputs may be less than this
numberOfOutputs : 1
channelCountMode = "max";
channelInterpretation = "speakers";

This interface represents an AudioNode for combining channels from multiple
audio streams into a single audio stream. It has a variable number of inputs (defaulting to 6), but not all of them
need be connected. There is a single output whose audio stream has a number of
channels equal to the sum of the numbers of channels of all the connected
inputs. For example, if a ChannelMergerNode has two connected
inputs (both stereo), then the output will be four channels, the first two from
the first input and the second two from the second input. In another example
with two connected inputs (both mono), the output will be two channels
(stereo), with the left channel coming from the first input and the right
channel coming from the second input.

Example:
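
A non-normative sketch, combining two mono sources into a single stereo stream (source1, source2, and context are assumed to exist):

var merger = context.createChannelMerger(2);
// Input 0 becomes the left channel and input 1 the right channel, purely by connection order.
source1.connect(merger, 0, 0);
source2.connect(merger, 0, 1);
merger.connect(context.destination);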

Please note that in this example, the merger does not interpret the channel identities (such as left, right, etc.), but
simply combines channels in the order that they are input.

Be aware that it is possible to connect a ChannelMergerNode
in such a way that it outputs an audio stream with a number of channels
greater than the maximum supported by the audio hardware. In this case where such an output is connected
to the AudioContext .destination (the audio hardware), then the extra channels will be ignored.
Thus, the ChannelMergerNode should be used in situations where the number
of channels is well understood.

Web IDL

interface ChannelMergerNode : AudioNode {
};

4.20. The DynamicsCompressorNode Interface

DynamicsCompressorNode is an AudioNode processor implementing a dynamics
compression effect.

Dynamics compression is very commonly used in musical production and game
audio. It lowers the volume of the loudest parts of the signal and raises the
volume of the softest parts. Overall, a louder, richer, and fuller sound can be
achieved. It is especially important in games and musical applications where
large numbers of individual sounds are played simultaneously to control the
overall signal level and help avoid clipping (distorting) the audio output to
the speakers.

4.20.1. Attributes

threshold

The decibel value above which the compression will start taking
effect. Its default value is -24, with a nominal range of -100 to 0.

knee

A decibel value representing the range above the threshold where the
curve smoothly transitions to the "ratio" portion. Its default value is 30, with a nominal range of 0 to 40.

ratio

The amount of dB change in input for a 1 dB change in output. Its default value is 12, with a nominal range of 1 to 20.

reduction

A read-only decibel value for metering purposes, representing the
current amount of gain reduction that the compressor is applying to the
signal. If fed no signal the value will be 0 (no gain reduction). The nominal range is -20 to 0.

attack

The amount of time (in seconds) to reduce the gain by 10dB. Its default value is 0.003, with a nominal range of 0 to 1.

release

The amount of time (in seconds) to increase the gain by 10dB. Its default value is 0.250, with a nominal range of 0 to 1.

4.21. The BiquadFilterNode Interface

BiquadFilterNode is an AudioNode processor implementing very common
low-order filters.

Low-order filters are the building blocks of basic tone controls (bass, mid,
treble), graphic equalizers, and more advanced filters. Multiple
BiquadFilterNode filters can be combined to form more complex filters. The
filter parameters such as "frequency" can be changed over time for filter
sweeps, etc. Each BiquadFilterNode can be configured as one of a number of
common filter types as shown in the IDL below. The default filter type
is "lowpass".

The filter types are briefly described below. We note that all of these
filters are very commonly used in audio processing. In terms of implementation,
they have all been derived from standard analog filter prototypes. For more
technical details, we refer the reader to the excellent reference by
Robert Bristow-Johnson.

All parameters are k-rate with the following default parameter values:

frequency

350Hz, with a nominal range of 10 to the Nyquist frequency (half the sample-rate).

4.21.1 "lowpass"

A lowpass filter
allows frequencies below the cutoff frequency to pass through and attenuates
frequencies above the cutoff. It implements a standard second-order
resonant lowpass filter with 12dB/octave rolloff.

frequency

The cutoff frequency

Q

Controls how peaked the response will be at the cutoff frequency. A
large value makes the response more peaked. Please note that for this filter type, this
value is not a traditional Q, but is a resonance value in decibels.

gain

Not used in this filter type

4.21.2 "highpass"

A highpass
filter is the opposite of a lowpass filter. Frequencies above the cutoff
frequency are passed through, but frequencies below the cutoff are attenuated.
It implements a standard second-order resonant highpass filter with
12dB/octave rolloff.

frequency

The cutoff frequency below which the frequencies are attenuated

Q

Controls how peaked the response will be at the cutoff frequency. A
large value makes the response more peaked. Please note that for this filter type, this
value is not a traditional Q, but is a resonance value in decibels.

gain

Not used in this filter type

4.21.3 "bandpass"

A bandpass
filter allows a range of frequencies to pass through and attenuates the
frequencies below and above this frequency range. It implements a
second-order bandpass filter.

4.22. The WaveShaperNode Interface

4.22.1. Attributes

curve

The shaping curve used for the waveshaping effect. The input signal
is nominally within the range -1 -> +1. Each input sample within this
range will index into the shaping curve with a signal level of zero
corresponding to the center value of the curve array. Any sample value
less than -1 will correspond to the first value in the curve array. Any
sample value greater than +1 will correspond to the last value in
the curve array. The implementation must perform linear interpolation between
adjacent points in the curve. Initially the curve attribute is null, which means that
the WaveShaperNode will pass its input to its output without modification.

oversample

Specifies what type of oversampling (if any) should be used when applying the shaping curve.
The default value is "none", meaning the curve will be applied directly to the input samples.
A value of "2x" or "4x" can improve the quality of the processing by avoiding some aliasing, with
the "4x" value yielding the highest quality. For some applications, it's better to use no oversampling
in order to get a very precise shaping curve.

A value of "2x" or "4x" means that the following steps must be performed:

Up-sample the input samples to 2x or 4x the sample-rate of the AudioContext. Thus for each
processing block of 128 samples, generate 256 (for 2x) or 512 (for 4x) samples.

Apply the shaping curve.

Down-sample the result back to the sample-rate of the AudioContext, taking the 256 (or 512) processed samples and generating 128 as
the final result.

The exact up-sampling and down-sampling filters are not specified, and can be tuned for sound quality (low aliasing, etc.), low latency, and performance.
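The following informative sketch shows a simple soft-clipping distortion, assuming an AudioContext named context and a source node named source; the particular shaping function is chosen only for illustration:

// Informative sketch: a gentle cubic soft-clipping curve.
var shaper = context.createWaveShaper();

// Build a shaping curve; input values in [-1, +1] index into this array,
// with a signal level of zero mapping to the center element.
var curve = new Float32Array(1024);
for (var i = 0; i < curve.length; ++i) {
    var x = (i / (curve.length - 1)) * 2 - 1; // map index to [-1, +1]
    curve[i] = 1.5 * x - 0.5 * x * x * x;     // gentle cubic soft clip
}
shaper.curve = curve;

// Optionally oversample to reduce aliasing from the non-linearity.
shaper.oversample = "4x";

source.connect(shaper);
shaper.connect(context.destination);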

4.23. The OscillatorNode Interface

OscillatorNode represents an audio source generating a periodic waveform. It can be set to
a few commonly used waveforms. Additionally, it can be set to an arbitrary periodic
waveform through the use of a PeriodicWave object.

Oscillators are common foundational building blocks in audio synthesis. An OscillatorNode will start emitting sound at the time
specified by the start() method.

Mathematically speaking, a continuous-time periodic waveform can have very high (or infinitely high) frequency information when considered
in the frequency domain. When this waveform is sampled as a discrete-time digital audio signal at a particular sample-rate,
then care must be taken to discard (filter out) the high-frequency information higher than the Nyquist frequency (half the sample-rate)
before converting the waveform to a digital form. If this is not done, then aliasing of higher frequencies (than the Nyquist frequency) will fold
back as mirror images into frequencies lower than the Nyquist frequency. In many cases this will cause audibly objectionable artifacts.
This is a basic and well understood principle of audio DSP.

There are several practical approaches that an implementation may take to avoid this aliasing.
But regardless of approach, the idealized discrete-time digital audio signal is well defined mathematically.
The trade-off for the implementation is a matter of implementation cost (in terms of CPU usage) versus fidelity to
achieving this ideal.

It is expected that an implementation will take some care in achieving this ideal, but it is reasonable to consider lower-quality,
less-costly approaches on lower-end hardware.

Both .frequency and .detune are a-rate parameters and are used together to determine a computedFrequency value:

computedFrequency(t) = frequency(t) * pow(2, detune(t) / 1200)

The OscillatorNode's instantaneous phase at each time is the time integral of computedFrequency.
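As an informative illustration of this relationship, a detune value of +1200 cents raises the computed frequency by one octave (context is an assumed AudioContext):

// Informative sketch: frequency and detune combine multiplicatively.
var osc = context.createOscillator();
osc.type = "sawtooth";
osc.frequency.value = 440; // Hz
osc.detune.value = 1200;   // cents

// computedFrequency = 440 * pow(2, 1200 / 1200) = 880 Hz
osc.connect(context.destination);
osc.start(0);
osc.stop(context.currentTime + 1);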

4.23.1. Attributes

type

The shape of the periodic waveform. It may directly be set to any of the type constant values except for "custom".
The setPeriodicWave() method can be used to set a custom waveform, which results in this attribute
being set to "custom". The default value is "sine".

frequency

The frequency (in Hertz) of the periodic waveform. Its default value is 440. This parameter is a-rate.

detune

A detuning value (in Cents) which will offset the frequency by the given amount. Its default value is 0.
This parameter is a-rate.

onended

A property used to set the EventHandler (described in HTML)
for the ended event that is dispatched to OscillatorNode
node types. When an OscillatorNode has finished playing
(i.e. its stop time has been reached), an event of type Event (described in HTML)
will be dispatched to the event handler.

4.24. The PeriodicWave Interface

4.25. The MediaStreamAudioSourceNode
Interface

This interface represents an audio source from a MediaStream.
The first AudioMediaStreamTrack from the MediaStream will be
used as a source of audio.

numberOfInputs : 0
numberOfOutputs : 1

The number of channels of the output corresponds to the number of channels of the AudioMediaStreamTrack.
If there is no valid audio track, then the output will be a single channel of silence.

Web IDL

interface MediaStreamAudioSourceNode : AudioNode {
};
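The following informative sketch routes microphone input into the graph, assuming an AudioContext named context and a getUserMedia() implementation (a vendor-prefixed form may be required):

// Informative sketch: live audio input as a source in the graph.
navigator.getUserMedia({ audio: true }, function(stream) {
    var micSource = context.createMediaStreamSource(stream);
    var gain = context.createGain();
    gain.gain.value = 0.5;
    micSource.connect(gain);
    gain.connect(context.destination);
}, function(error) {
    console.log("getUserMedia error: " + error);
});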

4.26. The MediaStreamAudioDestinationNode
Interface

This interface is an audio destination representing a MediaStream with a single AudioMediaStreamTrack.
This MediaStream is created when the node is created and is accessible via the stream attribute.
This stream can be used in a similar way as a MediaStream obtained via getUserMedia(), and
can, for example, be sent to a remote peer using the RTCPeerConnection addStream() method.

4.26.1. Attributes

stream

A MediaStream containing a single AudioMediaStreamTrack with the same number of channels
as the node itself.
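The following informative sketch sends the graph's output to a remote peer, assuming an AudioContext named context and an existing RTCPeerConnection named peerConnection:

// Informative sketch: capturing graph output as a MediaStream.
var streamDestination = context.createMediaStreamDestination();

var osc = context.createOscillator();
osc.connect(streamDestination);
osc.start(0);

// The node's stream can be used like one obtained from getUserMedia().
peerConnection.addStream(streamDestination.stream);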

6. Mixer Gain Structure

This section is informative.

Background

One of the most important considerations when dealing with audio processing
graphs is how to adjust the gain (volume) at various points. For example, in a
standard mixing board model, each input bus has pre-gain, post-gain, and
send-gains. Submix and master out busses also have gain control. The gain
control described here can be used to implement standard mixing boards as well
as other architectures.

Summing Inputs

The inputs to AudioNodes have
the ability to accept connections from multiple outputs. The input then acts as
a unity gain summing junction with each output signal being added with the
others:

In cases where the channel layouts of the outputs do not match, a mix (usually up-mix) will occur according to the mixing rules.

Gain Control

But many times, it's important to be able to control the gain for each of
the output signals. The GainNode gives this
control:

Using these two concepts of unity gain summing junctions and GainNodes,
it's possible to construct simple or complex mixing scenarios.
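The following informative sketch shows two sources summed at a single input, each through its own GainNode, assuming an AudioContext named context and source nodes source1 and source2:

// Informative sketch: per-source gain before a summing junction.
var gain1 = context.createGain();
var gain2 = context.createGain();

source1.connect(gain1);
source2.connect(gain2);

// Both GainNodes connect to the same input, which acts as a
// unity gain summing junction.
gain1.connect(context.destination);
gain2.connect(context.destination);

gain1.gain.value = 0.75;
gain2.gain.value = 0.25;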

Example: Mixer with Send Busses

In a routing scenario involving multiple sends and submixes, explicit
control is needed over the volume or "gain" of each connection to a mixer. Such
routing topologies are very common and exist in even the simplest of electronic
gear sitting around in a basic recording studio.

Here's an example with two send mixers and a main mixer. Although possible,
for simplicity's sake, pre-gain control and insert effects are not illustrated:

This diagram is using a shorthand notation where "send 1", "send 2", and
"main bus" are actually inputs to AudioNodes, but here are represented as
summing busses, where the intersections g2_1, g3_1, etc. represent the "gain"
or volume for the given source on the given mixer. In order to expose this
gain, a GainNode is used:
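The following informative sketch shows one way such a send topology might be expressed, assuming an AudioContext named context, a source node source1, and effect nodes reverb and delay standing in for the send processing:

// Informative sketch: one source feeding a main bus and two effect sends,
// each connection through its own GainNode (the "g" values in the diagram).
var mainBus = context.createGain();
var sendGain1 = context.createGain(); // level into send 1 (reverb)
var sendGain2 = context.createGain(); // level into send 2 (delay)

source1.connect(mainBus);     // dry level onto the main bus
source1.connect(sendGain1);
source1.connect(sendGain2);

sendGain1.connect(reverb);
sendGain2.connect(delay);

// Effect returns are summed back onto the main bus.
reverb.connect(mainBus);
delay.connect(mainBus);
mainBus.connect(context.destination);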

7. Dynamic Lifetime

Background

In addition to allowing the creation of static routing configurations, it
should also be possible to do custom effect routing on dynamically allocated
voices which have a limited lifetime. For the purposes of this discussion,
let's call these short-lived voices "notes". Many audio applications
incorporate the ideas of notes, examples being drum machines, sequencers, and
3D games with many one-shot sounds being triggered according to game play.

In a traditional software synthesizer, notes are dynamically allocated and
released from a pool of available resources. The note is allocated when a MIDI
note-on message is received. It is released when the note has finished playing,
either because it has reached the end of its sample-data (if non-looping),
because its envelope has reached a sustain phase of zero, or because a MIDI
note-off message has put it into the release phase of its envelope. In the MIDI
note-off case, the note is not released immediately, but only when the release
envelope phase has finished. At any given time, there can be a large number of
notes playing but the set of notes is constantly changing as new notes are
added into the routing graph, and old ones are released.

The audio system automatically deals with tearing-down the part of the
routing graph for individual "note" events. A "note" is represented by an
AudioBufferSourceNode, which can be directly connected to other
processing nodes. When the note has finished playing, the context will
automatically release the reference to the AudioBufferSourceNode,
which in turn will release references to any nodes it is connected to, and so
on. The nodes will automatically get disconnected from the graph and will be
deleted when they have no more references. Nodes in the graph which are
long-lived and shared between dynamic voices can be managed explicitly.
Although it sounds complicated, this all happens automatically with no extra
JavaScript handling required.

Example

The low-pass filter, panner, and second gain nodes are directly connected
from the one-shot sound. So when it has finished playing the context will
automatically release them (everything within the dotted line). If there are no
longer any JavaScript references to the one-shot sound and connected nodes,
then they will be immediately removed from the graph and deleted. The streaming
source has a global reference and will remain connected until it is explicitly
disconnected. Here's how it might look in JavaScript:
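(The following is an informative sketch; the decoded AudioBuffer dogBarkingBuffer and the audio element id are assumed for illustration.)

var context = new AudioContext();
var compressor = context.createDynamicsCompressor();
compressor.connect(context.destination);

// Long-lived streaming source, kept in a global reference.
var streamingSource = context.createMediaElementSource(
    document.getElementById("audioTag"));
var gainNode1 = context.createGain();
streamingSource.connect(gainNode1);
gainNode1.connect(compressor);

function playNote() {
    // One-shot sound and its per-voice processing chain. When playback
    // finishes and no JavaScript references remain, these nodes are
    // automatically released and removed from the graph.
    var oneShotSound = context.createBufferSource();
    oneShotSound.buffer = dogBarkingBuffer;

    var lowpass = context.createBiquadFilter();
    var panner = context.createPanner();
    var gainNode2 = context.createGain();

    oneShotSound.connect(lowpass);
    lowpass.connect(panner);
    panner.connect(gainNode2);
    gainNode2.connect(compressor);

    // Play 0.75 seconds from now (pass 0 to play immediately).
    oneShotSound.start(context.currentTime + 0.75);
}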

9. Channel up-mixing and down-mixing

This section is normative.

Mixer Gain Structure
describes how an input to an AudioNode can be connected from one or more outputs
of an AudioNode. Each of these connections from an output represents a stream with
a specific non-zero number of channels. An input has mixing rules for combining the channels
from all of the connections to it. As a simple example, if an input is connected from a mono output and
a stereo output, then the mono connection will usually be up-mixed to stereo and summed with
the stereo connection. But, of course, it's important to define the exact mixing rules for
every input to every AudioNode. The default mixing rules for all of the inputs have been chosen so that
things "just work" without worrying too much about the details, especially in the very common
case of mono and stereo streams. But the rules can be changed for advanced use cases, especially
multi-channel.

To define some terms, up-mixing refers to the process of taking a stream with a smaller
number of channels and converting it to a stream with a larger number of channels. down-mixing
refers to the process of taking a stream with a larger number of channels and converting it to a stream
with a smaller number of channels.

An AudioNode input uses three basic pieces of information to determine how to mix all the outputs
connected to it. As part of this process it computes an internal value computedNumberOfChannels
representing the actual number of channels of the input at any given time:

The AudioNode attributes involved in channel up-mixing and down-mixing rules are defined
above. The following is a more precise specification
of what each of them means.

channelCount is used to help compute computedNumberOfChannels.

channelCountMode determines how computedNumberOfChannels will be computed.
Once this number is computed, all of the connections will be up or down-mixed to that many channels. For most nodes,
the default value is "max".

“max”: computedNumberOfChannels is computed as the maximum of the number of channels of all connections.
In this mode channelCount is ignored.

“clamped-max”: same as “max” up to a limit of the channelCount

“explicit”: computedNumberOfChannels is the exact value as specified in channelCount

channelInterpretation determines how the individual channels will be treated.
For example, will they be treated as speakers having a specific layout, or will they
be treated as simple discrete channels? This value influences exactly how the up and down mixing is
performed. The default value is "speakers".
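The following informative sketch shows these attributes being set on a node, forcing its input to be treated as exactly two discrete channels (context is an assumed AudioContext):

// Informative sketch: overriding the default mixing behavior of an input.
var gain = context.createGain();
gain.channelCount = 2;
gain.channelCountMode = "explicit";      // computedNumberOfChannels = channelCount
gain.channelInterpretation = "discrete"; // no speaker-aware up/down-mixing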

11. Spatialization / Panning

Background

A common feature requirement for modern 3D games is the ability to
dynamically spatialize and move multiple audio sources in 3D space. Game audio
engines such as OpenAL, FMOD, Creative's EAX, Microsoft's XACT Audio, etc. have
this ability.

Using a PannerNode, an audio stream can be spatialized or
positioned in space relative to an AudioListener. An AudioContext will contain a
single AudioListener. Both panners and listeners have a position
in 3D space using a right-handed cartesian coordinate system.
The units used in the coordinate system are not defined, and do not need to be
because the effects calculated with these coordinates are independent/invariant
of any particular units such as meters or feet. PannerNode
objects (representing the source stream) have an orientation
vector representing in which direction the sound is projecting. Additionally,
they have a sound cone representing how directional the sound is.
For example, the sound could be omnidirectional, in which case it would be
heard anywhere regardless of its orientation, or it can be more directional and
heard only if it is facing the listener. AudioListener objects
(representing a person's ears) have an orientation and
up vector representing in which direction the person is facing.
Because both the source stream and the listener can be moving, they both have a
velocity vector representing both the speed and direction of
movement. Taken together, these two velocities can be used to generate a
doppler shift effect which changes the pitch.

During rendering, the PannerNode calculates an azimuth
and elevation. These values are used internally by the implementation in
order to render the spatialization effect. See the Panning Algorithm section
for details of how these values are used.

The following algorithm must be used to calculate the azimuth
and elevation:

This requires a set of HRTF impulse responses recorded at a variety of
azimuths and elevations. There are a small number of open/free impulse
responses available. The implementation requires a highly optimized
convolution function. It is somewhat more costly than "equal-power", but
provides a more spatialized sound.

Distance Effects

Sounds which are closer are louder, while sounds further away are quieter.
Exactly how a sound's volume changes according to distance from the listener
depends on the distanceModel attribute.

During audio rendering, a distance value will be calculated based on the panner and listener positions according to:

v = panner.position - listener.position

distance = sqrt(dot(v, v))

distance will then be used to calculate distanceGain which depends
on the distanceModel attribute. See the distanceModel section for details of
how this is calculated for each distance model.

As part of its processing, the PannerNode scales/multiplies the input audio signal by distanceGain
to make distant sounds quieter and nearer ones louder.
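The following informative sketch positions the listener and a source so that the distance computed above is 5, assuming an AudioContext named context and a source node named source:

// Informative sketch: distance between a panner and the listener.
var panner = context.createPanner();
source.connect(panner);
panner.connect(context.destination);

// Place the listener at the origin and the source 5 units away along
// the x-axis; the computed distance is then sqrt(5*5) = 5, and the
// resulting distanceGain depends on the chosen distanceModel.
context.listener.setPosition(0, 0, 0);
panner.setPosition(5, 0, 0);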

Sound Cones

The listener and each sound source have an orientation vector describing
which way they are facing. Each sound source's sound projection characteristics
are described by an inner and outer "cone" describing the sound intensity as a
function of the source/listener angle from the source's orientation vector.
Thus, a sound source pointing directly at the listener will be louder than if
it is pointed off-axis. Sound sources can also be omni-directional.

The following algorithm must be used to calculate the gain contribution due
to the cone effect, given the source (the PannerNode) and the listener:

Doppler Shift

The following algorithm must be used to calculate the doppler shift value which is used
as an additional playback rate scalar for all AudioBufferSourceNodes connecting directly or
indirectly to the PannerNode:

12. Linear Effects using Convolution

Background

Convolution is a
mathematical process which can be applied to an audio signal to achieve many
interesting high-quality linear effects. Very often, the effect is used to
simulate an acoustic space such as a concert hall, cathedral, or outdoor
amphitheater. It can also be used for complex filter effects, like a muffled
sound coming from inside a closet, sound underwater, sound coming through a
telephone, or playing through a vintage speaker cabinet. This technique is very
commonly used in major motion picture and music production and is considered to
be extremely versatile and of high quality.

Each unique effect is defined by an impulse response. An
impulse response can be represented as an audio file and can be recorded from a real acoustic
space such as a cave, or can be synthetically generated through a great variety
of techniques.

Motivation for use as a Standard

A key feature of many game audio engines (OpenAL, FMOD, Creative's EAX,
Microsoft's XACT Audio, etc.) is a reverberation effect for simulating the
sound of being in an acoustic space. But the code used to generate the effect
has generally been custom and algorithmic (generally using a hand-tweaked set
of delay lines and allpass filters which feedback into each other). In nearly
all cases, not only is the implementation custom, but the code is proprietary
and closed-source, each company adding its own "black magic" to achieve its
unique quality. Each implementation being custom with a different set of
parameters makes it impossible to achieve a uniform desired effect. And the
code being proprietary makes it impossible to adopt a single one of the
implementations as a standard. Additionally, algorithmic reverberation effects
are limited to a relatively narrow range of different effects, regardless of
how the parameters are tweaked.

A convolution effect solves these problems by using a very precisely defined
mathematical algorithm as the basis of its processing. An impulse response
represents an exact sound effect to be applied to an audio stream and is easily
represented by an audio file which can be referenced by URL. The range of
possible effects is enormous.

Implementation Guide

Linear convolution can be implemented efficiently.
Here are some notes
describing how it can be practically implemented.

Reverb Effect (with matrixing)

This section is normative.

In the general case the source
has N input channels, the impulse response has K channels, and the playback
system has M output channels. Thus it's a matter of how to matrix these
channels to achieve the final result.

The subset of N, M, K below must be implemented (note that the first image in the diagram is just illustrating
the general case and is not normative, while the following images are normative).
Without loss of generality, developers desiring more complex and arbitrary matrixing can use multiple ConvolverNode
objects in conjunction with a ChannelMergerNode.

Single channel convolution operates on a mono audio input, using a mono
impulse response, and generating a mono output. But to achieve a more spacious sound, 2 channel audio
inputs and 1, 2, or 4 channel impulse responses will be considered. The following diagram illustrates the
common cases for stereo playback where N and M are 1 or 2 and K is 1, 2, or 4.

Recording Impulse Responses

This section is informative.

The most modern and
accurate way to record the impulse response of a real acoustic space is to use
a long exponential sine sweep. The test-tone can be as long as 20 or 30
seconds, or longer.
Several recordings of the test tone played through a speaker can be made with
microphones placed and oriented at various positions in the room. It's
important to document speaker placement/orientation, the types of microphones,
their settings, placement, and orientations for each recording taken.

Post-processing is required for each of these recordings by performing an
inverse-convolution with the test tone, yielding the impulse response of the
room with the corresponding microphone placement. These impulse responses are
then ready to be loaded into the convolution reverb engine to re-create the
sound of being in the room.

Tools

Two command-line tools have been written: generate_testtones generates an exponential sine-sweep test-tone
and its inverse. Another tool convolve was written for
post-processing. With these tools, anybody with recording equipment can record
their own impulse responses. To test the tools in practice, several recordings
were made in a warehouse space with interesting acoustics. These were later
post-processed with the command-line tools.

Recording Setup

Audio Interface: Metric Halo Mobile I/O 2882

Microphones: AKG 414s, Speaker: Mackie HR824

The Warehouse Space

13. JavaScript Synthesis and Processing

This section is informative.

The Mozilla project has conducted experiments to synthesize
and process audio directly in JavaScript. This approach is interesting for a
certain class of audio processing and they have produced a number of impressive
demos. This specification includes a means of synthesizing and processing
directly using JavaScript by using a special subtype of AudioNode called ScriptProcessorNode.
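The following informative sketch applies a simple per-sample gain in JavaScript, assuming an AudioContext named context and a source node named source:

// Informative sketch: direct JavaScript processing with a ScriptProcessorNode.
var processor = context.createScriptProcessor(4096, 1, 1);

processor.onaudioprocess = function(event) {
    var input = event.inputBuffer.getChannelData(0);
    var output = event.outputBuffer.getChannelData(0);
    for (var i = 0; i < input.length; ++i) {
        output[i] = 0.5 * input[i]; // halve the amplitude of each sample
    }
};

source.connect(processor);
processor.connect(context.destination);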

Here are some interesting examples where direct JavaScript processing can be
useful:

Custom DSP Effects

Unusual and interesting custom audio processing can be done directly in JS.
It's also a good test-bed for prototyping new algorithms. This is an extremely
rich area.

Educational Applications

JS processing is ideal for illustrating concepts in computer music synthesis
and processing, such as showing the decomposition of a square wave into its
harmonic components, FM synthesis techniques, etc.

JavaScript Performance

JavaScript has a variety of performance issues so it is not
suitable for all types of audio processing. The approach proposed in this
document includes the ability to perform computationally intensive aspects of
the audio processing (too expensive for JavaScript to compute in real-time)
such as multi-source 3D spatialization and convolution in optimized C++ code.
Both direct JavaScript processing and C++ optimized code can be combined due to
the API's modular approach.

15. Performance Considerations

15.1. Latency: What it is and Why it's Important

For web applications, the time delay between mouse and keyboard events
(keydown, mousedown, etc.) and a sound being heard is important.

This time delay is called latency and is caused by several factors (input
device latency, internal buffering latency, DSP processing latency, output
device latency, distance of user's ears from speakers, etc.), and is
cumulative. The larger this latency is, the less satisfying the user's
experience is going to be. In the extreme, it can make musical production or
game-play impossible. At moderate levels it can affect timing and give the
impression of sounds lagging behind or the game being non-responsive. For
musical applications the timing problems affect rhythm. For gaming, the timing
problems affect precision of gameplay. For interactive applications, it
generally cheapens the user's experience much in the same way that very low
animation frame-rates do. Depending on the application, a reasonable latency
can be from as low as 3-6 milliseconds to 25-50 milliseconds.

15.2. Audio Glitching

Audio glitches are caused by an interruption of the normal continuous audio
stream, resulting in loud clicks and pops. It is considered to be a
catastrophic failure of a multi-media system and must be avoided. It can be
caused by problems with the threads responsible for delivering the audio stream
to the hardware, such as scheduling latencies caused by threads not having the
proper priority and time-constraints. It can also be caused by the audio DSP
trying to do more work than is possible in real-time given the CPU's speed.

15.3. Hardware Scalability

The system should gracefully degrade to allow audio processing under
resource constrained conditions without dropping audio frames.

First of all, it should be clear that regardless of the platform, the audio
processing load should never be enough to completely lock up the machine.
Second, the audio rendering needs to produce a clean, un-interrupted audio
stream without audible glitches.

The system should be able to run on a range of hardware, from mobile phones
and tablet devices to laptop and desktop computers. But the more limited
compute resources on a phone device make it necessary to consider techniques to
scale back and reduce the complexity of the audio rendering. For example,
voice-dropping algorithms can be implemented to reduce the total number of
notes playing at any given time.

Here's a list of some techniques which can be used to limit CPU usage:

15.3.1. CPU monitoring

In order to avoid audio breakup, CPU usage must remain below 100%.

The relative CPU usage can be dynamically measured for each AudioNode (and
chains of connected nodes) as a percentage of the rendering time quantum. In a
single-threaded implementation, overall CPU usage must remain below 100%. The
measured usage may be used internally in the implementation for dynamic
adjustments to the rendering. It may also be exposed through a
cpuUsage attribute of AudioNode for use by
JavaScript.

In cases where the measured CPU usage is near 100% (or whatever threshold is
considered too high), then an attempt to add additional AudioNodes
into the rendering graph can trigger voice-dropping.

15.3.2. Voice Dropping

Voice-dropping is a technique which limits the number of voices (notes)
playing at the same time to keep CPU usage within a reasonable range. There can
either be an upper threshold on the total number of voices allowed at any given
time, or CPU usage can be dynamically monitored and voices dropped when CPU
usage exceeds a threshold. Or a combination of these two techniques can be
applied. When CPU usage is monitored for each voice, it can be measured all the
way from a source node through any effect processing nodes which apply
uniquely to that voice.

When a voice is "dropped", it needs to happen in such a way that it doesn't
introduce audible clicks or pops into the rendered audio stream. One way to
achieve this is to quickly fade-out the rendered audio for that voice before
completely removing it from the rendering graph.

When it is determined that one or more voices must be dropped, there are
various strategies for picking which voice(s) to drop out of the total ensemble
of voices currently playing. Here are some of the factors which can be used in
combination to help with this decision:

Older voices, which have been playing the longest can be dropped instead
of more recent voices.

Quieter voices, which are contributing less to the overall mix may be
dropped instead of louder ones.

Voices which are consuming relatively more CPU resources may be dropped
instead of less "expensive" voices.

An AudioNode can have a priority attribute to help determine
the relative importance of the voices.

15.3.3. Simplification of Effects
Processing

Most of the effects described in this document are relatively inexpensive
and will likely be able to run even on the slower mobile devices. However, the
convolution effect can be configured with
a variety of impulse responses, some of which will likely be too heavy for
mobile devices. Generally speaking, CPU usage scales with the length of the
impulse response and the number of channels it has. Thus, it is reasonable to
consider that impulse responses which exceed a certain length will not be
allowed to run. The exact limit can be determined based on the speed of the
device. Instead of outright rejecting convolution with these long responses, it
may be interesting to consider truncating the impulse responses to the maximum
allowed length and/or reducing the number of channels of the impulse response.

In addition to the convolution effect, the PannerNode may also be
expensive if using the HRTF panning model. For slower devices, a cheaper
algorithm such as EQUALPOWER can be used to conserve compute resources.

15.3.4. Sample Rate

For very slow devices, it may be worth considering running the rendering at
a lower sample-rate than normal. For example, the sample-rate can be reduced
from 44.1 kHz to 22.05 kHz. This decision must be made when the
AudioContext is created, because changing the sample-rate
on-the-fly can be difficult to implement and will result in audible glitching
when the transition is made.

15.3.5. Pre-flighting

It should be possible to invoke some kind of "pre-flighting" code (through
JavaScript) to roughly determine the power of the machine. The JavaScript code
can then use this information to scale back any more intensive processing it
may normally run on a more powerful machine. Also, the underlying
implementation may be able to factor in this information in the voice-dropping
algorithm.

TODO: add specification and more detail here

15.3.6. Authoring for different
user agents

JavaScript code can use information about user-agent to scale back any more
intensive processing it may normally run on a more powerful machine.

15.3.7. Scalability of
Direct JavaScript Synthesis / Processing

Any audio DSP / processing code done directly in JavaScript should also be
concerned about scalability. To the extent possible, the JavaScript code itself
needs to monitor CPU usage and scale back any more ambitious processing when
run on less powerful devices. If it's an "all or nothing" type of processing,
then a user-agent check or pre-flighting should be done to avoid generating an
audio stream with audio breakup.

15.4. JavaScript Issues with real-time
Processing and Synthesis:

While processing audio in JavaScript, it is extremely challenging to get
reliable, glitch-free audio while achieving a reasonably low-latency,
especially under heavy processor load.

JavaScript is very much slower than heavily optimized C++ code and is not
able to take advantage of SSE optimizations and multi-threading, which are
critical for getting good performance on today's processors. Optimized
native code can be on the order of twenty times faster for processing FFTs
as compared with JavaScript. It is not efficient enough for heavy-duty
processing of audio such as convolution and 3D spatialization of large
numbers of audio sources.

setInterval() and XHR handling will steal time from the audio processing.
In a reasonably complex game, some JavaScript resources will be needed for
game physics and graphics. This creates challenges because audio rendering
is deadline driven (to avoid glitches and get low enough latency).

JavaScript does not run in a real-time processing thread and thus can be
pre-empted by many other threads running on the system.

16. Example Applications

Here are some of the types of applications a web audio system should be able
to support:

Basic Sound Playback

Simple and low-latency
playback of sound effects in response to simple user actions such as mouse
click, roll-over, key press.

3D Environments and Games

Electronic Arts has produced an impressive immersive game called
Strike Fortress,
taking advantage of 3D spatialization and convolution for room simulation.

3D environments with audio are common in games made for desktop applications
and game consoles. Imagine a 3D island environment with spatialized audio,
seagulls flying overhead, the waves crashing against the shore, the crackling
of the fire, the creaking of the bridge, and the rustling of the trees in the
wind. The sounds can be positioned naturally as one moves through the scene.
Even going underwater, low-pass filters can be tweaked for just the right
underwater sound.

Box2D is an interesting open-source
library for 2D game physics. It has various implementations, including one
based on Canvas 2D. A demo has been created with dynamic sound effects for each
of the object collisions, taking into account the velocity vectors and
positions to spatialize the sound events, and modulate audio effect parameters
such as filter cutoff.

A virtual pool game with multi-sampled sound effects has also been created.

Musical Applications

Many music composition and production applications are possible. Applications
requiring tight scheduling of audio events can be implemented and can be both
educational and entertaining. Drum machines, digital DJ applications, and even
timeline-based digital music production software with some of the features of
GarageBand can be
written.

Music Visualizers

When combined with WebGL GLSL shaders, realtime analysis data can be presented
in entertaining ways. These can be as advanced as any found in iTunes.

Educational Applications

A variety of educational applications can be written, illustrating concepts
in music theory and computer music synthesis and processing.

Artistic Audio Exploration

There are many creative possibilities for artistic sonic environments for
installation pieces.

17. Security Considerations

This section is informative.

18. Privacy Considerations

This section is informative.

When giving various information on available AudioNodes, the Web Audio API
potentially exposes information on characteristic features of the client (such
as audio hardware sample-rate) to any page that makes use of the AudioNode
interface. Additionally, timing information can be collected through the
RealtimeAnalyserNode or ScriptProcessorNode interfaces. The information could
subsequently be used to create a fingerprint of the client.

Currently audio input is not specified in this document, but it will involve
gaining access to the client machine's audio input or microphone. This will
require asking the user for permission in an appropriate way, probably via the
getUserMedia()
API.