Sound
is a travelling wave which is an oscillation of pressure transmitted
through a solid, liquid, or gas, composed of frequencies within the
range of hearing and of a level sufficiently strong to be heard, or the
sensation stimulated in organs of hearing by such vibrations. For
humans, hearing is normally limited to frequencies between about 12 Hz
and 20,000 Hz (20 kHz), although these limits are not definite. The
upper limit generally decreases with age. Sound waves are characterized
by the properties of waves, which are frequency, wavelength, period,
amplitude, intensity, speed, and direction (sometimes speed and
direction are combined as a velocity vector, or wavelength and
direction are combined as a wave vector). Transverse waves, also known
as shear waves, have an additional property of polarization. The atmosphere that surrounds us plays a big part in our perception of sound from moment to moment. If the atmospheric pressure around us remains steady or changes very little, we hear silence. If atmospheric changes disturb this pressure, or objects nearby are in motion, the result is anything from faint audible noise to deafening thunder. To conclude, all sounds that we know of are created by atmospheric changes, or by objects in motion vibrating the atmospheric pressure that surrounds us. A wild form of synthesis would
be to change the barometric pressure inside a symphony hall while the
orchestra was playing. Let me know if anybody can figure out how to
make this happen.

A Musical Sound: Tone Definition

All musical tones have a complex waveform, made up of many
different frequencies. All sounds are formed using a combination of
sine waves at varying frequencies and amplitudes. If we look at the
frequencies of a complex waveform, then the lowest frequency is called
the fundamental frequency. The fundamental frequency determines the
pitch of the sound. The higher frequencies are called overtones. If the overtones are whole-number multiples (x1, x2, x3, etc.) of the fundamental frequency, they are called harmonics. The overtones, or upper partials as some people refer to them, must be multiples of the fundamental to be known as harmonics. These frequencies and their amplitudes
determine the timbre of a sound.

If you have a waveform with a fundamental frequency of 100 Hz, then the second harmonic will be 200 Hz, the third harmonic 300 Hz, and so on.

If you think about the irregular waveform of
noise then you will understand that it has no harmonics. Noise, as we
discussed earlier, contains a wide band of frequencies and it is
generally accepted that, at the waveform level, there are no harmonics as
the waveform is non-repeating.

Harmonics are essential when it
comes to synthesis. In acoustics and telecommunication, a harmonic of a
wave is a component frequency of the signal that is an integer multiple
of the fundamental frequency. For example, if the fundamental frequency
is f, the harmonics have frequencies f, 2f, 3f, 4f, etc. The harmonics
have the property that they are all periodic at the fundamental
frequency; therefore the sum of harmonics is also periodic at that
frequency. Harmonic frequencies are equally spaced by the width of the
fundamental frequency and can be found by repeatedly adding that
frequency. For example, if the fundamental frequency is 25 Hz, the
frequencies of the harmonics are: 25 Hz, 50 Hz, 75 Hz, 100 Hz, etc. If
we use a simple metaphor like a singing quartet, the lead vocalist
would be the fundamental frequency, while the other three vocalists are
the harmonics, harmonizing with the lead vocalist.
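As a quick illustration, here is a minimal Python sketch (assuming NumPy is available; the frequencies and amplitudes are example values) that sums a fundamental and its first few harmonics into one complex waveform:

```python
import numpy as np

SR = 44100                       # sample rate in Hz
f0 = 100.0                       # fundamental frequency (example value)
t = np.arange(SR) / SR           # one second of time points

# Harmonics sit at integer multiples of the fundamental: f0, 2*f0, 3*f0, ...
harmonics = [1, 2, 3, 4]
amplitudes = [1.0, 0.5, 0.33, 0.25]   # arbitrary amplitudes; these shape the timbre

# Sum the sine components into one complex (but still periodic) waveform.
wave = sum(a * np.sin(2 * np.pi * n * f0 * t)
           for n, a in zip(harmonics, amplitudes))
wave /= np.max(np.abs(wave))     # normalize to avoid clipping

print([n * f0 for n in harmonics])    # -> [100.0, 200.0, 300.0, 400.0]
```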

How to Make a Sound

One of the most significant developments in the design of analog and digital sound synthesis techniques was the concept of unit generators (UGs). UGs are signal-processing modules, such as oscillators, filters and amplifiers, which can be interconnected to form synthesis instruments, or patched together by wire to generate sound signals in a circuit-style scheme.
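To make the idea concrete, here is a minimal sketch (my own illustration, not Mathews' original code) of unit generators as composable functions: each UG produces or transforms a signal, and an instrument is built by patching them together:

```python
import numpy as np

SR = 44100

def osc(freq, seconds=1.0):
    """Oscillator UG: produces a sine wave signal."""
    t = np.arange(int(SR * seconds)) / SR
    return np.sin(2 * np.pi * freq * t)

def gain(signal, amount):
    """Amplifier UG: scales the signal it receives."""
    return signal * amount

def mix(*signals):
    """Mixer UG: sums several signals into one."""
    return np.sum(signals, axis=0)

# "Patch" the unit generators together to form a simple instrument.
out = gain(mix(osc(220.0), gain(osc(440.0), 0.5)), 0.8)
```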

The Oscillator

An oscillator creates a single periodic waveform at a certain frequency. In other words, the oscillator is an electronic device used to generate a tone. In analogue synthesis, the Osc generates a sound or waveform, usually a sine, saw, triangle, square or pulse wave, and it generates this waveform continuously. The rate at which it repeats one cycle is what we perceive as pitch, and this is measured in Hz. On analogue synthesizers the Osc is referred to as a VCO (Voltage Controlled Oscillator); on modern synthesizers that incorporate digital processing, as a DCO (Digitally Controlled Oscillator); on samplers the sources can be referred to as samples, voices, & waveforms.
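As a sketch (assuming NumPy; the waveform math is the standard textbook form, not tied to any particular synth), here is how the classic oscillator shapes can be generated digitally:

```python
import numpy as np

SR = 44100

def oscillator(freq, shape="sine", seconds=1.0):
    """Generate one of the classic analogue waveforms at a given pitch."""
    t = np.arange(int(SR * seconds)) / SR
    phase = (freq * t) % 1.0               # normalized phase, 0..1 per cycle
    if shape == "sine":
        return np.sin(2 * np.pi * phase)
    if shape == "saw":
        return 2.0 * phase - 1.0           # ramps -1..1 once per cycle
    if shape == "square":
        return np.where(phase < 0.5, 1.0, -1.0)
    if shape == "triangle":
        return 2.0 * np.abs(2.0 * phase - 1.0) - 1.0
    raise ValueError(shape)

a440 = oscillator(440.0, "saw")            # a saw wave at concert-pitch A
```

(Naive shapes like these alias at high pitches; real digital synths band-limit their waveforms.)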

Filter

A filter allows the cutoff frequency and Q factor applied to a soundwave to be continuously varied. Usually the filter gives a lowpass response, but it may also be switchable to allow highpass, bandpass or even notch responses. The filter may offer a switchable slope, which determines how steeply signals outside the passband are attenuated, usually 12 dB/octave (a '2-pole' filter) or 24 dB/octave (a '4-pole' filter).
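A minimal digital illustration (my own sketch; real VCFs are analogue circuits, and commercial digital filters are more sophisticated): a one-pole lowpass attenuates at about 6 dB/octave, so cascading two or four such stages approximates the 2-pole (12 dB/oct) and 4-pole (24 dB/oct) responses mentioned above:

```python
import numpy as np

SR = 44100

def one_pole_lowpass(signal, cutoff_hz):
    """One-pole lowpass (~6 dB/octave rolloff above the cutoff)."""
    # Standard coefficient for a one-pole smoother at this cutoff.
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / SR)
    out = np.empty_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += a * (x - y)          # move a fraction of the way toward the input
        out[i] = y
    return out

def lowpass_24db(signal, cutoff_hz):
    """Cascade four poles for a 24 dB/octave ('4-pole') slope."""
    for _ in range(4):
        signal = one_pole_lowpass(signal, cutoff_hz)
    return signal
```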

In
analog synthesizers, which are commonly used to make electronic music, VCFs are usually positioned after the oscillator(s). The oscillator
generates an audio waveform, which (except for noise waveforms)
includes a fundamental pitch and a series of harmonic overtones. By
varying the cutoff frequency (the maximum frequency passed by the
filter), the synth operator can add or remove some of the overtones to
create more interesting and textured sounds.

In
electronic
music, "filter sweeps" have become a common effect. These sweeps are
created by varying the cutoff frequency of the VCF (sometimes very
slowly) to reveal or conceal the oscillator's overtones. Controlling
the cutoff by means of an envelope generator, especially with
relatively fast attack settings, simulates the attack transients of
natural or acoustic instruments.
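A filter sweep is easy to sketch in code (a minimal illustration with a one-pole lowpass whose cutoff moves over time; all values are examples):

```python
import numpy as np

SR = 44100
t = np.arange(2 * SR) / SR
saw = 2.0 * ((110.0 * t) % 1.0) - 1.0           # raw saw wave, rich in overtones

cutoffs = np.linspace(200.0, 8000.0, len(saw))  # slow upward sweep
out = np.empty_like(saw)
y = 0.0
for i in range(len(saw)):
    a = 1.0 - np.exp(-2.0 * np.pi * cutoffs[i] / SR)  # per-sample coefficient
    y += a * (saw[i] - y)                             # one-pole lowpass step
    out[i] = y                                        # overtones gradually revealed
```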

A
VCF is an example of
an
active non-linear filter: however, if its control voltage is kept
constant, it will behave as a linear filter.

Envelope Generator (ADSR: Attack, Decay, Sustain & Release)

The main purpose of the envelope generator is to control the attack of the tone, its decay from the high point of the initial attack, the sustain of the note as it decays, and lastly the final fading out, or release, of the note. The envelope generator gives the generated tone character and molds it into a more musically usable note. Synthesizers usually have two or more envelope generators: one will control the VCF and the other the VCA. When used with the VCA, the EG varies the volume of a sound to create the natural dynamic movement of a sound. When used with the VCF, it can change the timbre of a sound over time by controlling the cut-off frequency of the VCF.
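Here is a minimal ADSR sketch (the segment times and the linear segment shapes are my own simplifications; hardware EGs are usually exponential):

```python
import numpy as np

SR = 44100

def adsr(attack, decay, sustain_level, sustain_time, release):
    """Build an ADSR envelope as an array of gain values (0..1)."""
    a = np.linspace(0.0, 1.0, int(SR * attack))             # rise to peak
    d = np.linspace(1.0, sustain_level, int(SR * decay))    # fall to sustain
    s = np.full(int(SR * sustain_time), sustain_level)      # hold while key down
    r = np.linspace(sustain_level, 0.0, int(SR * release))  # fade to silence
    return np.concatenate([a, d, s, r])

env = adsr(attack=0.01, decay=0.2, sustain_level=0.6,
           sustain_time=0.5, release=0.8)
# Multiplied into a tone of the same length, env shapes its volume over time.
```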

Amplifier

A
voltage-controlled amplifier is an electronic amplifier that varies its
gain depending on a control voltage (often abbreviated CV).

VCAs
have many
applications, including audio level compression, synthesizers, and
amplitude modulation.

A
crude example is a typical inverting op-amp configuration with a
light-dependent resistor (LDR) in the feedback loop. The gain of the
amplifier then depends on the light falling on the LDR, which can be
provided by an LED (an optocoupler). The gain of the amplifier is then
controllable by the current through the LED. This is similar to the
circuits used in optical audio compressors.

A
voltage-controlled
amplifier can be realised by first creating a voltage-controlled
resistor (VCR), which is used to set the amplifier gain. The VCR is one
of the numerous interesting circuit elements that can be produced by
using a JFET (junction field-effect transistor) with simple biasing.
VCRs manufactured in this way can be obtained as discrete devices.

In
audio applications logarithmic gain control is used to emulate how the
ear hears loudness. David E. Blackmer's dbx 202 VCA was among the first
successful implementations of a logarithmic VCA.
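A sketch of the idea in code (the dB-per-volt scaling is a hypothetical example value, not the dbx 202's actual control law):

```python
import numpy as np

def vca(signal, control_volts, db_per_volt=-6.0):
    """Logarithmic VCA: each control volt changes gain by a fixed number of dB."""
    gain_db = control_volts * db_per_volt        # e.g. 2 V -> -12 dB
    gain_linear = 10.0 ** (gain_db / 20.0)       # convert dB to a linear factor
    return signal * gain_linear

# control_volts can itself be a signal (an envelope), giving time-varying gain.
```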

The Father of Modern Synthesis

Max Vernon Mathews, born November 13, 1926, in Columbus, Nebraska, is a pioneer in the
world of computer music & synthesis. He studied electrical
engineering at the California Institute of Technology and the
Massachusetts Institute of Technology, receiving a Sc.D. in 1954.
Working at Bell Labs, Mathews wrote MUSIC, the first widely-used
program for sound generation, in 1957. In my opinion he is the father of modern-day synthesis and its practice through the use of electrical and digital devices. Since the public presentation of his initial developments, dozens of synthesis techniques have been developed and exploited both commercially and in performance.

To start this online work on synthesis without getting too caught up in the historical, let's begin by understanding sound and the most current & valid approaches to sound synthesis through the manipulation of electronic devices and computers. As stated, there are many forms of sound synthesis, and not all of them are electrical or related to music. My life has been consumed by computer technologies, electrical components and, of course, music. I am going to skip delving into the works of Max Mathews and take a more current approach to the practice of modern synthesis and its applicable use in my own creative musical process.

The Development of Different Types of Synthesis

The first synthesis approach to make use of these unit generator concepts was the MUSIC III program by Max Mathews and Joan Miller in 1960. By passing a sound signal through a series of various UGs, a large number of synthesis algorithms were developed and implemented. I have never found a document clarifying which algorithms were arrived at during the first executions of the MUSIC III program, so I will skip ahead and start with a more current collection of concepts that make up our modern methods of synthesis.

Methods of Synthesis Used by Me

Of the forms of synthesis that are practiced in audio & music, the rest of this page is dedicated to the types that can be found in the equipment I use and in the music I compose. There are many forms of audio synthesis, and with the introduction of VSTs and soft synths, many new or enhanced forms of older synthesis practices are currently being developed. Creative electronic musicians will undoubtedly favor one method of synthesis or another and continue to develop exciting new forms of audio. As for me, well, I am a hardware musician and rely on a hands-on approach to making my music. While composing, performing or recording I prefer to be in direct contact with my equipment, rather than tweaking a software application.

Synthesis Techniques (BASIC)

Subtractive Synthesis: filtering of complex sounds to shape the harmonic spectrum.

Frequency Modulation (FM): modulating a carrier wave with one or more operators.

Wavetable Synthesis: varying the playback speed of recorded, digitized waveforms.

Wave Sequencing: linear combinations of several small segments to create a new sound.

Vector Synthesis: a technique for fading between any number of different sound sources.

Synthesis Techniques (ADVANCED)

Granular Synthesis: combining many small sound segments into a new sound.

Pulsar Synthesis: based on the generation of trains of sonic particles.

Filter Synthesis: transitional filtering, as in the E-mu Z-plane filter.

Physical Modelling: mathematical equations of the acoustic characteristics of sound.

Digital Waveguide Synthesis: waves guided by algorithms and formulas.

Waveset Distortion: irreversibly altering wavesets in a sound through harmonic distortions.

Synthesis Techniques (OTHER)

Sampling: using recorded sounds as sound sources subject to modification.

Composite Synthesis: using artificial and sampled sounds to establish a resultant "new" sound.

Phase Distortion: altering the speed of waveforms stored in wavetables during playback.

Waveshaping: intentional distortion of a signal to produce a modified result.

Resynthesis: modification of digitally sampled sounds before playback.

Direct Digital Synthesis: computer modification of generated waveforms (software synthesis).

There are many other forms of synthesis going on around us & in nature. The synthesis concepts stated here are those that apply to sound & audio in the application of music. If your interest runs deeper than this, I suggest reading up on the various forms of synthesis being applied to human and animal vocal patterns, and, further still, the research being done on environmental and astronomical sound waves.

Synthesis Techniques (BASIC)

Subtractive Synthesis

This process involves generating complex waveforms and then filtering out frequencies so that you are left with the sound you want: you take frequencies away. Obviously the filters are crucial in subtractive synthesis, and the better the filters and the wider the choice of filters available, the better the end result will be.

The basic components (UGs) needed for subtractive synthesis are as follows: an oscillator or audio wave source, a filter, an envelope generator & an amplifier. Let's take a moment to understand and break these components down.

In subtractive synthesis we use an oscillator that creates a tone, run it through a filter to remove the frequencies we don't want, and then adjust the volume of the sound over a period of time using the amplifier (shaping). I don't want to get into the electronics of the signal path of an analogue synthesizer or how a synthesizer works. Just remember that when synthesizers were invented they worked off a voltage path, and the voltage was controlled throughout the components. The tone is sent to the filter, and in most patch schemes an envelope shapes the filter. If the tone is sent directly to the amplifier, the ADSR shapes the volume. The hard wiring and circuits of these synthesizers were made so that one EG went to the filter, which was controlled by voltage of course, and the second went to the amplifier, which was controlled by voltage as well. The CV input of the filter controls the voltage at the input stage, and the same kind of CV input is used on the amplifier. Varying the voltages at any of these input stages alters the shape or filter characteristics of the sound. Today most synthesizers use digital signal processing to simplify subtractive synthesis for the musician.
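Putting the pieces together, here is a minimal subtractive patch in code (a sketch of the classic VCO -> VCF -> VCA routing, assuming NumPy; every parameter value is just an example):

```python
import numpy as np

SR = 44100
DUR = 1.5
t = np.arange(int(SR * DUR)) / SR

# VCO: a saw wave is rich in harmonics, good raw material for subtraction.
saw = 2.0 * ((110.0 * t) % 1.0) - 1.0

# EG: simple linear ADSR, used twice: once for the filter, once for the amp.
def adsr(a, d, s_level, s_time, r):
    seg = lambda x0, x1, secs: np.linspace(x0, x1, int(SR * secs))
    return np.concatenate([seg(0, 1, a), seg(1, s_level, d),
                           np.full(int(SR * s_time), s_level),
                           seg(s_level, 0, r)])

env = adsr(0.02, 0.3, 0.5, 0.6, 0.58)
env = np.pad(env, (0, max(0, len(saw) - len(env))))[:len(saw)]

# VCF: one-pole lowpass whose cutoff is swept by the envelope.
out = np.empty_like(saw)
y = 0.0
for i in range(len(saw)):
    cutoff = 200.0 + 4000.0 * env[i]               # envelope opens the filter
    a_coef = 1.0 - np.exp(-2.0 * np.pi * cutoff / SR)
    y += a_coef * (saw[i] - y)
    out[i] = y

# VCA: the second envelope shapes the volume.
out *= env
```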

Frequency Modulation (FM)

The output of one oscillator (the modulator) is used to modulate the frequency of another oscillator (the carrier). These oscillators are called operators. FM synthesizers usually have 4 or 6 operators. Algorithms are predetermined combinations of routings of modulators and carriers.
To really explain this we need to go into harmonics, sidebands,
non-coincident and coincident series and the relationships between
modulators and carriers.

John
M. Chowning is known for having discovered the FM synthesis algorithm
in 1967. In FM (frequency modulation) synthesis, both the carrier
frequency and the modulation frequency are within the audio band. In
essence, the amplitude and frequency of one waveform modulates the
frequency of another waveform producing a resultant waveform that can
be periodic or non-periodic depending upon the ratio of the two
frequencies.
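In code, simple two-operator FM looks like this (a sketch of the basic Chowning idea, assuming NumPy; the 2:1 C:M ratio and the modulation index are example values):

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR

carrier_freq = 440.0     # Cf, in the audio band
mod_freq = 220.0         # Mf, also in the audio band (a 2:1 C:M ratio)
index = 3.0              # modulation index: depth of the frequency swing

# The modulator's output varies the carrier's instantaneous frequency.
modulator = np.sin(2 * np.pi * mod_freq * t)
fm = np.sin(2 * np.pi * carrier_freq * t + index * modulator)
```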

Chowning's
breakthrough
allowed for simple yet rich
sounding timbres, which synthesized 'metal striking' or 'bell like'
sounds, and which seemed incredibly similar to real percussion. He
spent six years turning his breakthrough into a system of musical
importance and eventually was able to simulate a large number of
musical sounds, including the singing voice. In 1973 Stanford
University licensed Chowning's discovery to Yamaha in Japan, with whom
Chowning worked in developing a family of synthesizers and electronic
organs.

The
first product to incorporate the FM algorithm was
Yamaha's GS1, a large, piano-sized digital synthesizer that first
shipped in 1981. Some
thought it too expensive at the time, Chowning included. Soon after,
Yamaha made their first commercially successful digital FM
synthesizers, the DX series. Along this line, it is my personal opinion that Chowning had more to do with the DX1 series than with the commercially modified DX5 and the even more modified DX7. These modifications moved away from a hands-on synth like the DX1 in favor of the complicated programming approach found on the DX7.

Chowning
FM Theory uses the basic sine wave as both the carrier and modulating
waveform. One of the strengths of FM is the ability to do a lot with two very simple waves. Chowning FM modulations are linear, whereby the carrier is pushed an equal number of cycles per second above and below its center frequency. In exponential FM, the carrier is pushed up and down by an equal musical interval (and therefore more Hz up than down), drifting upward in pitch as the modulation depth is increased. Linear FM allows the strength of modulation to be increased without the perceived center frequency rising.

The Yamaha DX series of synthesizers was built around Chowning's principles of audio-rate frequency modulation and his calculations based on the C:M ratio, which relates the carrier frequency (Cƒ) to the modulating frequency (Mƒ); sidebands are produced at Cƒ plus and minus all the integer multiples of Mƒ.
This approach opened up areas of synthesis where many things could be done to create very complex spectra with FM. The DX7 was built around the idea of double-carrier FM, in which a single modulator controls two carriers, tuned differently. This allows the creation of formant areas (see formant synthesis below) not possible with single FM. Also, stacks of modulators, where a modulator was itself modulated, could either produce wildly complex spectra if tuned inharmonically, or produce weighted spectra, which could create a more realistic bass. This helped with one of FM's greatest drawbacks: the strengths of the upper and lower sidebands are equal, but human hearing requires more energy in the lower frequencies for them to be perceived as equally loud as the higher frequencies, so single FM always seemed weighted toward the treble. Another interesting idea is to modulate the modulation index itself, producing a rapid timbral shift, or to low-frequency modulate the modulator or carrier, changing the C:M ratio and therefore the frequencies of the sidebands, for some very nice effects.
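To make the sideband arithmetic concrete, here is a tiny sketch that lists the frequencies FM produces for a given C:M pair (the values are examples):

```python
# Sidebands appear at Cf +/- n*Mf; negative results fold back (reflect) at 0 Hz.
Cf, Mf = 440.0, 220.0          # a 2:1 C:M ratio

for n in range(0, 5):
    upper = Cf + n * Mf
    lower = abs(Cf - n * Mf)   # folded if Cf - n*Mf goes below zero
    print(n, upper, lower)
# n=0 gives the carrier itself; a 2:1 ratio yields 440, 660, 880, ... and
# 220, 0, 220, ... i.e. a harmonic spectrum on a 220 Hz fundamental.
```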

Wavetable Synthesis

This form of synthesis incorporates the use of pre-recorded, digitized audio waveforms of real or synthetic instruments. The waveforms are stored in memory and played back at varying speeds for the corresponding notes played. These waveforms usually have a looped segment, which allows a sustained note to be played. Using envelopes and modulators, these waveforms can be processed and layered to form complex sounds that can often be lush and interesting. The processes are algorithmic, and memory is crucial to house the waveforms. Sequential linear crossfading, quasi-periodic functions and sine functions are used in this type of synthesis.
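A sketch of the core mechanism (a single-cycle table read out at different speeds; assuming NumPy, with an example table size of 2048 samples):

```python
import numpy as np

SR = 44100
TABLE_SIZE = 2048

# A single-cycle waveform stored in memory (here just one sine cycle).
table = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def play(freq, seconds=1.0):
    """Read the stored cycle at varying speed: the phase increment sets pitch."""
    n = int(SR * seconds)
    increment = freq * TABLE_SIZE / SR           # table steps per output sample
    phase = (np.arange(n) * increment) % TABLE_SIZE
    return table[phase.astype(int)]              # nearest-sample lookup

note_a = play(220.0)     # same table, different speeds, different pitches
note_e = play(329.6)
```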

The
PPG system was the
first of
the wavetable synthesizers, where related single-cycle waveforms were
stored
in a group of 32. The user could pick a starting waveform and then use
an envelope or LFO to move around in the wavetable, causing timbral
changes as the waveform being read out changed. Differences between
adjoining waveforms were fairly slight, so the degree of timbral change
was determined by how far and how fast the readout moved from the
original starting point.
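The PPG-style wavetable scan can be sketched like this (an illustration of the concept, not PPG's actual firmware; the bank contents and sizes are example values):

```python
import numpy as np

SR = 44100
N_WAVES, CYCLE = 32, 256

# A bank of 32 related single-cycle waves: here, progressively brighter
# saw-like shapes built by adding one more harmonic per wave.
phase = np.arange(CYCLE) / CYCLE
bank = np.array([
    sum(np.sin(2 * np.pi * (h + 1) * phase) / (h + 1) for h in range(k + 1))
    for k in range(N_WAVES)
])

def scan(position_signal, freq, n_samples):
    """Read out a tone while an envelope/LFO moves the wavetable position."""
    out = np.empty(n_samples)
    ph = 0.0
    for i in range(n_samples):
        wave = bank[int(position_signal[i] * (N_WAVES - 1))]   # pick a wave
        out[i] = wave[int(ph) % CYCLE]
        ph += freq * CYCLE / SR
    return out

n = SR   # one second; a slow ramp sweeps from wave 0 to wave 31
tone = scan(np.linspace(0.0, 1.0, n), 110.0, n)
```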
Wave Sequencing - Morphing the Soundwave, Part 2, & the Sequential Circuits Prophet VS

The transition from waveform to waveform in Sequential's Vector Synthesis, first seen on the Prophet VS, was a simple crossfade, and although two of these crossfades could be controlled or programmed by the joystick which was so integral to the Vector Synthesis system, the maximum number of waveforms which could be involved in a single sound was four. However, the next development from the Sequential Circuits designers, called Wave Sequencing, allowed up to 255 different waves to be involved. This innovation was introduced on the Korg Wavestation, which still featured joystick-controlled Vector Synthesis, but added the much greater potential for transitional synthesis that wave sequencing gives.

Prophet VS

The
Prophet VS used four digital wavetable oscillators based on those in
the PPG Wave as its four sound sources. The limitations, particularly
the digital aliasing, of this design, coupled with its use of Curtis
analogue filter ICs to process the mixed sound, gave the Prophet VS its
distinctive sound.

In
the case of wave sequencing, coming 10 years after wavetable synthesis,
there was much less economic restriction on memory for storing
waveforms. As a result, instead of access being limited to 32
single-cycle waveforms, full PCM samples were available, and up to 255
could be 'on-line' for use by an oscillator in a sound. Each stage in
the wave sequence could be occupied by a PCM sound radically different
from the one before or after it in the sequence. The potential for
striking sonic change is therefore much greater in wave sequencing,
especially since the PCM waveforms can be deliberately moved around by
the user to contrast as much as possible with the other PCM waveforms.

Korg Wavestation

Vector Synthesis - Morphing the Soundwave, Part 3

The Korg Wavestation took the concepts of the Prophet VS & the Yamaha SY22 and went even further. The Wavestation allowed each of the four sound sources to produce not just a static tone, but a complex wave sequence, by playing back or cross-fading one wave after another.

This method of synthesis incorporates the combining and processing of digital waveforms. Using PCM samples, effects and filtering, this method of synthesis can create stunning sounds, from lush and evolving pads to strange stepped sequences. Korg made the famous Wavestation range of synthesizers, and these were based around the Sequential Circuits Prophet VS. Working off a two-dimensional envelope using an X and Y axis (a joystick) and four voices, this synthesizer also had wave sequencing, playing a loopable sequence of PCM samples in a rhythmic and/or crossfaded fashion. The idea was to be able to crossfade two or more waveforms using the joystick.
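The joystick mix itself is simple arithmetic; here is a sketch of a four-source vector crossfade (my own formulation of the bilinear mix, with NumPy; the sources are example waveforms):

```python
import numpy as np

def vector_mix(a, b, c, d, x, y):
    """Blend four sources with joystick position x, y in [0, 1].
    (0,0) is all A, (1,0) all B, (0,1) all C, (1,1) all D."""
    return (a * (1 - x) * (1 - y) +
            b * x * (1 - y) +
            c * (1 - x) * y +
            d * x * y)

# Four example sources: one second each of different waveforms.
SR = 44100
t = np.arange(SR) / SR
saw = 2.0 * ((220.0 * t) % 1.0) - 1.0
sqr = np.sign(np.sin(2 * np.pi * 220.0 * t))
tri = 2.0 * np.abs(2.0 * ((220.0 * t) % 1.0) - 1.0) - 1.0
sin = np.sin(2 * np.pi * 220.0 * t)

centre = vector_mix(saw, sqr, tri, sin, 0.5, 0.5)   # equal blend of all four
```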

There have been a number of different implementations of vector synthesis. These differ in what they use for the four sound sources, and in what processing is done to the sound after the vector synthesis stage. The underlying vector synthesis concept, first developed by Sequential Circuits, is identical in all of them.

Synthesis Techniques (ADVANCED)

Granular Synthesis

This is the method by which tiny events of sound (grains or clouds) are manipulated to form new complex sounds. By using varying frequencies and amplitudes of the sonic components, and by processing varying sequences and durations of these grains, a new complex sound is formed.

Granular
Synthesis is a method by which sounds are broken into tiny grains which
are then redistributed and reorganised to form other sounds.

Granular synthesis is perceived as a relatively recent development in sound synthesis, but it can also be seen as a reflection of long-standing ideas about the nature of sound. Quantum physics has shown that sound can be atomically reduced to physical particles (Wiener 1964). This physical form of sound was first envisioned by the Dutch scientist Isaac Beeckman (Cohen 1984), who explained that sound travels through the air as globules of sonic data. It has only been through the use of modern computers that this form of synthesis could be practiced and its results appreciated. Though this is one of the newer concepts in synthesis technology, the concept and principles are just about as old as, if not older than, the concepts arrived at by Max Mathews.

Later works, including those by Gabor (Gabor 1946) and more recently Xenakis (Xenakis 1971), Roads (Roads 1988), and Truax (Truax 1990), have evolved the particle theory of sound into a synthesis method whereby the natural sound particle is imitated and magnified, referred to as a grain. The grain is then layered with other grains, either cloned or extracted through a similar process as the original, to create different sounds and sonic textures. The original intent of the process described by Gabor was to reduce the amount of data required to convey human audio communication.

Gabor's research came into the hands of Xenakis, who recognised a musical application for this work (Xenakis 1971). Xenakis' first works involving granular synthesis were created with a reel-to-reel tape recorder, by splicing magnetic tape into tiny segments, rearranging the segments, and taping the new string of segments together. After attending a seminar conducted by Xenakis on this topic, Roads began experimenting with this idea on a computer. His first experiments were extremely time consuming, even when rendering just a one-minute mono sound (we are not talking minutes here, nor hours, but days, usually weeks, depending on scheduling and transferring). After reading an article about granular synthesis written by Roads in 1978, Truax began developing a way to create granular synthesis in real time, first realised in 1986. From this point on, granular synthesis has slowly become available to a growing number of musicians and sound artists.

These basic questions might help you understand granular synthesis:

What is a grain?

A grain is a small piece of sonic data. In granular synthesis it will usually have a duration between 10 and 50 ms. The grain can be broken down into smaller components: the envelope and the contents. The envelope is used primarily so that there is no distortion or crunching noise at the beginning and end of the sample, though the shape of the envelope also has a significant effect on the grain. The contents of the grain is audio, which can be derived from any source: a sine wave, a square wave, an audio sample, etc.
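A grain is easy to sketch in code (assuming NumPy; the Hann window is one common choice of grain envelope, and the duration is an example value):

```python
import numpy as np

SR = 44100

def make_grain(source, start, dur_ms=30.0):
    """Cut a grain from source audio: a short snippet shaped by an envelope."""
    n = int(SR * dur_ms / 1000.0)           # 10-50 ms is typical
    contents = source[start:start + n]      # the grain's audio contents
    envelope = np.hanning(n)                # smooth rise/fall prevents clicks
    return contents * envelope

source = np.sin(2 * np.pi * 440.0 * np.arange(SR) / SR)   # any audio will do
grain = make_grain(source, start=1000)
```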

What is wavelet synthesis?

Wavelet synthesis is very closely related to granular synthesis, except that it is stricter in its definition and construction. A granular synthesis grain can be set at any length arbitrarily, whereas a wavelet derives its "grain" length from the pitch of the contents, using the wavelet transform. The wavelet is designed to start and end at zero phase. Wavelet synthesis can be used for better pitch shifting and reproduction than granular synthesis, but it requires so much analysis that it is much slower to work with in a real-time environment.

What is grainlet synthesis?

It is actually another name for wavelet synthesis. It is the term more commonly used when referring to compositions created using the wavelet transform, whereas wavelet synthesis is the term more commonly used when referring to the analysis and reconstruction of audio. I personally prefer to use the term wavelet synthesis for all outcomes using the wavelet transform.

What is glisson synthesis?

A derivative of granular synthesis whereby the contents of each grain are modified with a glissando, a glide from one pitch to another.

Key
to all granular
techniques is the grain
envelope.
For sampled sound, a short linear attack and decay prevent clicks being
added to the sound. Changing the slope of the grain envelope, in
classic microsound practice, changes the resulting spectrum, sharper
attacks producing broader bandwidths, just as with very short grain
durations.


Types of granular synthesis

Quasi-synchronous granular synthesis

A grain stream of equal-duration grains, producing amplitude modulation when grain durations are less than 50 ms. Several grain streams with variable delay times between grains can be summed, and the result resembles asynchronous granular synthesis.

Asynchronous granular synthesis

Grains are distributed stochastically, with no quasi-regularity.

Pitch-synchronous granular synthesis

Overlapping grain envelopes designed to be synchronous with the frequency of the grain waveform, thereby producing fewer audio artifacts.

What
is most remarkable about the technique is the relation between the
triviality of the grain (heard alone it is the merest click or 'point'
of sound) and the richness of the layered granular texture that results
from their superimposition. The grain is an example of British
physicist Dennis Gabor's idea (proposed in 1947) of the quantum of
sound, an indivisible unit of information from the psychoacoustic point
of view, on which all macro-level phenomena are based. In
another analogy to quantum physics, time is reversible at the quantum
level in that the quantum grain of sound is reversible with no change
in perceptual quality. That is, if a granular synthesis texture is
played backwards it will sound the same, just as if the direction of
the individual grain is reversed (even if it is derived from natural
sound), it sounds the same. This time invariance also permits a time
shifting of sampled environmental sound, allowing it to be slowed down
with no change in pitch. This technique is usually called granulation.
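A sketch of granulation used for time-stretching (my own minimal version: grains are read from the source at one rate and written to the output at a slower rate, so duration changes while pitch does not; all parameters are example values):

```python
import numpy as np

SR = 44100
GRAIN = int(0.04 * SR)           # 40 ms grains
HOP_OUT = GRAIN // 2             # output hop: 50% overlap
STRETCH = 2.0                    # 2x slower, same pitch

def granulate(source, stretch=STRETCH):
    hop_in = int(HOP_OUT / stretch)        # read more slowly than we write
    n_grains = (len(source) - GRAIN) // hop_in
    out = np.zeros(n_grains * HOP_OUT + GRAIN)
    window = np.hanning(GRAIN)
    for g in range(n_grains):
        grain = source[g * hop_in : g * hop_in + GRAIN] * window
        out[g * HOP_OUT : g * HOP_OUT + GRAIN] += grain    # overlap-add
    return out

t = np.arange(SR) / SR
stretched = granulate(np.sin(2 * np.pi * 330.0 * t))   # ~2 s long, still 330 Hz
```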

Pulsar Synthesis

A form of particle synthesis whereby each grain is created as a pulsar, generated by an impulse generator. I found very little that really demonstrates pulsar synthesis; it is a new field of study, both sonically and visually. The embedded YouTube video was selected from only three that I have come across.

Pulsar
synthesis is a method of electronic music synthesis based on the
generation of trains of sonic particles. PS can produce either rhythms
or tones as it criss-crosses perceptual time spans. The basic method
generates sounds similar to vintage electronic music sonorities, with
several interesting enhancements. The video performance method shown combines multiple pulsar trains with sampled sounds. Pulsar synthesis, to me, is a new type of clock or metronome that could be used in a Pure Data or Max/MSP structure; the results should be very interesting and complex.
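A minimal pulsar-train sketch (based on the description above of trains of sonic particles: each period holds a brief burst, the pulsaret, followed by silence; all parameter values are examples):

```python
import numpy as np

SR = 44100

def pulsar_train(fund_hz, form_hz, seconds=1.0):
    """Emit a train of pulsarets: each period holds a short burst plus silence.
    fund_hz sets the repetition rate; form_hz sets the pitch inside the burst."""
    period = int(SR / fund_hz)                 # samples per pulsar period
    burst = int(SR / form_hz)                  # pulsaret length (one cycle)
    pulsaret = np.sin(2 * np.pi * np.arange(burst) / burst) * np.hanning(burst)
    train = np.zeros(int(SR * seconds))
    for start in range(0, len(train) - burst, period):
        train[start:start + burst] += pulsaret
    return train

rhythm = pulsar_train(8.0, 400.0)      # 8 Hz repetition: heard as a rhythm
tone = pulsar_train(220.0, 2200.0)     # 220 Hz repetition: fuses into a tone
```

The two calls illustrate how pulsar synthesis criss-crosses perceptual time spans: the same mechanism yields a rhythm at low repetition rates and a tone at audio rates.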

Filter Synthesis

The Z-plane synthesis concept developed by E-mu Systems is an element of transitional synthesis. However, the transition does not happen between different oscillator waveforms but in the filter section of the synth. Z-plane synthesis was first implemented in the wittily-named Morpheus (the name has nothing to do with the figure from Greek mythology but refers to 'morphing', a term which means to change from one thing to another), and its use of interpolation between two filter shapes is very reminiscent of how the Fairlight 'merged' from one waveform to another. Extremely complex filter shapes are created through the use of up to eight filter components, each of which is comparable to a traditional low-pass, band-pass or high-pass filter or a parametric equaliser band. The resulting sculpting of the sound is far more precise and subtle than in any previous type of synthesis. In addition to the basic function of the filter, starting by removing the high and/or low end, peaks and notches can be placed at will anywhere across the entire audible frequency range. The Z-plane filter is found only in some commercially available products: E-mu's Morpheus, E-mu's UltraProteus, and E-mu samplers with the EOS operating system installed all have the Z-plane filter.

However,
not satisfied
with being able to tailor the
most precise filter responses ever, Z-plane synthesis is then able to
interpolate smoothly between two of them. This not only allows the user
access to a myriad of additional filter responses, if the filter is
held static in any of the transition positions, but as the
interpolation can be carried out in real time, radical changes in the
filter response can be made in the course of a sound being played back,
with the 'Morph' parameter enabling the user to change backwards and
forwards at will between the starting and ending filter shapes. With
Emu's long-established modulation matrix providing a
host of possible controllers for this Morph parameter, these timbral
changes can be controlled by anything from velocity, envelopes or
wheels, through to custom Function Generators. Whilst this is all
similar in concept to controlling the cutoff frequency of a
conventional filter using an envelope or LFO, the actual results
produced are far more striking to the ear.
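The morphing idea can be illustrated crudely in code (this is not E-mu's algorithm, just my sketch of interpolating between two complex filter shapes, here applied as frequency-domain gain curves; the two shapes are hypothetical):

```python
import numpy as np

SR = 44100

def shape(freqs, peaks):
    """A filter 'shape': a gain curve with resonant peaks at given frequencies."""
    g = np.full_like(freqs, 0.1)
    for f0, height, width in peaks:
        g += height * np.exp(-0.5 * ((freqs - f0) / width) ** 2)
    return g

def morph_filter(signal, shape_a, shape_b, morph):
    """Interpolate between two filter shapes and apply the result."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / SR)
    gains = (1 - morph) * shape_a(freqs) + morph * shape_b(freqs)
    return np.fft.irfft(spectrum * gains, len(signal))

# Two hypothetical shapes: peak clusters at different places in the spectrum.
a_shape = lambda f: shape(f, [(500, 1.0, 80), (1500, 0.7, 120)])
b_shape = lambda f: shape(f, [(300, 1.0, 60), (2500, 0.8, 200)])

noise = np.random.default_rng(0).standard_normal(SR)
halfway = morph_filter(noise, a_shape, b_shape, morph=0.5)   # the 'Morph' knob
```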

Once
you've managed to
get your
head round this, brace yourself, because we still haven't scratched the
surface of Z-plane synthesis. In fact, the basic Morph parameter on its
own might be thought of as X-axis synthesis. Another parameter,
Frequency Tracking, introduces the equivalent of a Y-axis into the
equation. This is the closest parameter to the conventional filter
cutoff, in that it moves the complex Morph filter up and down the
frequency range.

In
combination with the
Morph parameter,
Frequency Tracking gives two-dimensional control over the filter shape.
Unlike a conventional filter cutoff,
though, the Frequency Tracking parameter cannot be moved in real time,
but must be set at Note On (presumably because there has to be some
limit on the processing power required). This makes it suitable for
hooking to parameters like keyboard tracking and velocity, but
unavailable for controlling from aftertouch or envelopes. However, the
real-time Morph parameter allows much more radical effects than filter
cutoff movement, and thus more than makes up for the fact that you have
to fix the Frequency Tracking at Note On.

The
observant amongst
you will have spotted that I've still not mentioned the 'Z' axis that
completes Z-plane synthesis: a third parameter, Transform 2. The
function of this varies from Z-plane filter to Z-plane filter, but one
example of what it can do is increase the size of the peaks and notches
in the filter contour (similar to the individual peak which is
increased in a conventional filter by the resonance control).

The
Transform 2
parameter, like the Frequency Tracking
parameter, is also fixed at Note On, but this actually gives you more
flexibility than most traditional filtering, where there is rarely any
automatic control of resonance at all and you have to make do with the
fixed setting whatever the note played or its velocity. Not all of the
197 filter
types in the original Morpheus feature this third Transform 2
parameter, but about half do (so technically there are around 100
Z-plane filter configurations in Morpheus). All the filter
configurations are individually described in the manual, complete with comments and suggestions for specific uses, so there's no danger that you'll be left to yourself to try and work out where to use them (although I find that random assignment leads to some of the most exciting results).

You
really can make some major timbral alterations to your source waveform,
changing it almost beyond recognition. In fact, the sheer range of
filter types and the way they can be altered in performance, the
technology used to create and modify the filter contours on an
individual basis, and the resulting sonic variations in the sound, make
Z-plane synthesis a real precursor to
physical modelling (also known as virtual synthesis or acoustic
modelling). This uses shedloads of DSP power to modify source waveforms
in the same way that the physical modifiers of the real instrument
(shape and size of resonating case or vibration column, for example)
affect the input sound. Many of the Z-plane filters available in the
E-mu Morpheus synth are described in
these terms -- for example, F097 ("designed to make possible a set of
piano presets that sound like they were recorded with the sustain pedal
down"), or F105 ("designed to emulate some of the resonant
characteristics of an acoustic guitar body"). As such, the Morpheus
probably represents the missing link between instruments which just use
DSP to add some effects sparkle, and those which create the entire sound through raw DSP, as in physical modelling instruments such as the
Yamaha VL series or the Korg Prophecy or Z1.

Physical Modelling (PM or PHM)

This
form of synthesis simulates the physical properties of natural
instruments, or any sound, by using complex mathematical equations in
real-time. This requires huge processing power. You are not actually creating the sound; rather, you are creating and controlling the process that produces that sound. Waveguides and algorithms come into this process heavily.

All
the other methods of synthesis I have described have parameters
involved with each type of synthesis that don't change depending on the
type of sound you're trying to get. There's a filter attack parameter
on an S&S (Sample & Synthesis) synth whether you're
trying to
produce a piano, strings, or a synth bass. There are harmonic levels on
an additive synth whether you're making a brass sound or a harpsichord.
The wave sequencing parameters on a Wavestation are always there,
whether you use them or not!

The
same is not true of
a current
multi-model synthesizer such as the Korg Prophecy/Z1 or a Yamaha
VL-series synth. Look for the same parameters you used to make a flute
sound when using the Bowed String model and you'll be out of luck: the
parameters change depending on the model you have selected. This is why
the time it takes to change patches on a modelling synth is often
perceptible, because so many different parameters need to be broken
down and re-configured. Quite often when you change models, you are
quite literally changing synths. This can make physical modelling as a
method of synthesis quite challenging to define, which is why the DSP
effects analogy is quite useful. We expect the parameters to change
when we switch a multi-effects unit from reverb to flanging or
distortion; the multi-modelling synth is the same -- only more so.
Think of changing from a tenor sax to a soprano as akin to changing
from a hall reverb to a room; changing to a violin is like selecting a
phaser effect instead. The only real difference is one of scale: the amount of DSP power in a modelling synth is greater by an order of magnitude or two.

The physical model attempts to work out what happens in the real world, and then uses mathematical calculations to attempt to recreate this in software. The degree of realism achieved depends on two things: how accurate the analysis or 'model' of what happens in the real world is, and how closely the DSP algorithms reproduce this analysis. If a sound designer misunderstands how the sound is produced in the real world, then -- however good his DSP code is -- it's unlikely that he'll make a very realistic-sounding reverb or plucked string instrument (although he may create some great new effect or sound which can't be produced in the real world). On the other hand, however great the understanding of the processes involved, if the sound designer doesn't have the necessary DSP horsepower to hand, he may get into the right ballpark, but he isn't going to fool anyone that this is a real hall or a real guitar. The physical modelling approach to synthesis is an all-out attempt to recreate that which is naturally heard, through a mathematical, algorithmic process.

Digital Waveguide Synthesis

Digital
waveguide synthesis is the synthesis of audio using a digital
waveguide. Digital waveguides are efficient computational models for
physical media through which acoustic waves propagate. For this reason,
digital waveguides constitute a major part of most modern physical
modeling synthesizers.

A
lossless digital
waveguide realizes the
discrete form of d'Alembert's solution of the one-dimensional wave
equation as the superposition of a right-going wave and a left-going
wave,

y(m, n) = y+(m − n) + y−(m + n)

where y+ is the right-going wave and y− is the left-going wave. It can be seen
from this representation that sampling the function y at a given point
m and time n merely involves summing two delayed copies of its
traveling waves. These traveling waves will reflect at boundaries such
as the suspension points of vibrating strings or the open or closed
ends of tubes. Hence the waves travel along closed loops.

Digital
waveguide models therefore comprise digital delay lines to represent
the geometry of the waveguide which are closed by recursion, digital
filters to represent the frequency-dependent losses and mild dispersion
in the medium, and often non-linear elements. Losses incurred
throughout the medium are generally consolidated so that they can be
calculated once at the termination of a delay line, rather than many
times throughout.

Waveguides
such as
acoustic tubes may be
thought of as three-dimensional, but because their lengths are often
much greater than their cross-sectional area, it is reasonable and
computationally efficient to model them as one dimensional waveguides.
Membranes, as used in drums, may be modeled using two-dimensional
waveguide meshes, and reverberation in three dimensional spaces may be
modeled using three-dimensional meshes. Vibraphone bars, bells, singing
bowls and other sounding solids (also called idiophones) can be modeled
by a related method called banded waveguides where multiple
band-limited digital waveguide elements are used to model the strongly
dispersive behavior of waves in solids.

The
term "Digital
Waveguide Synthesis" was coined by Julius O. Smith III who helped
develop it and eventually filed the patent. It represents an extension
of the Karplus-Strong algorithm. Stanford University owns the patent
rights for digital waveguide synthesis and signed an agreement in 1989
to develop the technology with Yamaha.
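The flavour of the technique is easiest to see in the Karplus-Strong algorithm that digital waveguide synthesis extends: a delay line closed into a loop, with a mild lowpass filter standing in for the frequency-dependent losses (a minimal sketch, assuming NumPy; the loss factor is an example value):

```python
import numpy as np

SR = 44100

def karplus_strong(freq, seconds=1.0):
    """Plucked string: a recirculating delay line with a simple loss filter."""
    delay = int(SR / freq)                     # delay length sets the pitch
    rng = np.random.default_rng(0)
    line = rng.uniform(-1.0, 1.0, delay)       # burst of noise = the 'pluck'
    out = np.empty(int(SR * seconds))
    for i in range(len(out)):
        out[i] = line[i % delay]
        # Two-point average: a crude lowpass modelling losses in the string.
        line[i % delay] = 0.996 * 0.5 * (line[i % delay] +
                                         line[(i + 1) % delay])
    return out

pluck = karplus_strong(196.0)    # roughly a guitar G string
```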

An
extension to DWG
synthesis of strings made by Smith is commuted synthesis, wherein the
excitation to the digital waveguide contains both string excitation and
the body response of the instrument. This is possible because the
digital waveguide is linear and makes it unnecessary to model the
instrument body's resonances after synthesizing the string output,
greatly reducing the number of computations required for a convincing
resynthesis.

Waveset Distortion

Generally refers to any process which irreversibly alters the wavesets in a sound: waveset inversion, omission, reversal, shaking, shuffling, substitution, averaging and harmonic distortion. It is specifically used to refer to power distortion, raising each sample of the sound to a power (e.g. squaring, cubing, or taking the square root). Simply put, it exponentially alters a single parameter of a sound, thus distorting one of its features. This is a simple process that can be accomplished in a number of ways through both analog and digital means, but it does yield an entirely new waveform.
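As a final sketch, power distortion is nearly a one-liner on sampled audio (assuming NumPy; the exponent is an example value, and keeping the sign is my own choice so that negative samples stay negative):

```python
import numpy as np

def power_distort(signal, power=2.0):
    """Raise each sample's magnitude to a power, keeping its sign."""
    return np.sign(signal) * np.abs(signal) ** power

t = np.arange(44100) / 44100.0
sine = np.sin(2 * np.pi * 220.0 * t)
squared = power_distort(sine, 2.0)     # adds harmonics: an entirely new waveform
```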