``Linear'' in this context means that the unknowns are multiplied only by constants--they may not be multiplied by each other or raised to any power other than 1 (e.g., not squared or cubed). Linear systems of $N$ equations in $N$ unknowns are very easy to solve compared to nonlinear systems of $N$ equations in $N$ unknowns. For example, Matlab and Octave can easily handle them. You learn all about this in a course on Linear Algebra, which is highly recommended for anyone interested in getting involved with signal processing. Linear algebra also teaches you all about matrices, which are introduced only briefly in Appendix H.
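As a concrete illustration (sketched here in Python with NumPy rather than the Matlab/Octave mentioned above; the particular system is invented for the example), solving a small linear system is a single function call:

```python
import numpy as np

# A 2x2 linear system in the unknowns x and y:
#   3x + 2y = 12
#    x -  y = -1
A = np.array([[3.0,  2.0],
              [1.0, -1.0]])
b = np.array([12.0, -1.0])

solution = np.linalg.solve(A, b)   # the analogue of A\b in Matlab/Octave
```

Here `solution` comes out as $(x,y)=(2,3)$; a nonlinear system, by contrast, generally requires iterative methods.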

That the rationals are dense in the reals is easy to show using decimal expansions. Let $x_1$ and $x_2$ denote any two distinct, positive, irrational real numbers. Since $x_1$ and $x_2$ are distinct, their decimal expansions must differ in some digit, say the $n$th. Without loss of generality, assume $x_1<x_2$. Form the rational number $r$ by zeroing all digits in the decimal expansion of $x_2$ after the $n$th. Then $x_1<r<x_2$, as needed. For two negative real numbers, we can negate them, use the same argument, and negate the result. For one positive and one negative real number, the rational number zero lies between them.
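The truncation argument can be demonstrated numerically; this sketch uses Python's decimal module, with 20-digit approximations standing in for the two irrationals:

```python
from decimal import Decimal, ROUND_DOWN

x1 = Decimal("1.41421356237309504880")   # approximates sqrt(2)
x2 = Decimal("1.61803398874989484820")   # approximates the golden ratio

# The expansions first differ in the first digit after the decimal
# point (4 versus 6), so zero all of x2's digits after that one:
r = x2.quantize(Decimal("0.1"), rounding=ROUND_DOWN)

# r = 1.6 is rational and lies strictly between x1 and x2.
```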

This was computed via N[Sqrt[2],60] in Mathematica. Symbolic mathematics programs, such as Mathematica, Maple (offered as a Matlab extension), Maxima (a free, GNU descendant of the original Macsyma, written in Common Lisp, and available at http://maxima.sourceforge.net), GiNaC (the Maple replacement used in the Octave Symbolic Manipulation Toolbox), or Yacas (another free, open-source program with goals similar to those of Mathematica), are handy tools for cranking out any number of digits of irrational numbers such as $\sqrt{2}$. In Yacas (as of Version 1.0.55), the syntax is

Precision(60); N(Sqrt(2))

Of course, symbolic math programs can do much more than this, such as carrying out algebraic manipulations on polynomials and solving systems of symbolic equations in closed form.
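Lacking a symbolic math system, one can also get arbitrarily many digits of $\sqrt{2}$ from Python's standard decimal module (a sketch, not one of the programs listed above):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60        # work with 60 significant digits
sqrt2 = Decimal(2).sqrt()     # 1.41421356237309504880...
```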

We will use the chain rule from
calculus without proof. Note that the use of calculus is beyond the
normal level of this book. Since calculus is only needed at this one
point in the DFT-math story, the reader should not be discouraged if
its usage seems like ``magic''. Calculus will not be needed at all for
practical applications of the DFT, such as spectrum analysis,
discussed in Chapter 8.

In Mathematica, the first 50 digits of $e$ may be computed by
the expression N[E,50] (``evaluate numerically the
reserved-constant E to 50 decimal places''). In the
Octave Symbolic Manipulation Toolbox (part of Octave Forge), one
may type ``digits(50); Exp(1)''.
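For comparison, Python's standard decimal module can produce the same digits (a stdlib sketch, not one of the symbolic programs named in the text):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50        # 50 significant digits
e = Decimal(1).exp()          # e = exp(1) = 2.7182818284590452...
```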

A system $S$ is said to be linear if for any two input signals $x_1$ and $x_2$, we have $S(x_1+x_2)=S(x_1)+S(x_2)$, and $S(\alpha x_1)=\alpha\,S(x_1)$ for any scalar $\alpha$. A system is said to be time invariant if $y(n)=S\{x(n)\}$ implies $y(n-d)=S\{x(n-d)\}$ for any integer delay $d$. This subject is developed in detail in the second book [71] of the music signal processing series, available online at http://ccrma.stanford.edu/~jos/filters/.
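These two properties are easy to check numerically for a simple system. The sketch below (Python/NumPy; the example filter $y(n)=x(n)+\frac{1}{2}x(n-1)$, with indices taken modulo $N$, is invented for the test) verifies superposition and circular shift invariance:

```python
import numpy as np

def S(x):
    """Example LTI system: y(n) = x(n) + 0.5*x(n-1), indices modulo N."""
    return x + 0.5 * np.roll(x, 1)

x1 = np.array([1.0, 2.0, 3.0, 0.0])
x2 = np.array([0.0, -1.0, 1.0, 2.0])
a, b = 2.0, -3.0

# Linearity: S(a*x1 + b*x2) == a*S(x1) + b*S(x2)
lin_ok = np.allclose(S(a * x1 + b * x2), a * S(x1) + b * S(x2))
# Time invariance: a (circular) input shift shifts the output equally
ti_ok = np.allclose(S(np.roll(x1, 1)), np.roll(S(x1), 1))
```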

Existence of the Laurent expansion follows from the fact that the generating function is a product of an exponential function, $e^{xt/2}$, and an exponential function inverted with respect to the unit circle, $e^{-x/(2t)}$. It is readily verified by direct differentiation in the complex plane that the exponential is an entire function of $t$ (analytic at all finite points in the complex plane) [16], and therefore the inverted exponential is analytic everywhere except at $t=0$. The desired Laurent expansion may be obtained, in principle, by multiplying the one-sided series for the exponential and inverted exponential together. The exponential series has the well known form $e^u=\sum_{n=0}^{\infty}u^n/n!$. The series for the inverted exponential can be obtained by inverting again ($t\to 1/t$), obtaining the appropriate exponential series, and inverting each term.

In complex variables, a function $f(z)$ is ``analytic'' at a point $z_0$ if it is differentiable to all orders at each point in some neighborhood of $z_0$ [16]. Therefore, one might expect an ``analytic signal'' to be any signal which is differentiable to all orders at any point in time, i.e., one that admits a fully valid Taylor expansion about any point in time. However, all bandlimited signals (being sums of finite-frequency sinusoids) are analytic in the complex-variables sense at every point in time. Therefore, the signal processing term ``analytic signal'' refers instead to a signal having ``no negative frequencies''. Equivalently, one could say that the spectrum of an analytic signal is ``causal in the frequency domain''.
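A minimal numerical sketch (Python/NumPy, with a sinusoid placed at an exact DFT bin for convenience): the analytic signal corresponding to $\cos(\omega n)$ is $e^{j\omega n}$, and its spectrum is one-sided:

```python
import numpy as np

N = 64
n = np.arange(N)
k = 5                                    # an exact bin frequency
x = np.cos(2 * np.pi * k * n / N)        # a real signal
xa = np.exp(1j * 2 * np.pi * k * n / N)  # its analytic-signal counterpart

X = np.fft.fft(xa)
# All the energy lies in positive-frequency bin k; negative-frequency
# bin N-k is numerically zero ("causal in the frequency domain").
```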

This operation is actually
used in some real-world AM and FM radio receivers (particularly in digital
radio receivers). The signal comes in centered about a high ``carrier
frequency'' (such as 101 MHz for radio station FM 101), so it looks very
much like a sinusoid at frequency 101 MHz. (The frequency modulation only
varies the carrier frequency in a relatively tiny interval about 101 MHz.
The total FM bandwidth including all the FM ``sidebands'' is about 100 kHz.
AM bands are only 10 kHz wide.) By delaying the signal by 1/4 cycle, a good
approximation to the imaginary part of the analytic signal is created, and
its instantaneous amplitude and frequency are then simple to compute from
the analytic signal.
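The quarter-cycle trick can be sketched as follows (Python/NumPy, using an artificial carrier with exactly 8 samples per cycle so the quarter-cycle delay is a whole number of samples):

```python
import numpy as np

period = 8                      # samples per "carrier" cycle
amp = 1.5                       # constant amplitude, for checking
n = np.arange(200)
x = amp * np.cos(2 * np.pi * n / period)

y = np.roll(x, period // 4)     # 1/4-cycle delay ~ imaginary part
env = np.sqrt(x**2 + y**2)      # instantaneous-amplitude estimate
```

For this pure sinusoid the delayed copy equals the exact imaginary part, so `env` recovers the amplitude exactly; for a narrowband FM or AM signal it is a good approximation.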

Demodulation is the process of recovering the modulation signal. For amplitude modulation (AM), the modulated signal is of the form $x(t)=A(t)\cos(\omega_c t)$, where $\omega_c$ is the ``carrier frequency'', $A(t)=1+\mu\,m(t)$ is the amplitude envelope (modulation), $m(t)$ is the modulation signal we wish to recover (the audio signal being broadcast in the case of AM radio), and $\mu$ is the modulation index for AM.
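The recovery of the envelope can be sketched as follows (Python/NumPy; the carrier, modulator, and FFT-based analytic-signal construction are all choices made for this example):

```python
import numpy as np

N = 1024
n = np.arange(N)
wc = 2 * np.pi * 64 / N                     # carrier frequency (bin 64)
m = 0.5 * np.sin(2 * np.pi * 4 * n / N)     # modulation signal (bin 4)
mu = 1.0                                    # modulation index
A = 1.0 + mu * m                            # amplitude envelope (positive)
x = A * np.cos(wc * n)                      # AM-modulated signal

# Demodulate: form the analytic signal by zeroing the negative-frequency
# bins, then take its magnitude to recover the envelope.
X = np.fft.fft(x)
X[N // 2 + 1:] = 0.0
X[1:N // 2] *= 2.0
env = np.abs(np.fft.ifft(X))
```

Because the modulation is narrowband relative to the carrier, the recovered `env` matches $A$ to machine precision here.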

In this section, $x$ denotes a signal, while in the previous sections, we used an underline ($\underline{x}$) to emphasize the vector interpretation of a signal. One might worry that it is now too easy to confuse signals (vectors) and scale factors (scalars), but this is usually not the case: signal names are generally taken from the end of the Roman alphabet ($u,v,w,x,y,z$), while scalar symbols are chosen from the beginning of the Roman ($a,b,c,\ldots$) and Greek ($\alpha,\beta,\gamma,\ldots$) alphabets. Also, formulas involving signals are typically specified on the sample level, so that signals are usually indexed ($x(n)$) or subscripted ($x_n$).

The energy of a pressure wave is the integral over time and area of the squared pressure divided by the wave impedance of the medium in which the wave travels. The energy of a velocity wave is the integral over time of the squared velocity times the wave impedance. In audio work, a signal $x(n)$ is typically a list of pressure samples derived from a microphone signal, or it might be samples of force from a piezoelectric transducer, velocity from a magnetic guitar pickup, and so on.
In all of these cases, the total physical energy associated with the
signal is proportional to the sum of squared signal samples. Physical
connections in signal processing are explored more fully in Book III
of the Music Signal Processing Series [72] (available online).

For reasons beyond the scope of this book, when the sample mean $\hat{\mu}_x$ is estimated as the average value of the same $N$ samples used to compute the sample variance $\hat{\sigma}_x^2$, the sum should be divided by $N-1$ rather than $N$ to avoid a bias [34].
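A quick simulation makes the bias visible (Python stdlib; the unit-variance Gaussian samples and trial counts are arbitrary choices for the experiment):

```python
import random

random.seed(1)
N, trials = 5, 20000
biased_avg = unbiased_avg = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(N)]
    mean = sum(xs) / N                 # sample mean of the same N samples
    ss = sum((x - mean) ** 2 for x in xs)
    biased_avg += ss / N               # divide by N:   biased low
    unbiased_avg += ss / (N - 1)       # divide by N-1: unbiased
biased_avg /= trials
unbiased_avg /= trials
# biased_avg comes out near (N-1)/N * 1.0 = 0.8; unbiased_avg near 1.0
```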

From some points of view, it is more elegant to conjugate the first operand in the definition of the inner product. However, for explaining the DFT, conjugating the second operand is better. The former case arises when expressing the inner product $\langle x,y\rangle$ as a vector operation $x^{\ast}y$, where $x^{\ast}$ denotes the Hermitian transpose of the vector $x$. Either convention works out fine, but it is best to choose one and stick with it forever.
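The two conventions differ only by an overall conjugation, as this NumPy sketch shows (note that NumPy's own np.vdot happens to conjugate its first argument):

```python
import numpy as np

x = np.array([1 + 2j, 3 - 1j])
y = np.array([2 - 1j, 0 + 1j])

ip_first = np.sum(np.conj(x) * y)    # conjugate the first operand
ip_second = np.sum(x * np.conj(y))   # conjugate the second (as for the DFT)
# The two results are complex conjugates of each other.
```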

Note that we have dropped the underbar notation for signals/vectors such as $x$ and $y$. While this is commonly done, it is now possible to confuse vectors and scalars. The context should keep everything clear. Also, symbols for scalars tend to be chosen from the beginning of the alphabet (Roman or Greek), such as $a,b,c,\alpha,\beta$, while symbols for vectors/signals are normally chosen from letters near the end, such as $x,y,z$--all of which we have seen up to now. In later sections, the underbar notation will continue to be used when it seems to add clarity.

Note that, in this section, $x$ denotes an entire signal while $x(n)$ denotes the $n$th sample of that signal. It would be clearer to use $x(\cdot)$ for the entire signal, but the expressions below would become messier. In other contexts, outside of this section, $x_n$ might instead denote the $n$th signal from a set of signals.

Spectral leakage is essentially equivalent to (i.e., a
Fourier dual of) the Gibbs phenomenon for truncated Fourier
series expansions (see §B.3), which some of us studied in high
school. As more sinusoids are added to the expansion, the error
waveform increases in frequency, and decreases in signal energy, but
its peak value does not converge to zero. Instead, in the limit as
infinitely many sinusoids are added to the Fourier-series sum, the
peak error converges to an isolated point. Isolated points
have ``measure zero'' under integration, and therefore have no effect
on integrals such as the one which calculates Fourier-series
coefficients.
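The persistent overshoot is easy to reproduce (Python/NumPy; square-wave partial sums with increasing numbers of odd harmonics):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20001)

def square_partial(K):
    """Sum of the first K odd-harmonic terms of a unit square wave."""
    s = np.zeros_like(t)
    for k in range(1, 2 * K, 2):
        s += (4.0 / np.pi) * np.sin(2.0 * np.pi * k * t) / k
    return s

peaks = [square_partial(K).max() for K in (10, 50, 200)]
# The peak hovers near 1.18 (about 9% of the jump from -1 to +1)
# no matter how many terms are added, while the peak's location
# squeezes toward the discontinuity.
```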

A spectrum is mathematically identical to a signal, since
both are just sequences of
complex numbers. However, for clarity, we
generally use ``signal'' when the sequence index is considered a time
index, and ``spectrum'' when the index is associated with successive
frequency samples.

To simulate acyclic convolution using cyclic
convolution, as is appropriate for the simulation of sampled
continuous-time systems, sufficient zero padding is used so
that nonzero samples do not ``wrap around'' as a result of the shifting of one signal in the definition of convolution. Zero padding is
discussed later in this
chapter (§7.2.7).
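The zero-padding recipe can be sketched in a few lines (Python/NumPy; the signals are arbitrary examples):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0])

L = len(x) + len(h) - 1          # acyclic convolution length
X = np.fft.fft(x, L)             # fft(x, L) zero-pads x out to length L
H = np.fft.fft(h, L)
y = np.real(np.fft.ifft(X * H))  # cyclic convolution of the padded signals
```

With the padding in place, the cyclic (FFT) result matches the acyclic `np.convolve(x, h)` exactly; without it, the tail would wrap around and corrupt the first samples.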

Normally, in practice, a
first-order recursive filter would be used to provide such an
exponential impulse response very efficiently in hardware or software
(see Book II [71] of this series for details). However, the impulse
response of any linear, time-invariant filter can be recorded and used
for implementation via convolution. The only catch is that recursive
filters generally have infinitely long impulse responses (true
exponential decays). Therefore, it is necessary to truncate the impulse
response when it decays enough that the remainder is negligible.
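The trade-off described above can be sketched as follows (Python/NumPy; the pole value and truncation length are illustrative choices):

```python
import numpy as np

p = 0.9                          # pole; impulse response h(n) = p**n
x = np.zeros(50)
x[0] = 1.0                       # test input: a unit impulse

# First-order recursive filter: y(n) = x(n) + p*y(n-1)
y_rec = np.zeros_like(x)
state = 0.0
for i, xi in enumerate(x):
    state = xi + p * state
    y_rec[i] = state

# FIR approximation: truncate the (infinite) exponential after M
# samples, where p**M (about 7e-10 here) is negligible.
M = 200
h = p ** np.arange(M)
y_fir = np.convolve(x, h)[:len(x)]
```

The recursion costs two operations per sample regardless of decay time, while the convolution's cost grows with the truncation length `M`.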

Similarly, zero padding in the
frequency domain gives what we may call ``periodic interpolation'' in the
time domain, which is exact in the DFT case only for periodic signals
having a time-domain period equal to the DFT length. (See §6.7.)

You might wonder why we need this operator, since all indexing in $\mathbb{C}^N$ is defined modulo $N$ already. The answer is that the operator formally expresses a mapping from the space of length $N$ signals to the space of length $LN$ signals. It could alternatively be defined as a scaling operator that stretches the time axis by the factor $L$, where $L$ is any positive integer (note the increase in signal length from $N$ to $LN$ samples). Such a definition would better suggest the related continuous-time Fourier theorems regarding time/frequency scaling (see Appendix C, especially §C.2).

You might be concerned about what it means when the impedance has an imaginary part. This happens when the force drives a so-called ``reactive'' load such as a mass or spring. For the details, see Book III [72] or any text on classical network theory or circuit theory. When the driving force $f(n)$ works against a real resistance $R>0$, then $v(n)=f(n)/R$ (see §F.3), so that the power is $f(n)\,v(n)=f^2(n)/R$, and in the frequency domain, $V(\omega)=F(\omega)/R$, so that the spectral power is proportional to $\left|F(\omega)\right|^2$. Thus, in the case of a real driving-point impedance $R$, the spectral power is always real and nonnegative.

While a length $N$ DFT requires approximately $N^2$ arithmetic operations, a Cooley-Tukey FFT requires closer to $N\lg N$ operations when $N$ is a power of 2, where $\lg N$ denotes the log-base-2 of $N$. Appendix A provides an introduction to Cooley-Tukey FFT algorithms in §A.1.
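For a feel of the savings, one can compare the textbook operation counts directly (Python; these are nominal counts, not measured timings):

```python
import math

def dft_ops(N):
    return N * N                  # ~N^2 for the direct DFT

def fft_ops(N):
    return N * math.log2(N)       # ~N*lg(N) for a radix-2 FFT

speedup = dft_ops(1024) / fft_ops(1024)   # 1024/lg(1024) = 102.4
```

At $N=1024$ the FFT already wins by two orders of magnitude, and the ratio $N/\lg N$ keeps growing with $N$.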

For present purposes, the expected value may be found by averaging an infinite number of cross-correlations computed using different segments of $x$ and $y$. Both $x$ and $y$ must be infinitely long, of course, and all stationary processes are infinitely long. Otherwise, their statistics could not be time invariant.

To clarify, we are using the word ``sample'' with
two different meanings. In addition to the usual meaning wherein a
continuous time or frequency axis is made discrete, a statistical
``sample'' refers to a set of observations from some presumed random
process. Estimated statistics based on such a statistical sample are
then called ``sample statistics'', such as the sample mean, sample
variance, and so on.

By the convolution theorem dual, windowing in the time domain is convolution (smoothing) in the frequency domain (§7.4.6). Since a triangle is the convolution of a rectangle with itself, its transform is sinc$^2$ in the continuous-time case (cf. Appendix D). In the discrete-time case, it is proportional to an aliased sinc$^2$.
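The rectangle-squared fact is immediate numerically (Python/NumPy):

```python
import numpy as np

rect = np.ones(4)                # length-4 rectangular window
tri = np.convolve(rect, rect)    # length-7 triangular window
# tri == [1, 2, 3, 4, 3, 2, 1]; since convolution in time is
# multiplication in frequency, the triangle's transform is the
# square of the rectangle's transform.
```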

In this context, ``highly composite'' means ``a product of many prime factors.'' For example, any power of 2 is highly composite. A number may also be composite yet require prime factors other than 2. Prime numbers are not composite at all.

Obtaining an exact integer number of samples per
period can be arranged using pitch detection and
resampling of the periodic signal. A time-varying pitch
requires time-varying resampling [75]--see
Appendix D. However, when a signal is resampled for this purpose,
one can generally choose a power of 2 for the number of samples per
period.

This result is well known in the field of
image processing. The DCT performs almost as well as the optimal
Karhunen-Loève Transform (KLT) when analyzing certain Gaussian
stochastic processes as the transform size goes to infinity. (In the
KLT, the basis functions are taken to be the eigenvectors of the
autocorrelation matrix of the input signal block. As a result, the
transform coefficients are decorrelated in the KLT, leading to
maximum energy concentration and optimal coding gain.) However, the
DFT provides a similar degree of optimality for large block sizes.
For practical spectral analysis and processing of audio signals, there
is typically no reason to prefer the DCT over the DFT.

The Heisenberg uncertainty principle in quantum physics
applies to any dual properties of a particle. For example, the
position and velocity of an electron are oft-cited as such duals. An
electron is described, in quantum mechanics, by a probability wave
packet. Therefore, the position of an electron in space can be
defined as the midpoint of the amplitude envelope of its wave
function; its velocity, on the other hand, is determined by the
frequency of the wave packet. To accurately measure the
frequency, the packet must be very long in space, to provide many
cycles of oscillation under the envelope. But this means the location
in space is relatively uncertain. In more precise mathematical terms,
the probability wave function for velocity is proportional to the
spatial Fourier transform of the probability wave for position. I.e.,
they are exact Fourier duals. The Heisenberg Uncertainty Principle is
therefore a Fourier property of fundamental particles described by
waves [20]. This of course includes all matter and energy in the
Universe.

More typically, each sample represents the
instantaneous velocity of the speaker. Here's why: Most
microphones are transducers from acoustic pressure to
electrical voltage, and analog-to-digital converters (ADCs)
produce numerical samples which are proportional to voltage. Thus,
digital samples are normally proportional to acoustic pressure
deviation (force per unit area on the microphone, with ambient air
pressure subtracted out). When digital samples are converted to
analog form by digital-to-analog conversion (DAC), each sample is
converted to an electrical voltage which then drives a loudspeaker (in
audio applications). Typical loudspeakers use a ``voice-coil'' to
convert applied voltage to electromotive force on the speaker which
applies pressure on the air via the speaker cone. Since the acoustic
impedance of air is a real number, wave pressure is directly proportional to wave velocity. Since the speaker must move in
contact with the air during wave generation, we may conclude that
digital signal samples correspond most closely to the velocity
of the speaker, not its position. The situation is further
complicated somewhat by the fact that typical speakers do not
themselves have a real driving-point impedance.
However, for an ``ideal'' microphone and speaker, we should get
samples proportional to speaker velocity and hence to air pressure.
Well below resonance, the real part of the radiation impedance
of the pushed air
should dominate, as long as the excursion does not exceed the linear
interval of cone displacement.

Mathematically, the spectrum can be allowed to be nonzero at some points provided that the set of all such points has measure zero in the sense of Lebesgue integration. However, such distinctions do not
arise for practical signals which are always finite in extent and
which therefore have continuous Fourier transforms. This is why we
specialize the sampling theorem to the case of continuous-spectrum
signals.

One joke along these lines, due, I'm
told, to Professor Bracewell at Stanford, is that ``since the
telephone is bandlimited to 3 kHz, and since bandlimited signals cannot
be time limited, it follows that one cannot hang up the telephone''.

Intensity is
physically power per unit area. Bels may also be defined in
terms of energy, or power which is energy per unit time.
Since sound is always measured over some
area by a microphone diaphragm, its physical power is conventionally
normalized by area, giving intensity. Similarly, the force applied
by sound to a microphone diaphragm is normalized by area to give
pressure (force per unit area).

The bar was originally defined as one ``atmosphere'' (atmospheric pressure at sea level), but now a microbar is defined to be exactly one dyne/cm$^2$, where a dyne is the amount of force required to accelerate a one-gram mass by one centimeter per second squared.

See
http://en.wikipedia.org/wiki/A-weighting
for more information, including a plot of the A weighting curve (as
well as B, C, and D weightings which can be used for louder listening
levels) and pointers to relevant standards.

Companders (compressor-expanders) essentially
``turn down'' the signal gain when it is ``loud'' and ``turn up'' the
gain when it is ``quiet''. As long as the input-output curve is
monotonic (such as a log characteristic), the dynamic-range
compression can be undone (expanded).

Computers use bits, as opposed to the more
familiar decimal digits, because they are more convenient to implement
in digital hardware. For example, the decimal numbers 0, 1, 2,
3, 4, 5 become, in binary format, 0, 1, 10, 11, 100, 101. Each
bit position in binary notation corresponds to a power of 2, e.g., $101_2 = 1\cdot 2^2 + 0\cdot 2^1 + 1\cdot 2^0$; while each digit position in decimal notation corresponds to a power of 10, e.g., $101_{10} = 1\cdot 10^2 + 0\cdot 10^1 + 1\cdot 10^0$. The term
``digit'' comes from the same word meaning ``finger.'' Since
we have ten fingers (digits), the term ``digit'' technically should be
associated only with decimal notation, but in practice it is used for
others as well. Other popular number systems in computers include
octal which is base 8 (rarely seen any more, but still
specifiable in any C/C++ program by using a leading zero, e.g., 0755 = 493 decimal = 111,101,101 binary), and hexadecimal (or simply ``hex'') which is base 16 and which employs the letters A through F to yield 16 digits (specifiable in C/C++ by starting the number with ``0x'', e.g., 0x1ED = 493 decimal = 1,1110,1101 binary). Note, however, that the representation within
the computer is still always binary; octal and hex are simply
convenient groupings of bits into sets of three bits (octal)
or four bits (hex).
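Python supports the same base notations (with 0o rather than a bare leading zero for octal), which makes the bit groupings easy to verify; the number below is one arbitrary example:

```python
n_bin = 0b111101101    # binary
n_oct = 0o755          # octal: the bits grouped in threes (111 101 101)
n_hex = 0x1ED          # hex:   the bits grouped in fours (1 1110 1101)
n_dec = 493            # the same number written in decimal

assert n_bin == n_oct == n_hex == n_dec
```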

Normally, quantization error is computed as $e(n) = x(n) - \hat{x}(n)$, where $x(n)$ is the signal being quantized, and $\hat{x}(n)$ is the quantized value, obtained by rounding to the nearest representable amplitude. Filtered error feedback instead feeds back a filtered version of this error to the quantizer input, where the filtering operation ``shapes'' the quantization noise spectrum. An excellent article on the use of round-off error feedback in audio digital filters is [18].

Another term commonly heard for ``significand''
is ``mantissa.'' However, this use of the term ``mantissa'' is not the same
as its previous definition as the fractional part of a logarithm. We will
therefore use only the term ``significand'' to avoid confusion.

By choosing the bias equal to half the numerical dynamic range of the exponent (thus effectively
inverting the sign bit of the exponent), it becomes easier to compare two
floating-point numbers in hardware: the entire floating-point word can be
treated by the hardware as one giant integer for numerical comparison
purposes. This works because negative exponents correspond to
floating-point numbers less than 1 in magnitude, while positive exponents
correspond to floating-point numbers greater than 1 in magnitude.
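This bit-pattern ordering can be checked directly (Python stdlib sketch, reinterpreting IEEE-754 doubles as unsigned integers):

```python
import struct

def float_bits(x):
    """Raw bit pattern of a double, as an unsigned 64-bit integer."""
    return struct.unpack(">Q", struct.pack(">d", x))[0]

# For positive floats, integer comparison of the raw bit patterns
# agrees with numerical comparison, thanks to the biased exponent:
vals = [0.25, 0.5, 1.0, 1.5, 2.0, 1.0e10]
bits = [float_bits(v) for v in vals]
```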

The
first by Gray and Davisson is available free online. The second by
Papoulis is a classic textbook. The two volumes by Kay provide
perhaps the most comprehensive coverage of the field. The volumes by
Scharf and Kailath represent material used for many years in the
authors' respective graduate level courses in statistical signal
processing. All of the cited authors are well known researchers and
professors in the field. It should also perhaps be noted that Book IV
[73] in the music signal processing book series (of which this
is Book I) contains a fair amount of introductory material in this area.

Alternatively, it can be extended to the complex case by defining the product so that it includes a conjugation of the elements of one operand. This
difficulty arises from the fact that matrix multiplication is really
defined without consideration of conjugation or transposition at all,
making it unwieldy to express in terms of inner products in the complex
case, even though that is perhaps the most fundamental interpretation
of a matrix multiply.

For consistency with sum and other
functions, it would be better if
length() returned the number of elements along dimension
1, with the special case of using dimension 2 (``along rows'') for
row-vectors. However, compatibility with early Matlab dictates
the convention used.

The matrix $P = A(A^{\ast}A)^{-1}A^{\ast}$ yields orthogonal projection onto the column-space of $A$ (when the matrix $A^{\ast}A$ is invertible, and here $A^{\ast}$ denotes the Hermitian (conjugating) transpose of $A$). That is, if $\hat{y}$ is the projection of $y$ onto the column-space of $A$, we must have, by definition of orthogonal projection (§5.9.9), that $\hat{y}$ lies in the column-space of $A$, and that the projection error is orthogonal to it, i.e., $\langle y-\hat{y},\,Ax\rangle = 0$ for any $x$. We may say that $I-P$ projects onto the orthogonal complement of the column-space of $A$.

That $\hat{y}$ is a linear combination of the columns of $A$ is immediate, because $A$ is the leftmost term of the definition of $P$ in Eq.(I.1). To show orthogonality of the ``projection error'', i.e., that $\langle y-\hat{y},\,Ax\rangle = 0$ for all $x$, note that in matrix notation we must show $(Ax)^{\ast}(y-\hat{y})=0$ for all $x$, which requires $A^{\ast}(y-\hat{y})=0$, or $A^{\ast}y = A^{\ast}\hat{y} = A^{\ast}Py$. Since $P^{\ast} = [A(A^{\ast}A)^{-1}A^{\ast}]^{\ast} = P$, it is Hermitian symmetric, so that the orthogonal projection requirement becomes $A^{\ast}P = A^{\ast}$, which is easily verified for $P$ as defined in Eq.(I.1).

The general property $P^2 = P$ defines an idempotent square matrix $P$. Intuitively, it makes sense that a
projection should be idempotent, because once a vector is projected
onto a particular subspace, projecting again should do nothing. All
idempotent matrices are projection matrices, and vice versa. However,
only Hermitian (symmetric) idempotent matrices correspond to orthogonal
projection [48].
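All three properties ($P^2=P$, $P^{\ast}=P$, and orthogonality of the projection error) can be verified numerically (Python/NumPy; the random complex matrix is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))
Ah = A.conj().T                        # Hermitian transpose of A

P = A @ np.linalg.inv(Ah @ A) @ Ah     # projector onto the columns of A

y = rng.standard_normal(5) + 1j * rng.standard_normal(5)
err = y - P @ y                        # projection error
# P is idempotent and Hermitian, and err is orthogonal
# to every column of A.
```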