for arbitrary functions f(x) and g(x), that is, where the discrete inner product is the (N + 1)-point
Gaussian quadrature approximation to the integral inner product.

PROOF: The product of two polynomials, each of degree at most N, is of degree at most
2N. Gaussian quadrature with (N + 1) points, however, is exact for polynomials of degree
(2N + 1) or less. Therefore, Gaussian quadrature evaluates without error the integrals that
express the mutual orthogonality of the first (N + 1) basis functions among themselves.
Q.E.D.
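The exactness argument above is easy to confirm numerically. For Chebyshev polynomials the Gauss quadrature nodes and weights have simple closed forms, so the following sketch (our illustration, not code from the text) checks that the discrete inner product reproduces the mutual orthogonality of the first (N + 1) basis functions:

```python
import math

# Chebyshev-Gauss rule for weight 1/sqrt(1 - x^2) on [-1, 1]:
#   x_j = cos(pi*(2j+1)/(2(N+1))),  w_j = pi/(N+1)  for j = 0, ..., N.

def chebyshev_gauss(N):
    """Nodes and weights of the (N+1)-point Chebyshev-Gauss rule."""
    nodes = [math.cos(math.pi * (2 * j + 1) / (2 * (N + 1))) for j in range(N + 1)]
    weights = [math.pi / (N + 1)] * (N + 1)
    return nodes, weights

def T(n, x):
    """Chebyshev polynomial T_n(x) = cos(n*arccos(x))."""
    return math.cos(n * math.acos(x))

def discrete_inner(m, n, N):
    """Discrete inner product (T_m, T_n)_G using the (N+1)-point rule."""
    xs, ws = chebyshev_gauss(N)
    return sum(w * T(m, x) * T(n, x) for x, w in zip(xs, ws))

# T_m * T_n has degree m + n <= 2N, so the rule integrates it exactly:
# the discrete inner product equals pi*delta_mn (pi for m = n = 0, pi/2 else).
N = 6
for m in range(N + 1):
    for n in range(N + 1):
        exact = 0.0 if m != n else (math.pi if m == 0 else math.pi / 2)
        assert abs(discrete_inner(m, n, N) - exact) < 1e-12
```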
4.4. PSEUDOSPECTRAL IS GALERKIN METHOD VIA QUADRATURE 91

then the property of exponential convergence implies that for large N, E_T(N; x) will be
very small. Precisely how small depends upon f(x) and N, of course, but the virtue of
exponential convergence is that it is not a great deal more difficult to obtain 15 decimal
places of accuracy than it is to get 1% accuracy. (For ln(1 + x) on x ∈ [0, 1], for example,
N = 19 gives a relative error of at worst 1 part in 10^15!) Theorem 17 implies that

a_{n,G} \equiv (f, \phi_n)_G \qquad (4.32)

must evaluate to

a_{n,G} = a_n + (E_T, \phi_n)_G \qquad (4.33)

In words, the discrete inner product (that is, Gaussian integration) correctly and exactly
integrates that part of (f, φ_n) which is due to the truncated series portion of f, and all the
error in the integration comes from the truncation error in the (N + 1)-term approxima-
tion. If E_T ≈ 1/10^15, it is obvious that the difference between a_{n,G} and the exact spectral
coefficient a_n must be of this same ridiculously small magnitude. (The exceptions are the
coefficients whose degree n is close to the truncation limit N. The absolute error in ap-
proximating these tiny coefficients by quadrature is small, but the relative error may not
be small.) However, Gaussian integration is a well-conditioned numerical procedure, so
the absolute errors in the {a_n} will always be small for all n. It follows that if we blindly
calculate (N + 1) spectral coefficients via (N + 1)-point Gaussian quadrature and then sum
to calculate f(x), we will always get very high accuracy for f(x) if N is sufficiently large.
Thus, to represent a known function f(x) as a spectral series, we can safely dump our
table of integrals and simply multiply the grid point values of f(x) by a square matrix:

\mathbf{a} \approx \mathbf{M}\,\mathbf{f} \qquad (4.34)

where a is the column vector of spectral coef´¬ücients and where the elements of the other
two matrices are

M_{ij} \equiv \phi_i(x_j)\, w_j\,; \qquad f_j \equiv f(x_j) \qquad (4.35)

where the {x_j} are the (N + 1) Gaussian quadrature abscissas (the roots of φ_{N+1}(x))
and the w_j are the corresponding quadrature weights. Note that if the basis functions are
not orthonormal, we should divide the i-th row of M by (φ_i, φ_i). This transformation from
the grid point values f(x_j) to the corresponding spectral coefficients a_j is discussed further
in Chapter 5, Sec. 5, and Chapter 10, Sec. 4, as the "Matrix Multiplication Transformation"
(MMT).
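As an illustration (our own sketch, not code from the text), here is the MMT of (4.34) and (4.35) specialized to Chebyshev polynomials, including the row normalization by (φ_i, φ_i) just mentioned; f(x) = e^x is an arbitrary smooth test function of our choosing:

```python
import math

def mmt_coefficients(f, N):
    """a_i = sum_j T_i(x_j) * w_j * f(x_j) / (T_i, T_i), Chebyshev-Gauss rule."""
    xs = [math.cos(math.pi * (2 * j + 1) / (2 * (N + 1))) for j in range(N + 1)]
    w = math.pi / (N + 1)                          # equal Chebyshev-Gauss weights
    coeffs = []
    for i in range(N + 1):
        norm = math.pi if i == 0 else math.pi / 2  # (T_i, T_i) for Chebyshev
        coeffs.append(sum(w * math.cos(i * math.acos(x)) * f(x) for x in xs) / norm)
    return coeffs

def eval_series(coeffs, x):
    """Sum a_n T_n(x) directly (in practice one would use a recurrence)."""
    return sum(a * math.cos(n * math.acos(x)) for n, a in enumerate(coeffs))

# The Chebyshev coefficients of exp(x) decay exponentially, so a modest N
# already recovers f to near machine precision, as the text asserts.
a = mmt_coefficients(math.exp, 16)
for x in (-0.9, 0.0, 0.7):
    assert abs(eval_series(a, x) - math.exp(x)) < 1e-12
```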
When solving a differential equation, we merely apply the same procedure as in (4.34)
and (4.35) to approximate the residual function R(x; a_0, a_1, . . . , a_N) by interpolation. Again,
if N is large enough so that the coefficients r_n in the residual series are small for n > N, the
error in using Gaussian quadrature instead of exact integration to compute the r_n must be
small.
Although we have referred to "polynomials" throughout this section, Theorems 16 and
17 not only apply to all the standard types of orthogonal polynomials but to trigonometric

polynomials (that is, truncated Fourier series) as well. For most types of polynomials, it
is difficult to estimate the error in (4.34) except to conclude, as we have already done, that
if N is large, this error will be absurdly small. Numerical experience has confirmed that
the pseudospectral method is a very accurate and robust procedure for Hermite functions,
Jacobi polynomials and so on.
For Fourier series and Chebyshev polynomials, however, it is possible to analyze the
error in approximating coefficient integrals by Gaussian quadrature (4.34) in a simple way.
This "aliasing" analysis turns out to be essential in formulating the famous "Two-Thirds
Rule" for creating "un-aliased" pseudospectral codes for nonlinear fluid mechanics prob-
lems, so it is the topic of the next section.
We began this chapter with a discussion of interpolation and then turned to inner prod-
ucts and numerical integration. The next theorem shows that these two themes are inti-
mately connected.
Theorem 18 (INTERPOLATION BY QUADRATURE)
Let P_N(x) denote that polynomial of degree N which interpolates to a function f(x) at the
(N + 1) Gaussian quadrature points associated with a set of orthogonal polynomials {φ_n(x)}:
P_N(x_i) = f(x_i), \qquad i = 0, 1, \ldots, N \qquad (4.36)
P_N(x) may be expanded without error as the sum of the first (N + 1) φ_n(x) because it is merely a
polynomial of degree N. The coefficients {a_n} of this expansion
P_N(x) = \sum_{n=0}^{N} a_n \phi_n(x) \qquad (4.37)

are given WITHOUT APPROXIMATION by the discrete inner product
a_n = \frac{(f, \phi_n)_G}{(\phi_n, \phi_n)_G} \qquad (4.38)
that is to say, are precisely the coefficients calculated by (4.34) and (4.35) above, the Matrix Multi-
plication Transform (MMT).
PROOF: Since the interpolating polynomial is a polynomial, and since Theorem 17 shows
that the discrete inner product preserves the orthogonality of the first (N + 1) basis func-
tions among themselves, it is obvious that applying the discrete inner product to the finite
series (4.37) will exactly retrieve the coefficients a_n. What is not obvious is that we will
compute the same coefficients when we use f(x) itself in the inner product, since f(x) is
not a polynomial, but a function that can be represented only by an infinite series.
However, the Gaussian integration uses only the (N + 1) values of f(x) at the interpola-
tion points, and these are the same as the values of P_N(x) at those points. Thus, when we
use Gaussian quadrature to approximate f(x), we are really expanding the interpolating
polynomial P_N(x) instead. Q.E.D.
Since it is easy to sum a truncated spectral series like (4.37) by recurrence, it is far more
efficient to perform Lagrangian interpolation by calculating the coefficients as in (4.34) and
(4.35) than it is to use the cardinal function representation (4.6), even though the two are
mathematically identical (ignoring round-off error).
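For Chebyshev series, the summation-by-recurrence alluded to here is commonly done with the Clenshaw algorithm, which evaluates the sum without ever forming the T_n(x) explicitly. A minimal sketch (our illustration, with arbitrary example coefficients):

```python
import math

def clenshaw_chebyshev(coeffs, x):
    """Evaluate sum_{n=0}^{N} a_n T_n(x) by the Clenshaw backward recurrence:
    b_k = a_k + 2x*b_{k+1} - b_{k+2}, then f(x) = a_0 + x*b_1 - b_2."""
    b1 = b2 = 0.0
    for a in reversed(coeffs[1:]):          # k = N, N-1, ..., 1
        b1, b2 = 2.0 * x * b1 - b2 + a, b1
    return x * b1 - b2 + coeffs[0]

# Cross-check against the direct sum using T_n(x) = cos(n*arccos(x)).
coeffs = [1.0, -0.5, 0.25, 0.125, -0.0625]
for x in (-0.8, 0.1, 0.9):
    direct = sum(a * math.cos(n * math.acos(x)) for n, a in enumerate(coeffs))
    assert abs(clenshaw_chebyshev(coeffs, x) - direct) < 1e-12
```

The recurrence needs only one multiplication by 2x per term and avoids the trigonometric evaluations of the direct sum.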
One must be a bit careful to understand just what the theorem means. The coefficients
computed by Gaussian integration are not the exact spectral coefficients of f(x), but only
good approximations. The pseudospectral coefficients are the exact expansion coefficients
only of P_N(x), the interpolating polynomial. For large N, however, such subtleties are
academic: P_N(x) is a ridiculously good approximation to f(x), and therefore its coefficients
are exceedingly good approximations to those of f(x).
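This distinction can be seen directly in a small sketch (ours, not from the text): even for small N, the series built from the quadrature coefficients agrees with f(x) exactly at the quadrature nodes, because it is the expansion of P_N(x), while between the nodes it is only an approximation. Here f(x) = 1/(1 + x^2) is an arbitrary non-polynomial test function:

```python
import math

def cheb_series_from_quadrature(f, N):
    """Chebyshev coefficients via the (N+1)-point Chebyshev-Gauss rule (4.38)."""
    xs = [math.cos(math.pi * (2 * j + 1) / (2 * (N + 1))) for j in range(N + 1)]
    w = math.pi / (N + 1)
    a = [sum(w * math.cos(n * math.acos(x)) * f(x) for x in xs)
         / (math.pi if n == 0 else math.pi / 2) for n in range(N + 1)]
    return xs, a

f = lambda x: 1.0 / (1.0 + x * x)
xs, a = cheb_series_from_quadrature(f, 4)          # deliberately small N
PN = lambda x: sum(an * math.cos(n * math.acos(x)) for n, an in enumerate(a))

for x in xs:                          # exact agreement at the quadrature nodes
    assert abs(PN(x) - f(x)) < 1e-12
assert abs(PN(0.5) - f(0.5)) > 1e-6   # but only approximate between nodes
```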
4.5. PSEUDOSPECTRAL ERRORS 93

PROOF: Young and Gregory (1972).
The factor of (1/2) multiplying both a_0 and α_0 is a convention. The reason for it is that
(cos[nx], cos[nx]) = (sin[nx], sin[nx]) = π for any n > 0, but (1, 1) = 2π. There are two
ways of dealing with this factor of 2. One, which is normal in working with a Fourier series,
is to simply insert a denominator of 1/(2π) in front of the integral that gives the constant
in the Fourier series and a factor of 1/π in front of all the other coefficient integrals. The
alternative, which is used in the theorem, is to employ a factor of 1/π in the definition of all
coefficients (which means computing a coefficient a_0 which is a factor of two larger than
the constant in the Fourier series) and then inserting the compensating factor of (1/2)
directly into the Fourier sum (4.40) or (4.41).
This second convention is quite popular with the pseudospectral crowd because it elim-
inates factors of 2 from (4.44) as well as from (4.43).
CHAPTER 4. INTERPOLATION, COLLOCATION & ALL THAT
94

Eq. (4.43) is labelled the "Trapezoidal Rule" because it is equivalent to applying that
simple integration formula to the Fourier coefficient integrals.¹ The Trapezoidal Rule is a
very crude approximation with a relative accuracy of only O(h²) for general, that is to say,
for non-periodic functions. For periodic f(x), however, the Trapezoidal Rule is equivalent to
Gaussian integration. Unlike the optimum quadrature methods associated with Legendre
or Hermite polynomials, it is not necessary to look up the weights and abscissas in a table.
The weights (with the convention of the factor of (1/2) in (4.40) and (4.41)) are w_k ≡ 2/N for
all k, and the interpolation points are evenly spaced as in (4.39). (Parenthetically, note that
in computing Fourier transforms, the Trapezoidal Rule also gives accuracy that increases
exponentially with N for sufficiently nice functions.)
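The spectral accuracy of the equal-weight Trapezoidal Rule for periodic integrands is easy to observe numerically. A small sketch (our own; the test integrand exp(cos x) and its exact integral 2π I_0(1), with I_0 the modified Bessel function, are not from the text):

```python
import math

def trapezoid_periodic(f, N):
    """N-point equal-weight rule on [0, 2*pi): periodicity lets the two
    half-weight endpoints merge into a single full-weight point."""
    h = 2.0 * math.pi / N
    return h * sum(f(j * h) for j in range(N))

# Reference value 2*pi*I_0(1), with I_0(1) from its rapidly convergent
# power series I_0(z) = sum_k (z^2/4)^k / (k!)^2.
I0 = sum((0.25 ** k) / math.factorial(k) ** 2 for k in range(25))
exact = 2.0 * math.pi * I0

errs = [abs(trapezoid_periodic(lambda x: math.exp(math.cos(x)), N) - exact)
        for N in (4, 8, 16)]
assert errs[0] > errs[1] > errs[2]   # error collapses as N doubles
assert errs[2] < 1e-14               # round-off level already at N = 16
```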
Most of Theorem 19 merely repeats the trigonometric equivalent of the interpolation-
at-the-Gaussian-abscissas ideas of the previous section, but (4.44) is something remarkably
different. Theorems 17 and 18 imply that for any set of orthogonal polynomials,
a_n = \alpha_n + \sum_{j=N+1}^{\infty} e_{n,j}(N)\, \alpha_j \qquad (4.45)

where the {α_n} are the exact spectral coefficients and where the {a_n, n = 0, 1, . . . , N} are
the coefficients of the interpolating polynomial. What is surprising about (4.44) is that it
shows that for Fourier series (and, through change-of-variable, Chebyshev polynomials),
only one e_{n,j} out of each group of (N/2) is different from 0.
This simple fact turns out to be profoundly important for coping with "nonlinear alias-
ing instability", which is a vice of both finite difference and pseudospectral hydrodynamic
codes. For pseudospectral algorithms and quadratic nonlinearity, aliasing can be cured by
evaluating the nonlinear products using 2N interpolation points [double the number of
terms in the series] so that the expansion of the nonlinear term can be computed exactly.
S. A. Orszag pointed out that this is wasteful for Fourier and Chebyshev methods. The
special form of the Gaussian quadrature error in (4.44) makes it possible to use only about
(3/2)N points instead of 2N. This has become known as the "3/2's Rule" or "Two-Thirds
Rule", where the latter name reflects that the yield of coefficients is only (2/3) the number of
points used for quadrature. Unfortunately, the Two-Thirds Rule applies only for these two
types of basis functions: Fourier and Chebyshev. We will discuss the "Two-Thirds Rule" in
some detail in Chapter 11.
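The grid-level mechanism behind aliasing can be seen directly: on an N-point uniform grid, wavenumber k + N is indistinguishable from wavenumber k, which is how a quadratic product folds high wavenumbers back onto the retained ones. A tiny sketch (our own, with arbitrary N and k):

```python
import math

# On the uniform grid x_j = 2*pi*j/N, exp(i*N*x_j) = exp(2*pi*i*j) = 1,
# so the modes with wavenumbers k and k + N take identical grid values.
N = 8
grid = [2.0 * math.pi * j / N for j in range(N)]
k = 3
for x in grid:
    assert abs(math.cos((k + N) * x) - math.cos(k * x)) < 1e-12
    assert abs(math.sin((k + N) * x) - math.sin(k * x)) < 1e-12
```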
Another payoff of (4.44) is a proof of the following.

and f(π) by one-half the weight used for the interior points. Because of the periodicity, however, f(−π) ≡ f(π),
so it suffices to weight all the points the same, and use only f(π).