4.2.1 Periodic functions and motivation

As motivation for studying Fourier series, suppose we have the problem

\[ x'' + \omega^2_0 x=f(t), \tag{4.2.1} \]

for some periodic function \( f(t)\). We have already solved

\[ x''+ \omega^2_0 x=F_0 \cos( \omega t). \tag{4.2.2} \]

One way to solve (4.2.1) is to decompose \(f(t)\) as a sum of cosines (and sines) and then solve many problems of the form (4.2.2). We then use the principle of superposition to sum up all the solutions we obtained and get a solution to (4.2.1).

Before we proceed, let us talk in a little more detail about periodic functions. A function is said to be periodic with period \(P\) if \( f(t+P)=f(t)\) for all \(t\). For brevity we will say \( f(t)\) is \(P\)-periodic. Note that a \(P\)-periodic function is also \(2P\)-periodic, \(3P\)-periodic, and so on. For example, \( \cos(t)\) and \( \sin(t)\) are \( 2 \pi \)-periodic. So are \( \cos(kt)\) and \( \sin(kt)\) for all integers \( k\). The constant functions are an extreme example: they are periodic for any period (exercise).
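To make the definition concrete, here is a quick numerical sketch (the helper name is mine, not the text's) checking that \( \cos(kt)\) and \( \sin(kt)\) are \(2\pi\)-periodic for integer \(k\), and that a constant is periodic for any period:

```python
import math

def is_periodic(f, P, n_samples=1000, tol=1e-9):
    """Numerically check f(t + P) == f(t) on a grid of sample points."""
    for i in range(n_samples):
        t = -10 + 20 * i / n_samples  # sample points in [-10, 10)
        if abs(f(t + P) - f(t)) > tol:
            return False
    return True

# cos(kt) and sin(kt) are 2*pi-periodic for every integer k
for k in range(1, 4):
    assert is_periodic(lambda t: math.cos(k * t), 2 * math.pi)
    assert is_periodic(lambda t: math.sin(k * t), 2 * math.pi)

# A constant function is P-periodic for any P (the "extreme example")
assert is_periodic(lambda t: 5.0, 1.2345)
```

Of course a numerical check on a grid is not a proof; it only illustrates the definition.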

Normally we will start with a function \(f(t)\) defined on some interval \( [-L, L]\), and we will want to extend it periodically to make it a \( 2L\)-periodic function. We do this extension by defining a new function \(F(t)\) such that for \(t\) in \( [-L, L]\), \( F(t)=f(t)\). For \(t\) in \( [L, 3L]\), we define \( F(t)=f(t-2L)\); for \(t\) in \( [-3L, -L]\), \( F(t)=f(t+2L)\); and so on. Here we assumed that \( f(-L)=f(L)\). We could have also started with \(f\) defined only on the half-open interval \( (-L, L]\) and then defined \( f(-L)=f(L)\).
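The extension above can be sketched in code: shift \(t\) by the right multiple of \(2L\) so it lands back in \( [-L, L)\), then evaluate \(f\) there. This is a sketch with names of my own choosing, not notation from the text:

```python
import math

def periodic_extension(f, L):
    """Given f on [-L, L], return the 2L-periodic extension F.

    Each t is shifted by a multiple of 2L so it lands in [-L, L),
    mirroring F(t) = f(t - 2L) on [L, 3L], F(t) = f(t + 2L) on [-3L, -L], etc.
    """
    def F(t):
        # floor((t + L) / (2L)) counts how many periods to shift by
        k = math.floor((t + L) / (2 * L))
        return f(t - 2 * L * k)
    return F

# Example: f(t) = 1 - t^2 on [-1, 1], extended 2-periodically
f = lambda t: 1 - t**2
F = periodic_extension(f, 1)
assert abs(F(0.5) - f(0.5)) < 1e-12   # inside [-1, 1]: unchanged
assert abs(F(2.5) - f(0.5)) < 1e-12   # 2.5 - 2 = 0.5: shifted one period
assert abs(F(-1.5) - f(0.5)) < 1e-12  # -1.5 + 2 = 0.5
```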

Example \(\PageIndex{1}\):

Define \( f(t)=1-t^2\) on \([-1, 1]\). Now extend periodically to a 2-periodic function. See Figure 4.2.

Figure 4.2: Periodic extension of the function \( 1-t^2\).

You should be careful to distinguish between \( f(t)\) and its extension. A common mistake is to assume that a formula for \( f(t)\) holds for its extension. It can be confusing when the formula for \( f(t)\) is periodic, but with perhaps a different period.

Exercise \(\PageIndex{1}\):

Define \( f(t)= \cos t\) on \([\dfrac{ - \pi}{2}, \dfrac{ \pi}{2} ]\). Take the \(\pi\)-periodic extension and sketch its graph. How does it compare to the graph of \( \cos t\)?

4.2.2 Inner product and eigenvector decomposition

Suppose we have a symmetric matrix, that is, \( A^T=A\). We have said before that the eigenvectors of \( A\) are then orthogonal. Here the word orthogonal means that if \( \vec{v}\) and \( \vec{w}\) are eigenvectors of \( A\) corresponding to distinct eigenvalues, then \( \left \langle \vec{v}, \vec{w} \right \rangle=0\). In this case the inner product \(\left \langle \vec{v}, \vec{w} \right \rangle\) is the dot product, which can be computed as \( \vec{v}^T \vec{w}\).

To decompose a vector \( \vec{v}\) in terms of mutually orthogonal vectors \( \vec{w}_1\) and \( \vec{w}_2\) we write

\[ \vec{v} = \frac{\left \langle \vec{v}, \vec{w}_1 \right \rangle}{\left \langle \vec{w}_1, \vec{w}_1 \right \rangle} \vec{w}_1 + \frac{\left \langle \vec{v}, \vec{w}_2 \right \rangle}{\left \langle \vec{w}_2, \vec{w}_2 \right \rangle} \vec{w}_2. \]
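As a numerical sketch of this decomposition (using NumPy; the \(2 \times 2\) matrix is my own example, not from the text), we can check that eigenvectors of a symmetric matrix are orthogonal and that projecting onto them recovers \( \vec{v}\):

```python
import numpy as np

# A symmetric matrix: A^T = A
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is NumPy's eigensolver for symmetric matrices
eigvals, eigvecs = np.linalg.eigh(A)
w1, w2 = eigvecs[:, 0], eigvecs[:, 1]

# Eigenvectors for distinct eigenvalues are orthogonal: <w1, w2> = 0
assert abs(float(w1 @ w2)) < 1e-12

# Decompose v: v = (<v,w1>/<w1,w1>) w1 + (<v,w2>/<w2,w2>) w2
v = np.array([3.0, -1.0])
a1 = (v @ w1) / (w1 @ w1)
a2 = (v @ w2) / (w2 @ w2)
assert np.allclose(a1 * w1 + a2 * w2, v)
```

This is exactly the computation the Fourier series mimics, with integrals of functions in place of dot products of vectors.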

4.2.3 The Trigonometric Series

Instead of decomposing a vector in terms of eigenvectors of a matrix, we will decompose a function in terms of eigenfunctions of a certain eigenvalue problem. The eigenvalue problem we will use for the Fourier series is

\[ x'' + \lambda x=0, \quad x(- \pi)=x(\pi), \quad x'(- \pi)=x'( \pi).\]

We have previously computed that the eigenfunctions are \(1, \cos(kt), \sin(kt)\). That is, we will want to find a representation of a \( 2 \pi \)-periodic function \( f(t)\) as

\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(nt) + b_n \sin(nt). \]

This series is called the Fourier series or the trigonometric series for \(f(t)\). We write the coefficient of the eigenfunction \(1\) as \( \dfrac{a_0}{2}\) for convenience. We could also think of \( 1= \cos(0t)\), so that we only need to look at \( \cos(kt)\) and \( \sin(kt)\).

As for matrices, we want to find a projection of \(f(t)\) onto the subspace generated by the eigenfunctions. So we will want to define an inner product of functions. For example, to find \( a_n\) we want to compute \( \left \langle f(t), \cos(nt) \right \rangle \). We define the inner product as

\[ \left \langle f(t), g(t) \right \rangle = \int_{-\pi}^{\pi} f(t)\, g(t)\,dt. \]

With this definition of the inner product, we have seen in the previous section that the eigenfunctions \( \cos(kt)\) (including the constant eigenfunction) and \( \sin(kt)\) are orthogonal in the sense that

\[ \left \langle \cos(mt), \cos(nt) \right \rangle = 0 \quad \text{for } m \neq n, \]
\[ \left \langle \sin(mt), \sin(nt) \right \rangle = 0 \quad \text{for } m \neq n, \]
\[ \left \langle \sin(mt), \cos(nt) \right \rangle = 0 \quad \text{for all } m \text{ and } n. \]

By elementary calculus, for \(n \geq 1\) we have \( \left \langle \cos(nt), \cos(nt) \right \rangle = \pi \) and \( \left \langle \sin(nt), \sin(nt) \right \rangle = \pi \), while \( \left \langle 1, 1 \right \rangle = 2\pi \). Just as for vectors, the coefficients are obtained by projection:

\[ a_n = \frac{\left \langle f(t), \cos(nt) \right \rangle}{\left \langle \cos(nt), \cos(nt) \right \rangle} = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \cos(nt)\,dt, \qquad b_n = \frac{\left \langle f(t), \sin(nt) \right \rangle}{\left \langle \sin(nt), \sin(nt) \right \rangle} = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t) \sin(nt)\,dt, \]

and \( a_0 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t)\,dt \) (writing the constant term as \( \frac{a_0}{2}\) makes this formula the same as for the other \( a_n\)).

Example \(\PageIndex{2}\): Take \( f(t) = t \) for \(t\) in \((- \pi, \pi]\). Extend \( f(t)\) periodically and write it as a Fourier series. This function is called the sawtooth.

Figure 4.3: The graph of the sawtooth function.

The plot of the extended periodic function is given in Figure 4.3. Let us compute the coefficients.

Solution

We start with \(a_0\),

\[ a_0 = \frac{1}{\pi} \int^\pi_{-\pi} t\,dt=0.\]

We will often use the result from calculus that says that the integral of an odd function over a symmetric interval is zero. Recall that an odd function is a function \( \varphi(t)\) such that \( \varphi(-t) = - \varphi(t)\). For example the functions \( t, \sin t\), or (importantly for us) \( t \cos(nt)\) are all odd functions. Thus

\[ a_n=\frac{1}{\pi} \int^\pi_{-\pi} t \cos(nt)\,dt=0.\]
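This odd/even bookkeeping is easy to spot-check numerically. A minimal sketch (simple midpoint rule; the helper name is my own):

```python
import math

def integral(f, a, b, steps=50000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# t*cos(nt) is odd, so its integral over the symmetric interval vanishes
for n in range(1, 4):
    assert abs(integral(lambda t: t * math.cos(n * t), -math.pi, math.pi)) < 1e-5

# t*sin(nt) is even: the symmetric integral is twice the half-interval integral
full = integral(lambda t: t * math.sin(2 * t), -math.pi, math.pi)
half = integral(lambda t: t * math.sin(2 * t), 0, math.pi)
assert abs(full - 2 * half) < 1e-5
```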

Let us move to \( b_n\). Another useful fact from calculus is that the integral of an even function over a symmetric interval is twice the integral of the same function over half the interval. Recall an even function is a function \(\varphi(t)\) such that \( \varphi(-t) = \varphi(t)\). For example \( t \sin(nt)\) is even. Thus, integrating by parts,

\[ b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} t \sin(nt)\,dt = \frac{2}{\pi} \int_{0}^{\pi} t \sin(nt)\,dt = \frac{2}{\pi} \left( \left[ \frac{-t \cos(nt)}{n} \right]_{t=0}^{\pi} + \frac{1}{n} \int_{0}^{\pi} \cos(nt)\,dt \right) = \frac{-2 \cos(n\pi)}{n} = \frac{2\,{(-1)}^{n+1}}{n}. \]

Hence the Fourier series of the sawtooth is

\[ \sum_{n=1}^{\infty} \frac{2\,{(-1)}^{n+1}}{n} \sin(nt). \]
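As a quick numerical sanity check on the sawtooth coefficients (a sketch using a simple midpoint rule; the function names are mine, not the text's), we can verify for \(f(t)=t\) on \((-\pi,\pi]\) that the cosine coefficients vanish and that \( b_n = \frac{2(-1)^{n+1}}{n} \):

```python
import math

def fourier_coeff(f, trig, n, steps=50000):
    """Approximate (1/pi) * integral over [-pi, pi] of f(t) * trig(n t)."""
    h = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):
        t = -math.pi + (i + 0.5) * h   # midpoint rule
        total += f(t) * trig(n * t)
    return total * h / math.pi

saw = lambda t: t   # the sawtooth on (-pi, pi]

for n in range(1, 5):
    a_n = fourier_coeff(saw, math.cos, n)
    b_n = fourier_coeff(saw, math.sin, n)
    assert abs(a_n) < 1e-5                               # odd integrand: a_n = 0
    assert abs(b_n - 2 * (-1) ** (n + 1) / n) < 1e-5     # 2, -1, 2/3, -1/2
```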

A similar computation applies to the square wave: take \( f(t) = 0 \) for \(t\) in \( (-\pi, 0] \) and \( f(t) = \pi \) for \(t\) in \( (0, \pi] \), and extend periodically. Its Fourier series works out to be

\[ \frac{\pi}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{n} \sin(nt). \]

This series is only an equality for such \(t\) where \(f(t)\) is continuous. That is, we do not get an equality for \(t= - \pi, 0, \pi\) and all the other discontinuities of \(f(t)\). It is not hard to see that when \(t\) is an integer multiple of \(\pi\) (which includes all the discontinuities), every \( \sin(nt)\) vanishes, so the series equals \( \frac{\pi}{2}\), the average of the two one-sided limits at the jump. If we redefine \(f(t)\) to equal \( \frac{\pi}{2}\) at \( t = -\pi, 0, \pi \) and extend periodically, the series equals this extended \(f(t)\) everywhere, including the discontinuities. We will generally not worry about changing the function values at several (finitely many) points.

We will say more about convergence in the next section. Let us, however, briefly mention an effect of the discontinuity. Let us zoom in near the discontinuity in the square wave and plot the first 100 harmonics, see Figure 4.7. You will notice that while the series is a very good approximation away from the discontinuities, the error (the overshoot) near the discontinuity at \( t= \pi\) does not seem to be getting any smaller. This behavior is known as the Gibbs phenomenon. The region where the error is large, however, does get smaller the more terms in the series we take.
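The same overshoot appears for the sawtooth series computed earlier, and it is easy to observe numerically. In the sketch below (the sampling window and the bounds are numerical observations of mine, not claims from the text), the partial sums near the jump at \(t = \pi\) exceed the sawtooth's maximum value \(\pi\) by a roughly constant amount no matter how many terms we take:

```python
import math

def sawtooth_partial_sum(t, N):
    """Sum of the first N harmonics: sum of 2(-1)^(n+1)/n * sin(n t)."""
    return sum(2 * (-1) ** (n + 1) / n * math.sin(n * t) for n in range(1, N + 1))

def max_near_jump(N, samples=1000):
    """Maximum of the partial sum on a window just left of the jump at t = pi."""
    return max(sawtooth_partial_sum(math.pi - 0.5 * i / samples, N)
               for i in range(1, samples + 1))

# The sawtooth itself never exceeds pi, but the partial sums overshoot by a
# roughly constant amount (about 0.5 here) as N grows: the Gibbs phenomenon.
for N in (50, 100, 400):
    m = max_near_jump(N)
    assert math.pi + 0.4 < m < math.pi + 0.6
```

The location of the overshoot does move closer to the jump as \(N\) grows, which matches the remark that the region of large error shrinks even though the overshoot itself does not.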

Figure 4.7: Gibbs phenomenon in action.

We can think of a periodic function as a “signal”: a superposition of many signals of pure frequency. For example, we could think of the square wave as a tone of a certain base frequency. It is, in fact, a superposition of many pure tones whose frequencies are multiples of the base frequency. On the other hand, a simple sine wave is only the pure tone. The simplest way to make sound using a computer is with a square wave, and the sound is very different from a nice pure tone. If you have played video games from the 1980s or so, then you have heard what square waves sound like.


The LibreTexts libraries are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot, with previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0.