LPCs using the Covariance Method in Julia

Linear prediction coefficients (LPCs) are used a lot in signal processing, and the theory behind them is covered in other places (see here for an intuitive explanation). In this post I will go over how to implement LPCs using the covariance method in Julia. Given a signal \(x\), the job of the LPCs is to predict the current sample, \(x(n)\), using a weighted sum of previous samples:

\[ x(n) = -\sum_{i=1}^L a(i) x(n-i) + e(n)\]
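To make the model concrete, here is a minimal Julia sketch (the helper names are mine, not from any library). Note the sign convention: I write the predictor with a leading minus, since that is the convention behind the \(-\phi_{k,0}\) right-hand side of the covariance equations further down; with a plain sum the coefficients would simply flip sign.

```julia
# Hypothetical helpers: one-step prediction and its residual e(n).
# The leading minus matches the sign convention behind the covariance
# equations below; with a plain sum, the coefficients a would flip sign.
predict(x, a, n) = -sum(a[i] * x[n - i] for i in eachindex(a))
residual(x, a, n) = x[n] - predict(x, a, n)

x = [1.0, 0.5, 0.25, 0.125]  # satisfies x(n) = 0.5 x(n-1) exactly
residual(x, [-0.5], 4)        # → 0.0
```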

Ideally, \(e(n)\), the error, would be zero, but usually it is not. As a result we want values of \(a(i)\) that make \(e(n)\) as small as possible. The autocorrelation method resulted in the Yule-Walker equations, which have a nice structure (the resulting matrix is Toeplitz). The covariance method results in a similar set of equations, but the resulting matrix is not Toeplitz, so we can't use the Levinson-Durbin algorithm to solve the system. This means we have to rely on good old matrix inversion (or, in practice, a general linear solve). The covariance equations we have to solve look like this:

\[\sum_{i=1}^L \phi_{i,k} a(i) = -\phi_{k,0} \qquad 1 \leq k \leq L\]

where the \(\phi\)s are defined as:

\[ \phi_{i,k} = \dfrac{1}{N-L} \sum_{n=L}^{N-1} x(n-k)x(n-i)\]
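This definition, and the solve itself, translate almost directly into Julia. Here is a sketch; `lpc_cov` is a name I made up rather than a library function, and the indices are shifted by one for Julia's 1-based arrays:

```julia
# Sketch of the covariance method; `lpc_cov` is not a library function.
# The math is written 0-based, so sample x(n) lives at x[n + 1].
function lpc_cov(x::AbstractVector{<:Real}, L::Integer)
    N = length(x)
    # ϕ(i, k) = 1/(N - L) * Σ_{n = L}^{N-1} x(n - k) x(n - i)
    ϕ(i, k) = sum(x[n - k + 1] * x[n - i + 1] for n in L:(N - 1)) / (N - L)
    Φ = [ϕ(i, k) for k in 1:L, i in 1:L]  # row k, column i; not Toeplitz in general
    rhs = [-ϕ(k, 0) for k in 1:L]         # the -ϕ_{k,0} right-hand sides
    return Φ \ rhs                        # a linear solve, no explicit inverse
end
```

Note the `Φ \ rhs` at the end: Julia's backslash picks a suitable factorization, which is both faster and more accurate than forming `inv(Φ)` explicitly.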

Here \(N\) is the length of the signal we are working with, and \(L\) is the order of the model we want. The matrix equation that results looks like this: