Adaptive filters are used when:
- the necessary frequency response is not known beforehand
- the necessary frequency response varies with time

An adaptive filter is a digital filter that automatically changes its characteristics (e.g. frequency response) by optimizing its internal parameters.

In wireless communications, adaptive signal processing is used in many ways, for example:
- Channel equalization
- Channel estimation
- Voice coding
- Interference cancelling

The coefficient vector w(k) is updated on each iteration. The output signal y(k) is an estimate of d(k).


How to choose the desired signal - example 1

System identification:

- Desired signal is the output of the unknown system
- When the output MSE is minimized, the adaptive filter represents a model for the unknown system
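As a concrete illustration, here is a minimal Python sketch of system identification (not from the notes). It assumes the LMS update w(k+1) = w(k) + 2µ e(k) x(k), which follows from the steepest-descent idea discussed later; the unknown system h_true, the step size, and the signal length are all illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown system to be identified (coefficients made up).
h_true = np.array([0.5, -0.3, 0.1])
N = len(h_true)

x = rng.standard_normal(5000)            # white input signal
d = np.convolve(x, h_true)[:len(x)]      # desired signal = unknown system output

w = np.zeros(N)                          # adaptive filter coefficients
mu = 0.01                                # LMS step size (illustrative)
for k in range(N, len(x)):
    xk = x[k - N + 1:k + 1][::-1]        # current data vector x(k)
    e = d[k] - w @ xk                    # error e(k) = d(k) - y(k)
    w = w + 2 * mu * e * xk              # LMS coefficient update

print(w)  # approaches h_true when the output MSE is minimized
```

When the output MSE is minimized, w matches h_true, i.e. the adaptive filter has become a model of the unknown system.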

Channel equalization:
- Desired signal is a delayed version of the original signal:
  » Initially a training signal
  » Later received data (feedback after decisions)
- When the output MSE is minimized, the adaptive filter represents an inverse model (equalizer) of the channel
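A sketch of training-based equalization in the same LMS style (not from the notes). The channel taps, equalizer length, and decision delay below are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

s = rng.choice([-1.0, 1.0], size=20000)        # transmitted training symbols
channel = np.array([1.0, 0.4, 0.2])            # hypothetical channel (made up)
x = np.convolve(s, channel)[:len(s)]           # received signal = equalizer input

N, delay = 11, 5                               # equalizer length and decision delay
w = np.zeros(N)
mu = 0.005
for k in range(N, len(s)):
    xk = x[k - N + 1:k + 1][::-1]
    d = s[k - delay]                           # desired = delayed original (training)
    e = d - w @ xk
    w = w + 2 * mu * e * xk

# After training, w approximates an inverse model of the channel:
y = np.convolve(x, w)[:len(s)]
ser = np.mean(np.sign(y[delay:]) != s[:len(s) - delay])
print("symbol error rate after equalization:", ser)
```

The delay matters because the cascade of the channel and the equalizer can only approximate a delayed identity, not an instantaneous one.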

Noise cancellation:
- Desired signal: the signal x(k) corrupted by noise n1(k)
- Input signal: another noise signal n2(k), which correlates with n1(k)
- Error signal e(k) contains the part of the desired signal that does not correlate with the input signal, i.e. an enhanced version of x(k)
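A minimal noise-cancellation sketch along the same lines (all constants illustrative): the filter sees only the reference noise n2(k), learns the path from n2(k) to n1(k), and the error output is the enhanced signal.

```python
import numpy as np

rng = np.random.default_rng(2)

t = np.arange(20000)
x = np.sin(2 * np.pi * 0.01 * t)                 # signal of interest
n2 = rng.standard_normal(len(t))                 # reference noise = filter input
n1 = np.convolve(n2, [0.8, -0.4, 0.2])[:len(t)]  # correlated noise corrupting x
d = x + n1                                       # desired signal (primary input)

N, mu = 8, 0.005
w = np.zeros(N)
e_out = np.zeros(len(t))
for k in range(N, len(t)):
    nk = n2[k - N + 1:k + 1][::-1]
    y = w @ nk                                   # estimate of n1(k)
    e_out[k] = d[k] - y                          # error = enhanced version of x(k)
    w = w + 2 * mu * e_out[k] * nk               # LMS update

print("residual noise power:", np.mean((e_out[1000:] - x[1000:]) ** 2))
```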

An optimal weight vector w_o for the MSE objective function is called the (FIR) Wiener filter. It minimizes the MSE between y(k) and d(k). Solving for the Wiener filter w_o is straightforward:
- The MSE objective function for a filter with fixed coefficients w:

F[e(k)] = E[e^2(k)] = E[d^2(k)] - 2 w^T p + w^T R w

- To minimize, we need the gradient vector g_w:

g_w = ∇_w F[e(k)] = -2p + 2Rw

- Set the gradient vector to zero and solve for w:

w_o = R^{-1} p

R = E[x(k) x^T(k)] is the input signal correlation matrix; p = E[d(k) x(k)] is the cross-correlation vector between the desired and input signals.
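The closed-form solution can be checked numerically. The sketch below (illustrative system and signal lengths again) estimates R and p by time-averaging and solves w_o = R^{-1} p directly:

```python
import numpy as np

rng = np.random.default_rng(3)
h_true = np.array([0.5, -0.3, 0.1])          # hypothetical system (made up)
N = len(h_true)

x = rng.standard_normal(100000)
d = np.convolve(x, h_true)[:len(x)]

# Build the data vectors x(k) and estimate R = E[x x^T], p = E[d x] by averaging.
X = np.stack([x[k - N + 1:k + 1][::-1] for k in range(N, len(x))])
D = d[N:len(x)]
R = X.T @ X / len(D)
p = X.T @ D / len(D)

w_o = np.linalg.solve(R, p)                  # Wiener solution w_o = R^{-1} p
print(w_o)                                   # close to h_true
```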

Idea: approach the Wiener solution by searching in the direction opposite to the gradient vector g_w
- a "steepest-descent" based algorithm
- the step size µ is used to control the rate of convergence

Remember, the gradient of the MSE function is given as g_w = -2p + 2Rw.
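A minimal sketch of the steepest-descent iteration on a toy quadratic MSE surface, with R and p chosen arbitrarily for illustration; each step moves opposite to g_w = -2p + 2Rw:

```python
import numpy as np

# Toy second-order problem (R and p values made up for illustration).
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])                   # input correlation matrix
p = np.array([0.7, 0.3])                     # cross-correlation vector

w = np.zeros(2)
mu = 0.1                                     # step size controls convergence rate
for _ in range(200):
    g = -2 * p + 2 * R @ w                   # gradient of the MSE function
    w = w - mu * g                           # step opposite to the gradient

print(w, np.linalg.solve(R, p))              # converges to the Wiener solution
```

The LMS algorithm replaces the exact gradient with an instantaneous estimate built from the current data vector and error, which is what the loops in the earlier sketches use.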

Idea: adjust the LMS step size on each iteration. In the normalized LMS (NLMS), the step size µ is normalized by the energy of the data vector:

µ_NLMS = µ̃ / (x^T x + γ)

- 0 < µ̃ < 2 is a fixed convergence factor. Controls misadjustment and convergence speed.
- γ is a small number used to avoid very large step sizes.

Effects of the normalization:
- Makes the algorithm independent of signal scaling
- Converges usually much faster than the LMS
- Computational complexity slightly higher than the LMS
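A sketch of the NLMS update under the above definitions (µ̃, γ, and the test system are illustrative). The input is deliberately scaled by a large factor: the normalization keeps the effective step size well behaved anyway.

```python
import numpy as np

rng = np.random.default_rng(4)
h_true = np.array([0.5, -0.3, 0.1])          # hypothetical system (made up)
N = len(h_true)

x = 10.0 * rng.standard_normal(5000)         # note the large input scaling
d = np.convolve(x, h_true)[:len(x)]

w = np.zeros(N)
mu_bar, gamma = 0.5, 1e-6                    # 0 < mu_bar < 2, small gamma
for k in range(N, len(x)):
    xk = x[k - N + 1:k + 1][::-1]
    e = d[k] - w @ xk
    mu = mu_bar / (xk @ xk + gamma)          # normalized step size
    w = w + mu * e * xk                      # NLMS coefficient update

print(w)  # converges regardless of the input scaling
```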

- The matrix inversion lemma is used to update the inverse of R, resulting in a lower computational complexity.
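For reference, a sketch of the conventional O[N^2] RLS recursion (forgetting factor and constants illustrative): the gain-vector and P-matrix updates below are one standard application of the matrix inversion lemma, so the inverse of R is propagated without ever inverting a matrix.

```python
import numpy as np

rng = np.random.default_rng(5)
h_true = np.array([0.5, -0.3, 0.1])          # hypothetical system (made up)
N = len(h_true)

x = rng.standard_normal(2000)
d = np.convolve(x, h_true)[:len(x)]

lam = 0.99                                   # forgetting factor (illustrative)
P = 100.0 * np.eye(N)                        # P(k) = inverse of the weighted R
w = np.zeros(N)
for k in range(N, len(x)):
    xk = x[k - N + 1:k + 1][::-1]
    # Matrix inversion lemma: update P = R^{-1} with no explicit inversion.
    g = P @ xk / (lam + xk @ P @ xk)         # gain vector
    e = d[k] - w @ xk                        # a priori error
    w = w + g * e                            # coefficient update
    P = (P - np.outer(g, xk @ P)) / lam      # inverse correlation matrix update

print(w)  # converges to h_true within a few tens of samples
```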


Properties of the RLS algorithm

Provides a (weighted) least-squares solution; in other words, it finds the minimum of the WLS objective function.

Fast convergence
- typically an order of magnitude faster than the LMS

High computational complexity

- Basic version: O[N^2]
- When the input consists of delayed versions of the same signal (such as our x), the computational complexity can drop to O[N]
- In the fast O[N] algorithms, the weight vector w is not typically available, i.e. you only get e(k). A problem in certain applications (e.g. system identification)

The FQR-RLS is an example of a fast RLS algorithm with computational complexity of order O[N]. In adaptive noise cancellation, fast RLS algorithms can be used, since we don't need to know the coefficient vector w.