Wednesday, 13 August 2014

Pole Dancing

I have mentioned in previous posts that the frequency, phase, and impulse responses of a filter are inextricably tied together. Filter designers need to know exactly how and why these responses are linked if they are going to be able to design effective filters, which means that a mathematical model for filters is required. Such a model needs to be able to describe both digital and analog filters equally well; indeed a fundamentally correct model should have precisely that property. But what does that have to do with pole dancing? Read on…

Some of the most challenging mathematical problems are routinely addressed by the trick of ‘transformation’. You re-state the problem by ‘transforming’ the data from one frame of reference to another. A transformation is any change made to a system such that everything in the transformed system corresponds to something unique in the original system, and vice versa. You would do this because the problem, when expressed in terms of the new frame of reference, becomes soluble. The general goal is to find an appropriate transformation, one which expresses the problem in a form within which the solution can be identified. Take, for example, the problem of finding the cube root of a number. Not easy to do directly. But if you ‘transform’ the number to its logarithm, the cube root is found simply by dividing the logarithm by three, and transforming back.
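The cube root trick can be sketched in a few lines of code (the number 1728 is just an arbitrary example):

```python
import math

x = 1728.0          # we want the cube root of this number

# Transform: in the logarithmic frame of reference, taking a cube
# root becomes a simple division by three.
log_x = math.log(x)
log_root = log_x / 3.0

# Transform back: exponentiate to return to the original domain.
root = math.exp(log_root)

print(root)         # ~12.0, since 12 cubed is 1728
```

This is, of course, exactly how slide rules and log tables were used for centuries before electronic calculators arrived.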

Many of the most challenging problems in mathematics are ultimately addressed by transformation. A little over 20 years ago, Fermat’s Last Theorem remained unproven. This theorem simply states that no three positive integers exist such that the cube of one of them is equal to the sum of the cubes of the other two (and likewise for other powers higher than three). Although a simple problem that anyone can understand, it was ultimately solved by Andrew Wiles by employing the most fantastical transformation to express the problem in terms of constructs called “elliptic curves” and “modular forms”, and solving the equivalent expression of the problem in that space, itself a gargantuan challenge. This is perhaps the most extreme example of a transformation, one which takes a concept which most laymen would have no trouble understanding, and renders it in a form accessible only to the most seriously skilled of experts.

At a simpler level, the Fourier Transform is an example understood by most audiophiles. By applying it to data representing a musical signal, we end up with a finely detailed representation of the spectral content of the music. This is information which is not readily apparent from inspection of the original music signal, and renders the music in such a form that we can analyze and manipulate its spectral content, which we could not do with the waveform alone. At the same time, the Fourier Transformed representation does not allow us to play the music, or inspect it for artifacts such as clipping or level setting.
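As a rough sketch of the idea, here is a toy ‘musical’ signal - two sine tones mixed together, with frequencies and sample rate chosen purely for illustration - analyzed with NumPy’s FFT:

```python
import numpy as np

fs = 8000                          # sample rate in Hz (illustrative choice)
t = np.arange(fs) / fs             # one second of time samples
# A toy 'musical' signal: a 440 Hz tone plus a quieter 880 Hz tone.
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# The Fourier Transform reveals the spectral content...
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1/fs)

# ...and the strongest spectral peak falls at the louder tone.
peak = freqs[np.argmax(np.abs(spectrum))]
print(peak)                        # 440.0
```

Neither tone’s frequency is obvious from staring at the raw waveform, but both leap out of the transformed representation - while, conversely, you cannot ‘listen to’ the spectrum.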

Another problem for audiophiles is the design of filters. Filters are crucial to every aspect of audio reproduction. They are used in power supplies to help turn AC power into DC rail voltages. They are used to restrict gain stages to frequencies at which they don’t oscillate. They are used to prevent DC leakage from one circuit to another. They are used in loudspeaker crossovers to ensure drive units are asked to play only those frequencies for which they were designed. They are used on LP cutting lathes to suppress bass frequencies and enhance treble frequencies - and in phono preamplifiers to correct for it. And they are widely used in digital audio for a number of different purposes.

Filter design and analysis is surprisingly tricky, and this is where this post starts to get hairy. Modern filter theory makes use of a transformation called the z-Transform. This is closely related to the Fourier Transform (for sampled signals, the Fourier Transform is in effect the z-Transform evaluated over one specific subset of z-space). The z-Transform takes an audio signal and transforms it into a new representation on a two-dimensional surface called z-space. This two-dimensionality arises because z is a complex number, and complex numbers have two components - a ‘real’ part and an ‘imaginary’ part. If you represent the ‘real’ part on an x-axis and the ‘imaginary’ part on a y-axis, then all values of z can be represented as points on the two-dimensional x-y surface.

It can be quite difficult to get your head around the concept of z-space, but think of it as a representation of frequency plus some additional attributes. Having transformed your music into z-space with the z-Transform, the behaviour of any filter can then be conveniently described by a function, usually written as H(z) and referred to as the transfer function. If we multiply the value of the z-Transform at every point in z-space by the value of H(z) at that point in z-space, we get a modified z-Transform. If we were to apply the reverse (or inverse) z-Transform to this modified data we would end up with a modified audio signal - what we get is the result of passing the original signal through the filter. In other words, simple point-by-point multiplication in z-space is equivalent to the far more cumbersome operation of convolution in the time domain. It may sound complicated, but it is a lot simpler (and more accurate) than any other general treatment. The bottom line is that the function H(z) is a complete description of the filter and can be used on its own to extract any information we want regarding the performance of the filter. The behaviour of H(z) has some unexpected benefits. If H(z) = 1/z then the effect of that filter is simply to delay the audio signal by one sample. This has interesting implications for digital filter design.
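The one-sample-delay property is easy to verify numerically. Here is a minimal sketch using SciPy’s lfilter, with H(z) = 1/z written as a ratio of polynomials in (1/z):

```python
import numpy as np
from scipy.signal import lfilter

x = np.array([1.0, 2.0, 3.0, 4.0])   # a short test signal

# H(z) = 1/z, expressed as a ratio of polynomials in (1/z):
# numerator b = 0 + 1*(1/z), denominator a = 1.
b = [0.0, 1.0]
a = [1.0]

y = lfilter(b, a, x)
print(y)    # [0. 1. 2. 3.] -- the input delayed by exactly one sample
```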

The function H(z) can be absolutely anything you want it to be. However, having said that, there is nothing to prevent you from specifying an H(z) function which is unstable, has no useful purpose, or cannot be implemented. For all practical purposes, we are only really interested in transfer functions that are both stable and useful. Being useful means that we have to be able to implement it either as an analog or a digital filter, and the z-Transform methodology provides for both. The things that make an H(z) transfer function useful are its ‘Poles’ and ‘Zeros’. Poles are values of z for which H(z) is infinite (referred to as singularities), and zeros are values of z for which H(z) is zero. For a digital filter, stability requires that every pole lies inside the unit circle of z-space. Designing a filter then becomes a matter of placing poles and zeros in strategic positions in z-space. I have decided to call this ‘Pole Dancing’. Pole dancing requires great skill if your objectives are to be satisfied. It can take many forms, but the ones which have been proven over time to work best rely on certain specific steps, and you are best advised to stick with them. It is most effective when done by Pros, or at least by experienced amateurs.
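Here is a minimal pole dance, with the pole and zero positions chosen purely for illustration: one zero at z = -1 and one pole at z = 0.9 produce a simple digital low-pass filter, and SciPy can turn the placements into H(z) and evaluate it around the unit circle:

```python
import numpy as np
from scipy.signal import zpk2tf, freqz

# 'Pole dancing': place one zero at z = -1 (which kills the response
# at the Nyquist frequency) and one pole at z = 0.9 (which boosts low
# frequencies). Both positions are illustrative choices only.
zeros = [-1.0]
poles = [0.9]
gain = 0.05          # scale factor chosen to give unity gain at DC

b, a = zpk2tf(zeros, poles, gain)   # polynomial coefficients of H(z)

# Evaluate H(z) around the unit circle to see the frequency response.
w, h = freqz(b, a, worN=512)
print(abs(h[0]), abs(h[-1]))        # ~1 at DC, ~0 near Nyquist
```

The stability requirement is visible here: the pole at 0.9 sits safely inside the unit circle. Move it to, say, 1.1 and the resulting filter would blow up.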

Once you have danced your pole dance, it is then a relatively simple matter to use the poles and zeros you placed in your z-space to write down the equation which describes H(z), and to re-arrange it into a convenient form. For digital filter design, the most convenient form is a polynomial in (1/z) divided by another polynomial in (1/z), in which case the coefficients of the two polynomials turn out to be the precise coefficients of the digital filter. The poles and zeros also translate into the component values of the inductors and capacitors if you are designing an analog filter.
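To illustrate how the polynomial coefficients become the filter itself, here is a hypothetical first-order H(z) (coefficient values invented for this sketch) implemented directly as a difference equation, and checked against SciPy’s lfilter:

```python
import numpy as np
from scipy.signal import lfilter

# Hypothetical coefficients of H(z) = (b0 + b1/z) / (1 + a1/z),
# a simple first-order digital filter (values chosen for illustration).
b = [0.5, 0.5]      # numerator polynomial in (1/z)
a = [1.0, -0.9]     # denominator polynomial in (1/z)

x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # a unit impulse

# The polynomial coefficients drop straight into the difference equation:
#   y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]
y = np.zeros_like(x)
for n in range(len(x)):
    y[n] = b[0] * x[n] \
           + (b[1] * x[n - 1] if n > 0 else 0.0) \
           - (a[1] * y[n - 1] if n > 0 else 0.0)

print(np.allclose(y, lfilter(b, a, x)))   # True
```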

Once I have the transfer function H(z) nailed down, I can use it to calculate the frequency response, phase response, and impulse response of the filter by replacing z with its complex frequency representation. Recall that z is a complex number, having both ‘real’ and ‘imaginary’ parts. It therefore follows that H(z) itself is a complex function, and likewise has real and imaginary parts. The frequency response is obtained by evaluating the ‘magnitude’ of H(z), and the phase response is obtained by evaluating its ‘angle’. [The square of the magnitude is the sum of the squares of the ‘real’ and ‘imaginary’ parts; the angle is the one whose tangent is the ratio of the ‘imaginary’ part to the ‘real’ part.] Finally, the impulse response is obtained by the more complicated business of taking the inverse z-Transform of the entire H(z).
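All three calculations can be sketched numerically, again using a hypothetical first-order filter with invented coefficients. Evaluating H(z) on the unit circle gives the frequency and phase responses exactly as described above; feeding a unit impulse through the filter gives its impulse response:

```python
import numpy as np
from scipy.signal import freqz, lfilter

# A hypothetical first-order filter (coefficients invented for illustration).
b = [0.5, 0.5]
a = [1.0, -0.9]

# Evaluate H(z) on the unit circle, z = e^(j*omega).
w, h = freqz(b, a, worN=256)

magnitude = np.abs(h)          # the frequency response
phase = np.angle(h)            # the phase response

# The same numbers computed from the real and imaginary parts directly,
# exactly as described in the text.
mag_check = np.sqrt(h.real**2 + h.imag**2)
phase_check = np.arctan2(h.imag, h.real)

# Impulse response: pass a unit impulse through the filter, which is
# numerically equivalent to taking the inverse z-Transform of H(z).
impulse = np.zeros(16)
impulse[0] = 1.0
h_n = lfilter(b, a, impulse)

print(np.allclose(magnitude, mag_check))   # True
print(np.allclose(phase, phase_check))     # True
```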

You can see why the phase, frequency and impulse responses are so intimately related. You have three functions, each described by only two variables - the ‘real’ and ‘imaginary’ parts of H(z). With three functions described by two variables, it is mathematically impossible to change any one without also changing at least one of the other two.

You also get a hint of where the different ‘types’ of filter come from. Many of you will have come across the terms Butterworth, Chebyshev, Bessel, or Elliptic. These are classes of filter optimized for different specific attributes (I won’t go into those here). I mentioned that the transfer function H(z) can in principle be anything you want it to be. It turns out that each of those four filter types corresponds to having its H(z) function belong to one of four clearly-defined forms (which I also won’t attempt to describe).
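SciPy happens to ship design routines for exactly these four families. A quick sketch (the order and cutoff frequency are arbitrary illustrative choices) shows each routine producing its own characteristic set of H(z) coefficients:

```python
from scipy.signal import butter, cheby1, bessel, ellip, freqz

# Four classic filter families, each designed as a fourth-order lowpass
# with the same cutoff (0.3 of the Nyquist frequency). Order, cutoff,
# and ripple figures are arbitrary choices for this sketch.
designs = {
    "Butterworth": butter(4, 0.3),
    "Chebyshev":   cheby1(4, 1, 0.3),      # 1 dB passband ripple
    "Bessel":      bessel(4, 0.3),
    "Elliptic":    ellip(4, 1, 40, 0.3),   # 1 dB ripple, 40 dB stopband
}

for name, (b, a) in designs.items():
    w, h = freqz(b, a, worN=512)
    print(f"{name:12s} gain at DC = {abs(h[0]):.3f}")
```

Each family trades off passband flatness, stopband attenuation, transition steepness, and phase behaviour differently - which is precisely why they all exist.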

Finally, you may be thinking that you can get any arbitrary filter characteristic you desire by specifying exactly what you require in terms of any two of the frequency/phase/impulse responses, and working back to the H(z) function that corresponds to it. And you would be right - you can do that. But then you will find that your clever new H(z) is either unstable, or cannot be realized using any real-world digital or analog filter structure.