Tuesday, October 26, 2010

Linear Interpolation in digital audio came up recently, so I'm posting it here. I hope it's useful for other folks.

Technically, linear interpolation is the act of fitting a line through existing points and computing new data from that line. This might sound complex, but it turns out to be pretty easy, and we can do it with a few lines of code.

Visually, we can think about drawing a line between two points, and then being able to find the y value for any given x. However, I actually think it's easier to think of it non-graphically because linear interpolation is really just a kind of weighted average.

For audio, we frequently want to use linear interpolation because it's easy to implement, computationally efficient, and it's "smooth" in some sense that I won't get into, but I will say that it generally does not create clicks and pops when you don't want it to. Linear interpolation is useful for handling fader changes and button-push "de-bouncing" and so on, and it's often great for simple cross-fades and the like.

The formula for linear interpolation is derived from the formula for the line between two points. You can see Wikipedia for the details. I am omitting it here and jumping straight to an example. To perform a linear interpolation of 100 samples where y[0] = 7 and y[100] = 20, our code would look something like this:
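Here's a sketch of that loop in C. The names (lerp_ramp, ratio, and so on) are my own, and this is meant as an illustration rather than production code:

```c
// Fill y[0..length-1] with a linear ramp from `start` toward `end`.
// y[length] would be the first sample of the next segment, so the
// ramp stops just short of `end`.
void lerp_ramp( double *y, int length, double start, double end )
{
    for( int i = 0; i < length; i++ )
    {
        // ratio is the weight given to `start`; it slides from 1.0
        // down toward (but never reaching) 0.0 across the buffer.
        double ratio = (double)( length - i ) / length;
        y[i] = ratio * start + ( 1.0 - ratio ) * end;
    }
}
```

With start = 7, end = 20, and length = 100, this gives y[0] = 7, y[50] = 13.5 (the halfway point), and y[99] = 19.87, just shy of 20.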

You can think of ratio as the weight given to the start variable, and 1-ratio as the weight given to the end variable. As we slide through the samples, we slowly transition from the start value to the end value.

Notice I've been very careful to make sure y[0] is actually 7 and y[99] is not quite 20, so that y[100] will smoothly transition to 20 as required. Off-by-one errors can screw this up, and while you might not hear the difference, you want to get it right or you could end up with pops, weird overs, or other subtle problems.

Now you might say that the above code is not very efficient. You can improve on it somewhat using the code below, but be aware that if you are interpolating over a large number of samples, especially if you are using single precision floats, you might not quite end up where you expect. The performance gain for this more complex code is likely to be minimal on modern computer hardware, but may be substantial on DSP hardware, where operations like floating point adds take much less time than floating point divides. A clever compiler could theoretically produce the same object code for these two snippets if it can determine that precision won't be an issue.

double start = 7;
double end = 20;
int length = 100;

double interval = ( end - start ) / length;
y[0] = start;

for( int i = 1; i < length; i++ )
    y[i] = y[i-1] + interval;

By the time we get to the end, y[length-1] is start + ( length - 1 ) * interval, one interval shy of end, so the next sample, y[length], lands exactly on end, which is exactly right.
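To make the precision caveat concrete, here's a small sketch (the helper name ramp_final_float is my own) that runs the same running-sum ramp in single precision and returns the last value reached. Over a short ramp the result is fine; over a very long one, the repeated adds can each round a little, so the final value drifts away from where exact arithmetic would land:

```c
// Run the running-sum ramp from `start` toward `end` over `length`
// samples in single precision, returning the final sample y[length-1].
float ramp_final_float( float start, float end, int length )
{
    float interval = ( end - start ) / length;
    float y = start;
    for( int i = 1; i < length; i++ )
        y += interval;   // each add can round; the errors accumulate
    // In exact arithmetic this would be end - interval; in float it
    // is usually close, but not bit-exact, and the gap grows with length.
    return y;
}
```

For a 100-sample ramp from 7 to 20 the drift is inaudible, but stretch the same ramp over a million samples and the endpoint is noticeably off from end - interval.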

That's all there is to linear interpolation, so let's go to an audio example: say we want to go from off (muted) to on (unmuted) without a click. Instead of 7 and 20, we'd use 0.0 for off and 1.0 for on. Also, instead of setting the values in the array, we are going to be multiplying the values in the array, because that's how we do gain changes. Now, let's say we don't know what a good length of time is for unmuting, so let's just make that a variable. Below is a function that takes an array of mono samples and transitions them from off to on, starting at a given time and ramping over a given interval. I haven't tested this exact code, but it should be good enough for illustrative purposes:
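A sketch of that unmute function, assuming float samples and names of my own choosing (fade_in, startIndex, fadeLength):

```c
// Apply a linear fade-in (unmute) to a mono buffer, in place.
// Samples before `startIndex` are muted (gain 0.0); samples from
// `startIndex` onward ramp up over `fadeLength` samples; samples
// after the ramp are passed through at full gain (1.0).
void fade_in( float *samples, int numSamples, int startIndex, int fadeLength )
{
    for( int i = 0; i < numSamples; i++ )
    {
        if( i < startIndex )
        {
            samples[i] = 0.0f;               // still muted
        }
        else if( i < startIndex + fadeLength )
        {
            // The gain slides from 0.0 toward (but not reaching) 1.0;
            // the first full-gain sample is the one just past the ramp.
            float gain = (float)( i - startIndex ) / fadeLength;
            samples[i] *= gain;              // multiply, don't set
        }
        // else: fully on, leave the sample alone
    }
}
```

Note that we multiply the existing samples by the gain rather than overwriting them, which is the same off-to-on ramp as before applied as a gain change.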