Do you want to interpolate to a higher sampling frequency (still discrete-time) or to any arbitrary time value (continuous time)? In the first case, a digital filter can be used as part of the interpolation. For continuous time, you need an algebraic expression for arbitrary time values.
– Juancho Dec 31 '12 at 15:38

You are suggesting linear interpolation. Is this the case? Normally, in signal processing, band-limited interpolation is preferred. What are your application requirements?
– Juancho Dec 31 '12 at 15:39

@Juancho Thanks for your interest in the question. In fact I am now working on image interpolation. In several papers the authors link interpolation with FIR filters, which confused me because I could not see the connection between them. Having read the answers below, I think I get the point now.
– feelfree Jan 2 '13 at 17:29

4 Answers

If the signal is properly sampled, i.e. in accordance with the Shannon/Nyquist criterion, then the samples contain all information about the original signal. If not, all bets are off, so we'll skip this for now.

Interpolation is then equivalent to sampling the signal at a non-integer time. In your case you want to know what is x(t=2.1) by using the information x[1], x[2], x[3] ...

This can be interpreted as the convolution with an impulse response. However, it's not "finite".

Unfortunately, the Whittaker-Shannon interpolation is not very practical: the impulse response is infinitely long, has infinitely many zero crossings, and its falloff with time is slow (only about $1/t$), so it's not a great candidate for windowing to get a time-limited impulse response.

The Whittaker-Shannon interpolation is equivalent to convolution with the impulse response of an ideal low-pass filter. All practical interpolation methods also involve a low-pass filter. Even linear interpolation can be interpreted as a low-pass filter; it's just a very bad one.
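As a concrete sketch of this idea (a truncated approximation, since the true Whittaker-Shannon sum is infinite; the function name and test signal here are illustrative, not from the answer):

```python
import numpy as np

def sinc_interpolate(x, t, T=1.0):
    """Truncated Whittaker-Shannon interpolation:
    x(t) ~= sum_n x[n] * sinc((t - n*T) / T), summed over the available samples.
    Exact only for a band-limited signal and an infinite record; with a finite
    record this is an approximation."""
    n = np.arange(len(x))
    # np.sinc(u) = sin(pi*u)/(pi*u): the ideal low-pass impulse response
    return float(np.dot(x, np.sinc((t - n * T) / T)))

# A well-oversampled signal: 0.05 cycles/sample, far below Nyquist (0.5)
n = np.arange(64)
x = np.sin(2 * np.pi * 0.05 * n)
estimate = sinc_interpolate(x, t=32.5)    # a point "between" x[32] and x[33]
exact = np.sin(2 * np.pi * 0.05 * 32.5)
print(abs(estimate - exact))              # small truncation error
```

At an integer time the shifted sinc reduces to a unit impulse, so the formula returns the stored sample exactly; at fractional times the accuracy depends on how far the signal is below Nyquist and how long the record is.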

Interpolation filters are made "practical" by adjusting filter parameters to the requirements of the application: how much precision do you need, what's the spectral content of the signal, what latency can you tolerate, etc.

Linear interpolation is suboptimal, as you may know. You understand it in the time domain, but let's look at it in the frequency domain. The spectrum of the sampled signal is periodic with period $\omega=2\pi$ (i.e. $f=1$ in normalized frequency).

Ideally, we would use an ideal low-pass filter with cutoff frequency at $f_s/2$. Don't forget that negative frequencies exist as well, even though plots often omit them.

You may recall that an ideal frequency response implies a sinc-shaped time-domain impulse response. That means we would need infinitely many filter coefficients: every output sample needs to know about every past and future input sample. So why not try something simpler?

That would be, for example, linear interpolation. The linear time-domain response you are using is a triangular kernel, whose frequency response is a squared sinc:

$$H(j\omega)=\frac{1}{T}\left(\frac{\sin(\omega T/2)}{\omega/2}\right)^2$$

(Please check Oppenheim's Signals and Systems, Section 7.2, for good graphs and a really deep explanation; you won't have trouble finding it on the net. Oppenheim's Discrete-Time Signal Processing also covers this topic.)
Roughly sketched, this filter's magnitude response is a main lobe around DC with decaying side lobes: a squared sinc.

Thus, our filter is worse than the ideal one (obviously), but now we have a finite time-domain response (i.e., an FIR filter). This is how the time-domain operations you were doing relate to the frequency domain.
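The squared-sinc formula above can be checked numerically. This is a sketch (assuming $T=1$ and a simple trapezoid-style approximation of the Fourier integral) that compares the triangle kernel's computed frequency response against the closed form:

```python
import numpy as np

# Linear interpolation = convolution with a height-1 triangular kernel on [-T, T].
# Numerically verify its frequency response matches
#   H(jw) = (1/T) * (sin(w*T/2) / (w/2))^2
T = 1.0
t = np.linspace(-T, T, 2001)                 # fine grid over the kernel's support
dt = t[1] - t[0]
h = 1.0 - np.abs(t) / T                      # triangle, height 1

w = np.array([0.5, 1.0, 2.0, 3.0])           # a few test frequencies (rad/s)
# Riemann-sum approximation of H(jw) = integral of h(t) * exp(-j*w*t) dt
H_num = np.array([np.sum(h * np.exp(-1j * wi * t)) * dt for wi in w]).real
H_formula = (1 / T) * (np.sin(w * T / 2) / (w / 2)) ** 2
print(np.max(np.abs(H_num - H_formula)))     # agreement up to grid error
```

The two agree to within the grid's discretization error, confirming that the triangular (linear-interpolation) kernel really is a squared-sinc low-pass filter.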

For more info on how to apply the filter, I advise checking the books mentioned above. If you define a filter in the time domain ($h[n]$), you apply it by convolution; if you define it in the frequency domain ($H(j\omega)$), you can compute the output spectrum as the product of the input spectrum and the frequency response.

The output of an FIR filter is a weighted average computed over the filter's length and effectively centered at a location determined by the filter's weights, or coefficients. A simple "box car" moving average, where all weights are equal, averages all of its inputs equally, so the result is centered on the middle of the filter's length. If the filter has an odd number of taps, the output aligns with the center tap. If the filter has an even number of taps, the center falls between the two middle samples, and the filter "interpolates" the average of the covered samples at a point between those two samples.
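As a small illustration of the even-tap case (a hypothetical sketch using NumPy and a ramp input, chosen because a ramp makes the half-sample offset exact):

```python
import numpy as np

# An even-length "box car" FIR: its output is the average of the samples it
# covers, located halfway between the two center taps -- i.e. it interpolates
# at a half-sample offset.
x = np.arange(10, dtype=float)      # a ramp: x[n] = n
taps = np.ones(4) / 4               # 4-tap moving average (even number of taps)
y = np.convolve(x, taps, mode='valid')
# For a ramp, the average of x[n..n+3] is n + 1.5: exactly the half-sample
# point between the two center samples x[n+1] and x[n+2].
print(y)                            # [1.5 2.5 3.5 4.5 5.5 6.5 7.5]
```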

In general a frequency-selective filter will have a "weight" profile shaped like $\frac{\sin(x)}{x}$ or something similar, and the output sample will be aligned in time with the peak of the main lobe of the $\frac{\sin(x)}{x}$. If the $\frac{\sin(x)}{x}$ is symmetric with an even number of taps, the peak of the main lobe will fall between two input samples and, again, the filter will interpolate a value between the two input samples at the peak of the main lobe.

In many filters the sampling of the $\frac{\sin(x)}{x}$ function is done specifically to "interpolate" an output sample at some arbitrary location between input samples, by placing the peak of the $\frac{\sin(x)}{x}$ at the desired interpolation point. Polyphase resampling filters and Farrow filters do this by selecting the FIR filter coefficients dynamically so that the $\frac{\sin(x)}{x}$ peak falls at a particular desired location between input samples.

If you have your FIR filter in terms of a closed-form equation or a computable algorithm, such as, say, a windowed sinc using a von Hann window, then for each point you wish to interpolate, you can compute the vector of coefficients (taps) required by your FIR filter for the desired offset relative to your data points, and use the dot product of these coefficients and your data array to compute the FIR-filtered interpolated result.
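A sketch of that recipe, under assumed parameters (16 taps, a Hann window via `np.hanning`, unit sample spacing; the function name, tap count, and test signal are illustrative choices, not from the answer):

```python
import numpy as np

def windowed_sinc_taps(offset, ntaps=16):
    """FIR taps for interpolating at a fractional `offset` (0 <= offset < 1)
    between input samples, using a Hann-windowed sinc (a sketch)."""
    n = np.arange(ntaps)
    center = ntaps // 2 - 1 + offset          # where the sinc's peak lands
    h = np.sinc(n - center)                   # shifted ideal-LPF impulse response
    h *= np.hanning(ntaps)                    # taper the truncated sinc
    return h / h.sum()                        # normalize for unity DC gain

# Interpolate at t = 20.3 from unit-spaced samples via a single dot product.
x = np.cos(2 * np.pi * 0.02 * np.arange(64))  # well-oversampled test signal
offset = 0.3
k = 20 - (16 // 2 - 1)                        # first input sample the filter covers
taps = windowed_sinc_taps(offset)
estimate = float(np.dot(taps, x[k:k + 16]))
exact = np.cos(2 * np.pi * 0.02 * 20.3)
print(abs(estimate - exact))                  # small for signals well below Nyquist
```

With `offset = 0`, the windowed sinc degenerates to a unit impulse at the center tap, so the filter passes the input sample through unchanged, which is a handy sanity check.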

If the offsets of the points you wish to interpolate repeat, such as when stepping through a sequence of interpolations at a rational-fraction step interval, you can pre-compute or cache the coefficient sets; this structure is sometimes called a polyphase FIR filter.
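A sketch of such a precomputed table (assumed parameters: 16 Hann-windowed-sinc taps per phase, 4 phases for upsampling by 4; all names and signals here are illustrative):

```python
import numpy as np

# Polyphase table: when interpolation offsets repeat (e.g. resampling by a
# rational factor), precompute one tap set per phase instead of recomputing
# coefficients for every output sample.
NTAPS, NPHASES = 16, 4                       # 4 phases -> offsets 0, .25, .5, .75
n = np.arange(NTAPS)

table = np.empty((NPHASES, NTAPS))
for p in range(NPHASES):
    offset = p / NPHASES
    h = np.sinc(n - (NTAPS // 2 - 1 + offset)) * np.hanning(NTAPS)
    table[p] = h / h.sum()                   # unity DC gain per phase

# Interpolating around input sample 30: each output is one cached dot product.
x = np.sin(2 * np.pi * 0.03 * np.arange(64))
k = 30 - (NTAPS // 2 - 1)                    # window of inputs covering sample 30
y = [float(np.dot(table[p], x[k:k + NTAPS])) for p in range(NPHASES)]
# y approximates x(t) at t = 30.0, 30.25, 30.5, 30.75
```

The per-sample cost is then just one dot product; no sinc or window function needs to be evaluated at run time.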

Instead of computing each coefficient directly, you might get sufficient quality by interpolating your interpolation coefficients, such as by interpolating within a coarser polyphase FIR filter table, or by approximating the coefficients with polynomials, as in the Farrow filter structure. This potentially allows a lower computational cost than calling a transcendental library function multiple times per tap for each output point (or worse, if your desired filter can't be expressed in a simple, easily computable closed form).
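A Farrow-style sketch of the polynomial idea (hypothetical parameters: 8 Hann-windowed-sinc taps, cubic fits; this is an illustration of approximating taps by polynomials in the fractional offset, not a full Farrow filter implementation):

```python
import numpy as np

# Instead of recomputing windowed-sinc taps for every fractional offset mu,
# fit each tap's value as a low-order polynomial in mu once, then evaluate
# the polynomials at run time: a few multiply-adds per tap instead of
# transcendental function calls.
NTAPS, ORDER = 8, 3
n = np.arange(NTAPS)

def exact_taps(mu):
    """Hann-windowed sinc taps interpolating at fractional offset mu."""
    h = np.sinc(n - (NTAPS // 2 - 1 + mu)) * np.hanning(NTAPS)
    return h / h.sum()

# Fit one ORDER-degree polynomial per tap over mu in [0, 1].
mus = np.linspace(0.0, 1.0, 33)
tap_samples = np.array([exact_taps(mu) for mu in mus])   # shape (33, NTAPS)
poly = np.polyfit(mus, tap_samples, ORDER)               # shape (ORDER+1, NTAPS)

def farrow_taps(mu):
    """Evaluate all NTAPS tap polynomials at mu with Horner's rule."""
    out = np.zeros(NTAPS)
    for c in poly:                                       # highest degree first
        out = out * mu + c
    return out

err = np.max(np.abs(farrow_taps(0.37) - exact_taps(0.37)))
print(err)   # the cubic fit tracks the exact taps closely
```

Raising the polynomial order tightens the approximation at the cost of a few more multiply-adds per tap; that trade-off is exactly the knob a Farrow-style design exposes.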