Abstract

In this paper a general theory for interpolation methods on a rectangular grid is introduced. By the use of this theory an efficient B-spline-based interpolation method for spectral codes is presented. The theory links the order of the interpolation method with its spectral properties. In this way many properties, like the order of continuity, the order of convergence and the magnitude of the errors, can be explained. Furthermore, a fast implementation of the interpolation methods is given. We show that the B-spline-based interpolation method has several advantages compared to other methods. First, the order of continuity of the interpolated field is higher than for other methods. Second, only one FFT is needed, whereas e.g. Hermite interpolation needs multiple FFTs for computing the derivatives. Third, the interpolation error almost matches that of Hermite interpolation, a property not reached by the other methods investigated.

AMS subject classifications. 65T40, 65D05

Keywords. Interpolation, B-spline, three-dimensional, Hermite, Fourier, spline

1 Introduction

In recent years many studies on the dynamics of inertial particles in turbulence have focussed on the Lagrangian properties, see the review by Toschi and Bodenschatz [1]. For particles in turbulence, but also in many other applications in fluid mechanics, interpolation methods play a crucial role as fluid velocities, rate of strain and other flow quantities are generally not available at the location of the particles, while these quantities are needed for the integration of the equations of motion of the particles.

When a particle is small, spherical and rigid, its dynamics in a non-uniform flow is governed by the Maxey-Riley (MR) equation [2]. An elaborate overview of the different terms in the MR equation and their numerical implementation can be found in the paper by Loth [3], and a historical account is given in a review article by Michaelides [4]. The evaluation of the hydrodynamic force exerted on the particles requires knowledge of the fluid velocity, its time derivative and its gradients at the particle positions, and turns out to be rather elaborate. First, the Basset history force is computationally very expensive, although a significant reduction of CPU time can be obtained by fitting the diffusive kernel of the Basset history force with exponential functions, as recently shown by van Hinsberg et al. [5]. Second, the interpolation step itself can be very time consuming and memory demanding. Especially for light particles, which have a mass density similar to the fluid density (relevant, for example, for sediment transport in estuaries and for phytoplankton in oceans and lakes), most terms in the Maxey-Riley equation cannot be ignored, and therefore also the first derivatives of the fluid velocity are needed [6]. For this reason simulations of light particles are computationally more expensive than those of heavy particles. In order to achieve convergence of the statistical properties (probability distribution functions, correlation functions, multi-particle statistics, particle distributions) many particles are needed, and this calls for fast and accurate interpolation methods. Therefore, our aim is to reduce the computation time for the evaluation of the trajectories of light particles substantially and make the algorithm competitive with the fast algorithms for the computation of trajectories of heavy particles in turbulence.

The incompressible Navier-Stokes equations are used to describe the turbulent flow field. In turbulence studies the Navier-Stokes equations are often solved by means of dealiased pseudo-spectral codes because of the advantage of exponential convergence of the computed flow variables. Therefore, we will focus here on interpolation methods for spectral codes.

There are many interpolation methods available [7]. We are interested in those interpolation methods which are characterized by the following properties. First, the method must be accurate, thus we need a high order of convergence. Second, the interpolant must have a high order of continuity Cp, with p the order of continuity. Third, the method must be fast, i.e. computationally inexpensive. A very simple interpolation method is linear interpolation. This method is very fast, but unfortunately it is relatively inaccurate and has a low order of convergence. A high order of convergence can be reached by employing Lagrange interpolation [8]. This interpolation method has the drawback that the interpolant still has a low order of continuity. A solution for this problem was recently found by Lalescu et al. [9], who proposed a new spline interpolation method. Here, the interpolant has a higher order of continuity, but the order of convergence has decreased. A method that has both a high order of convergence and a high order of continuity is Hermite interpolation [10]. The major disadvantage of this method is that also the derivatives of the function to be interpolated are needed; these are calculated by additional Fast Fourier Transforms (FFTs), making the method computationally expensive. A remedy to this is B-spline interpolation [11], which has a high order of convergence and errors comparable with those of Hermite interpolation. Furthermore, this method has a higher order of continuity compared with the other methods mentioned above. Normally, the transformation to the B-spline basis is an expensive step, but by making use of the spectral code it can be executed in Fourier space, which makes it inexpensive. By executing this step in Fourier space the method can be optimized, resulting in smaller errors. We believe that the proposed combination of B-spline interpolation with a spectral code makes the method favorable over other traditional interpolation methods.

Besides exploring the advantages of B-spline interpolation we focus on necessary conditions allowing general 3D-interpolations to be efficiently executed as successive 1D-interpolations. These conditions also carry over desired properties (like order of convergence and order of continuity) from the 1D-interpolation to the three-dimensional equivalent. Further, we provide a fast, generic algorithm to interpolate the function and its derivatives using successive 1D-interpolations.

We provide expressions for the interpolation errors in terms of the Fourier components. For this we use Fourier analysis, where the interpolation method is represented as a convolution function. By doing this, the errors can be calculated as a function of the wave number. This gives insight into the behavior of the interpolation, especially into which components are dominant in the interpolation.

The present study may also be useful for many other applications. Examples include the computation of charged particles in a magnetic field [12, 13], but also digital filtering and applications in medical imaging [7, 14]. In the latter case interpolations are used to improve the resolution of images. Many efforts have been made to find interpolation methods with optimal qualities [7]. Still, it is a very active field of research. Besides the optimization of interpolation algorithms (accuracy, efficiency), the impact of different interpolation methods on physical phenomena like particle transport has been investigated in many studies [15, 16, 17].

In Section 2 we introduce the general framework and explain some one-dimensional interpolation methods. In Section 3 the framework is generalized to three-dimensional interpolation, and a generic algorithm for the implementation of the interpolation is proposed in Section 4. A Fourier analysis of the interpolation operator is discussed in Section 5. In Section 6 the Fourier analysis is extended to Hermite interpolation and a proof of minimal errors is given. In Section 7 our B-spline-based interpolation method is introduced, and it is compared with three other methods (including Hermite interpolation) in Section 8. Finally, concluding remarks are given in Section 9.

2 Interpolation methods

We present a general framework to discuss the various interpolation methods. The goal of any interpolation method is to reconstruct the original function as closely as possible. As in many applications also some derivatives of the original function are needed, we focus on reconstructing them as well. We start with one-dimensional (1D) interpolation and subsequently, in Section 3, we generalize our framework to the three-dimensional (3D) case.

Let u(x) be a 1D function that needs to be reconstructed, with x ∈ [0,1]. In practice we only have the values of u on a uniform grid, with grid spacing Δx and knots at positions x_j, with 0 ≤ j ≤ (Δx)^{-1}. After interpolation the function ũ is obtained, which is an approximation of u. Now let I be the interpolation operator, so ũ = I[u].

When u has periodic boundary conditions, it can be expressed in a Fourier series as follows

\[
u(x) = \sum_{k\in\mathbb{Z}} \hat{u}_k \phi_k(x), \qquad \phi_k(x) = e^{2\pi i k x}, \tag{1}
\]

with i the imaginary unit and k the wave number. As the grid spacing is finite, a finite number of Fourier modes can be represented by the grid. From now on we consider u to have a finite number of Fourier modes, so

\[
u(x) = \sum_{k=-k_{\max}}^{k_{\max}} \hat{u}_k \phi_k(x), \tag{2}
\]

where k_max, related to Δx, is the maximum wave number. As we add a finite number of continuously differentiable Fourier modes ϕ_k, we have u ∈ C^∞(0,1), a property which can be used when constructing the interpolation method. In principle u could be reconstructed at any point by means of its Fourier series; in practice, however, this would be far too time consuming, and therefore an interpolation is performed instead. The interpolant of ϕ_k is denoted by ϕ̃_k, i.e., ϕ̃_k = I[ϕ_k].
We restrict ourselves to linear interpolation operators, i.e., I[α1u1+α2u2]=α1I[u1]+α2I[u2] with α1,α2∈C. This property can be used to write ˜u(x) as

\[
\tilde{u}(x) = \sum_{k=-k_{\max}}^{k_{\max}} \hat{u}_k \tilde{\phi}_k(x). \tag{3}
\]

We focus on reconstructing u by piecewise polynomial functions of degree N−1. For each interval (x_j, x_{j+1}) with 0 ≤ j < (Δx)^{-1} we have

\[
\tilde{u}(x) = \mathbf{a}^T \bar{x}, \qquad \bar{x} = (1, x, x^2, \ldots, x^{N-1})^T. \tag{4}
\]

Here, the vector a depends on the interval under consideration and a^T denotes the transpose of a. The degree of the highest polynomial function for which the interpolation is still exact is denoted by n. In this way we get the restriction n ≤ N−1. We consider the reconstruction of u between the two neighboring grid points x_j and x_{j+1}. Without loss of generality we can translate and rescale x so that the interval [x_j, x_{j+1}] becomes the unit interval [0,1].

For Hermite interpolation the values of ˜u and of its derivatives, up to the order N/2−1 (N must be even), must coincide with those of u at x=0 and x=1, i.e.,

\[
\frac{d^l\tilde{u}}{dx^l}(0) = \frac{d^l u}{dx^l}(0), \qquad \frac{d^l\tilde{u}}{dx^l}(1) = \frac{d^l u}{dx^l}(1), \qquad l = 0, 1, \ldots, \tfrac{N}{2}-1. \tag{5}
\]

If the derivatives are known then n=N−1. When the derivatives are not known exactly on the grid they have to be approximated by finite difference methods, as done by Lalescu et al.[9]. Unfortunately, this method is less accurate than Hermite interpolation and n=N−2.

The general framework will be illustrated with cubic Hermite interpolation for which N=4. So the interpolation uses the function value and the first derivative in the two neighboring grid points to construct the interpolation polynomial. We have chosen this method because it is very accurate. Moreover, the second derivative, which is a piecewise linear function, gives minimal errors with respect to the L2-norm. This property is further discussed in Section 6.

First, the discrete values of u and possible derivatives which are given on the grid, are indicated with the vector b. In general we have

\[
\mathbf{b} = f[u], \tag{6}
\]

where the linear operator f depends on the interpolation method and maps a function onto an N-vector.
Second, the coefficients ai of the monomial basis need to be computed. Because I and f are linear operators, we can write without loss of generality,

\[
\mathbf{a}^T = \mathbf{b}^T M. \tag{7}
\]

Here, M is the matrix that defines the interpolation method. For illustration, f and M for cubic Hermite interpolation are given by

\[
f[u] = \left(u(0),\; u'(0),\; u(1),\; u'(1)\right)^T, \qquad
M = \begin{pmatrix} 1 & 0 & -3 & 2 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & 3 & -2 \\ 0 & 0 & -1 & 1 \end{pmatrix}, \tag{8}
\]

so that the interpolant reads

\[
\tilde{u}(x) = \mathbf{a}^T \bar{x} = \mathbf{b}^T M \bar{x}. \tag{9}
\]

In many applications also derivatives are needed. In order to compute the l-th derivative of ũ, the monomial basis functions should be differentiated l times. In general this can be done by multiplying a by the differentiation matrix D l times, so

\[
\mathbf{a}^{(l)T} = \mathbf{a}^T D^l, \tag{10}
\]

where a^{(l)} contains the coefficients for the l-th derivative, obtaining

\[
\frac{d^l\tilde{u}}{dx^l}(x) = \mathbf{a}^{(l)T}\bar{x} = \mathbf{b}^T M D^l \bar{x} = \mathbf{b}^T M^{(l)} \bar{x}, \tag{11}
\]

with M(l)=MDl. Note that the matrix D is nilpotent, since Dl=0 for l≥N, implying that at most N−1 derivatives can be approximated.
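As an illustration of relations (7)-(11), the following minimal NumPy sketch evaluates ũ and its derivatives from b = (u(0), u'(0), u(1), u'(1)) for cubic Hermite interpolation. The matrix M below is the standard monomial-basis form of cubic Hermite interpolation (consistent with conditions (5)); the helper name `hermite_eval` is ours.

```python
import numpy as np

# Monomial-basis matrix M of cubic Hermite interpolation (N = 4), for
# b = (u(0), u'(0), u(1), u'(1)); row j holds the coefficients of the
# Hermite basis polynomial attached to b_j in powers 1, x, x^2, x^3.
M = np.array([[1.0, 0.0, -3.0,  2.0],   # 1 - 3x^2 + 2x^3
              [0.0, 1.0, -2.0,  1.0],   # x - 2x^2 + x^3
              [0.0, 0.0,  3.0, -2.0],   # 3x^2 - 2x^3
              [0.0, 0.0, -1.0,  1.0]])  # -x^2 + x^3

# Differentiation matrix D acting on monomial coefficients: a^(l)T = a^T D^l.
D = np.zeros((4, 4))
D[1, 0], D[2, 1], D[3, 2] = 1.0, 2.0, 3.0

def hermite_eval(b, x, l=0):
    """l-th derivative of the cubic Hermite interpolant at x in [0, 1]."""
    xbar = np.array([1.0, x, x**2, x**3])
    Ml = M @ np.linalg.matrix_power(D, l)   # M^(l) = M D^l
    return b @ Ml @ xbar

# Cubic polynomials are reproduced exactly (n = N - 1 = 3):
b = np.array([0.0, 0.0, 1.0, 3.0])   # (u(0), u'(0), u(1), u'(1)) for u = x^3
print(hermite_eval(b, 0.3))          # x^3 at x = 0.3
print(hermite_eval(b, 0.3, l=1))     # 3 x^2 at x = 0.3
```

Note that D^0 is the identity, so the same routine covers the function value and all N−1 nonvanishing derivatives.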

In conclusion, we presented a framework that is able to describe interpolation methods, which can be used to implement the interpolation methods in a straightforward way. In Section 4 it is used to generate fast algorithms for the implementation of the method.

3 3D interpolation

In this section the 1D interpolation methods of Section 2 are extended to the 3D case. Now the scalar field u depends on the vector x and a 3D interpolation needs to be carried out. Like before the interpolated field is given by ˜u and I3 is the 3D equivalent of I, so ˜u=I3[u]. The 3D field u can be represented by a 3D Fourier series like

\[
u = \sum_{\mathbf{k}} \hat{u}_{\mathbf{k}} \phi_{\mathbf{k}}(\mathbf{x}), \tag{12}
\]

where ϕk is given by

\[
\phi_{\mathbf{k}}(\mathbf{x}) = e^{2\pi i \mathbf{k}\cdot\mathbf{x}} = \phi_{k_x}(x)\,\phi_{k_y}(y)\,\phi_{k_z}(z), \qquad \mathbf{k} = (k_x, k_y, k_z), \quad \mathbf{x} = (x, y, z), \tag{13}
\]

and ϕk defined by (1). Again we restrict ourselves to linear interpolation operators, therefore ˜u can be written as

\[
\tilde{u}(\mathbf{x}) = \sum_{\mathbf{k}} \hat{u}_{\mathbf{k}} \tilde{\phi}_{\mathbf{k}}(\mathbf{x}), \tag{14}
\]

with ˜ϕk the interpolant of ϕk, i.e., ˜ϕk=I3[ϕk].

Figure 1: Graphical description of the 3D Lagrange interpolation, using three steps of 1D interpolations for the case N=4. First, N² 1D interpolations are carried out in the x-direction (crosses). Second, N interpolations are carried out in the y-direction (dots in the right figure), and from these N results one interpolated value is finally derived in the z-direction (triangle).

The 3D interpolation for a scalar field is carried out by applying 1D interpolations three times, see Fig. 1. The interpolation consists of three steps, in which the three spatial directions are interpolated one after the other. The order in which the spatial directions are interpolated does not matter. Building the 3D interpolation out of 1D interpolations saves computing time. It can be done for all interpolation methods as long as the following two conditions are met. First, the operator I3 must be linear. Second, the following condition must be satisfied

\[
\tilde{\phi}_{\mathbf{k}} \equiv I_3[\phi_{\mathbf{k}}] = I_3[\phi_{k_x}\phi_{k_y}\phi_{k_z}] = I[\phi_{k_x}]\, I[\phi_{k_y}]\, I[\phi_{k_z}] = \tilde{\phi}_{k_x}\tilde{\phi}_{k_y}\tilde{\phi}_{k_z}, \tag{15}
\]

which is the case for almost all interpolation methods. Property (15) can be used to prove that properties of the 1D operator I carry over to the 3D operator I3, for example, the order of convergence and the order of continuity.

Next, relations (9) and (11) are extended to the 3D case. Like before, we start with storing some values of u (given by the spectral code) and possible derivatives in the third-order tensor B. In the same fashion as relation (6) one gets

\[
B = f_z[f_y[f_x[u]]], \tag{16}
\]

where one element of the tensor B is defined as

\[
B_{i_1 i_2 i_3} = f_z[f_y[f_x[u]_{i_1}]_{i_2}]_{i_3}. \tag{17}
\]

f_x, f_y and f_z are similar to the operator f, but now working in a specified direction. For cubic Hermite interpolation they are given by

\[
f_x[u] = \left(u(0,y,z),\; \tfrac{\partial u}{\partial x}(0,y,z),\; u(1,y,z),\; \tfrac{\partial u}{\partial x}(1,y,z)\right)^T, \tag{18}
\]

and analogously for f_y and f_z.

The interpolation is carried out in a similar way as sketched in Fig. 1. Similarly to (9), ˜u(x) can be represented as

\[
I_3[u](\mathbf{x}) = \tilde{u}(\mathbf{x}) = B \,\bar{\times}_1 (M\bar{x}) \,\bar{\times}_2 (M\bar{y}) \,\bar{\times}_3 (M\bar{z}), \tag{19}
\]

where M is still the matrix for 1D interpolation, and ȳ and z̄ are defined analogously to x̄, which is given by relation (4). Further, ×̄ₙ denotes the n-mode vector product [18], defined as

\[
A = B \,\bar{\times}_n \mathbf{f}, \qquad A_{i_1\cdots i_{n-1} i_{n+1}\cdots i_N} = \sum_{i_n} B_{i_1\cdots i_N}\, f_{i_n}, \tag{20}
\]

where N denotes the order of tensor B. In this way tensor A is one order less than tensor B. Because we employ three of these n-mode vector products the third-order tensor B reduces to a scalar. Furthermore, each of these n-mode vector products corresponds to an interpolation in one direction, see also Fig. 1. For a general derivative one gets

\[
\frac{\partial^{i+j+k}\tilde{u}}{\partial x^i \partial y^j \partial z^k}(\mathbf{x}) = B \,\bar{\times}_1 (M^{(i)}\bar{x}) \,\bar{\times}_2 (M^{(j)}\bar{y}) \,\bar{\times}_3 (M^{(k)}\bar{z}). \tag{21}
\]

Note that the matrix M does not necessarily have to be the same for the different directions x, y and z. One could choose different interpolation methods when for example Chebyshev polynomials are used in one direction. In this case the grid is nonuniform in this direction and therefore not all interpolation methods can be used.
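The n-mode vector products in (19)-(21) map directly onto tensor contractions. A minimal sketch (assuming NumPy; all names are ours) uses linear interpolation, N = 2, so that the result can be checked against the exact trilinear formula:

```python
import numpy as np

def nmode(B, v, n):
    """n-mode vector product (relation (20)): contract axis n of B with v."""
    return np.tensordot(B, v, axes=([n], [0]))

# Linear interpolation (N = 2): u~(x) = b^T M xbar with xbar = (1, x)^T.
M = np.array([[1.0, -1.0], [0.0, 1.0]])

def interp3d(B, x, y, z):
    """Relation (19): B x1 (M xbar) x2 (M ybar) x3 (M zbar)."""
    vx, vy, vz = (M @ np.array([1.0, t]) for t in (x, y, z))
    # Contract the last remaining axis each time, so earlier axes keep meaning.
    return nmode(nmode(nmode(B, vz, 2), vy, 1), vx, 0)

# B holds u at the 8 corners of a unit cell: B[i, j, k] = u(i, j, k).
u = lambda x, y, z: 1 + 2*x + 3*y + 4*z + 5*x*y*z   # trilinear, reproduced exactly
B = np.fromfunction(lambda i, j, k: u(i, j, k), (2, 2, 2))
print(interp3d(B, 0.2, 0.5, 0.9))
```

Because the operator is built from 1D interpolations, replacing M (or the per-direction matrices) upgrades the scheme without touching the contraction code.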

Finally, when the scalar field u(x) becomes a vector field u(x), the three components of u can be interpolated separately. This can be written in short by a fourth order tensor B where the last dimension contains the three components of u. In this way the equations for the new tensor B remain the same as given above.

4 Implementation

| Computed variables | Number of flops | Number of flops for N = 4 |
| --- | --- | --- |
| x̄, ȳ and z̄ | 3N | 12 |
| Mx̄, Mȳ and Mz̄ | 3N² | 48 |
| M⁽¹⁾x̄, M⁽¹⁾ȳ and M⁽¹⁾z̄ | 3N(N−1) | 36 |
| B ×̄₁ (Mx̄) | 3N³ | 192 |
| B ×̄₁ (M⁽¹⁾x̄) | 3N³ | 192 |
| B ×̄₁ (Mx̄) ×̄₂ (Mȳ) | 3N² | 48 |
| B ×̄₁ (Mx̄) ×̄₂ (M⁽¹⁾ȳ) | 3N² | 48 |
| B ×̄₁ (M⁽¹⁾x̄) ×̄₂ (Mȳ) | 3N² | 48 |
| B ×̄₁ (Mx̄) ×̄₂ (Mȳ) ×̄₃ (Mz̄) | 3N | 12 |
| B ×̄₁ (Mx̄) ×̄₂ (Mȳ) ×̄₃ (M⁽¹⁾z̄) | 3N | 12 |
| B ×̄₁ (Mx̄) ×̄₂ (M⁽¹⁾ȳ) ×̄₃ (Mz̄) | 3N | 12 |
| B ×̄₁ (M⁽¹⁾x̄) ×̄₂ (Mȳ) ×̄₃ (Mz̄) | 3N | 12 |
| Total | 6N³ + 15N² + 12N | 672 |

Table 1: Algorithm for interpolation, with an estimate of the computational costs.

Relations (19) and (21) provide a good starting point for an efficient implementation of the interpolation. We focus on interpolating a 3D vector field u(x) and on calculating all its first derivatives (which are needed in many applications, like the computation of the trajectories of inertial particles). The matrices M and M⁽¹⁾ only need to be computed once, which can be done prior to the interpolation. Second, the vectors x̄, ȳ and z̄ have to be computed, which only needs to be done once for each position of interpolation. In Table 1 we keep track of all the computed quantities. Here, the computational costs for evaluating all the components are shown, where one flop denotes one multiplication together with one addition. We show the number of flops for the general case and for N = 4. The main idea is to reduce the order of the tensors as soon as possible in order to obtain an efficient method.

In order to determine how efficient the algorithm is, one can compare the computational costs against a lower bound. The lower bound we use is related to the size of B which is 3N3 for a vector field u. In order to be able to use all the information in tensor B, 3N3 flops are needed. For large N one finds that the algorithm of Table 1 is only a factor 2 less efficient than this lower bound.
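The ordering of Table 1 can be sketched as follows (a NumPy sketch for a scalar field in one cell with local coordinates; all names are ours). The two expensive third-order contractions are computed once and shared by the value and all three first derivatives; the demonstration uses linear interpolation (N = 2), for which M⁽¹⁾ = MD is constant:

```python
import numpy as np

def interp_with_gradient(B, M, M1, x, y, z):
    """Interpolated value and all first derivatives, reusing the expensive
    partial contractions as in Table 1 (scalar field, local coordinates)."""
    N = M.shape[0]
    xb, yb, zb = (np.array([t**i for i in range(N)]) for t in (x, y, z))
    mx, my, mz = M @ xb, M @ yb, M @ zb        # M xbar, M ybar, M zbar
    dx_, dy_, dz_ = M1 @ xb, M1 @ yb, M1 @ zb  # M^(1) xbar, ...

    # The two third-order contractions (3N^3 flops each) are shared below.
    Bx  = np.tensordot(B, mx,  axes=([0], [0]))   # B x1 (M xbar)
    Bx1 = np.tensordot(B, dx_, axes=([0], [0]))   # B x1 (M^(1) xbar)

    Bxy = my @ Bx                 # contract the y-direction next
    val   = Bxy @ mz
    du_dx = (my @ Bx1) @ mz
    du_dy = (dy_ @ Bx) @ mz
    du_dz = Bxy @ dz_
    return val, du_dx, du_dy, du_dz

# Demonstration with linear interpolation (N = 2): u~ = (1-x) u0 + x u1.
M  = np.array([[1.0, -1.0], [0.0, 1.0]])
M1 = np.array([[-1.0, 0.0], [1.0, 0.0]])   # M^(1) = M D
u = lambda x, y, z: 1 + 2*x + 3*y + 4*z + 5*x*y*z   # trilinear test field
B = np.fromfunction(lambda i, j, k: u(i, j, k), (2, 2, 2))
val, ux, uy, uz = interp_with_gradient(B, M, M1, 0.2, 0.5, 0.9)
```

For a vector field the same routine is applied to each component, or B is extended with a fourth axis as described at the end of Section 3.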

We also compare our algorithm with the one proposed by Lekien and Marsden [19], which uses Hermite interpolation with N = 4. Our algorithm has fewer restrictions and shows a slightly better computational performance (for N = 4). The algorithm of Lekien and Marsden consists of two steps. First, they calculate the coefficients for the polynomial basis. Second, the values at the desired location are calculated. They claim that their method is beneficial when the derivatives are needed or when the interpolation needs to be done multiple times in the same interval, because only the second step needs to be executed multiple times. Our method does not have the first step and therefore has no such restrictions; nevertheless, the computation of the values and the first derivatives is slightly faster than for Lekien and Marsden, even when considering only their second step. The total costs of their second step are bounded by 12N³ flops (4 times 3N³ flops, for the computation of the values and the first derivatives). From Table 1 we can conclude that our method needs fewer flops for the same computations.

5 Fourier analysis

In this section the interpolation operator I is expressed in terms of a convolution. In this way properties of the interpolation method like the order of continuity of the interpolated field and the magnitude of the errors can be shown in the Fourier domain. We start with the interpolation of 1D functions and subsequently, it can be extended to the 3D case.

Figure 2: Sketch of linear interpolation as a convolution. The pins represent delta functions with the height equal to its prefactor. On the left side is a visualization in real space and on the right side in Fourier space.

Before we start with the derivation, we rescale the variable x by dividing it by Δx, so that the new grid spacing equals unity. From now on we work with the rescaled grid where x∈[0,m] and m=(Δx)−1, so xj=j for 0≤j≤m. Furthermore we introduce the dimensionless wave number κ=kΔx and ϕκ is similarly defined as ϕk, see (1). For Hermite interpolation the derivation is somewhat more complex because also the derivatives are used and therefore it is postponed to Section 6. We focus on interpolation methods where f[u] contains the values of u at the N nearest grid points xj of x with local ordering. Thus bj=u(xj) and xj is given by

\[
x_j = \left\lfloor x - \tfrac{N}{2} + j \right\rfloor, \qquad j = 1, 2, \ldots, N, \tag{22}
\]

where ⌊⋅⌋ denotes the nearest lower integer. The interpolation methods can be described by the matrix M, with elements M_{j,i}, see relation (9). This relation can also be written as

\[
\tilde{u}(x) = \sum_{j=1}^{N} C_j(x - x_j)\, u(x_j), \tag{23}
\]

where the functions C_j have support of length unity,

\[
C_j\!\left(x + \tfrac{N}{2} - j\right) = \begin{cases} \sum_{i=1}^{N} M_{j,i}\, x^{i-1} & \text{for } 0 \le x < 1, \\ 0 & \text{elsewhere.} \end{cases} \tag{24}
\]

Relation (23) can be rewritten by using the sifting property of the delta function, like

\[
\tilde{u}(x) = \sum_{j=1}^{N} C_j(x - x_j) \int_{-\infty}^{\infty} u(y)\, \delta(y - x_j)\, dy. \tag{25}
\]

This can be further reformulated by subtracting the argument of the delta function from the argument of C_j, as

\[
\tilde{u}(x) = \int_{-\infty}^{\infty} u(y) \sum_{j=1}^{N} \delta(y - x_j)\, C_j(x - y)\, dy
= \int_{-\infty}^{\infty} u(y)\, D(y)\, C(x - y)\, dy, \tag{26}
\]

with C(x) and D(x) given by

\[
C(x) = \sum_{j=1}^{N} C_j(x), \qquad D(x) = \sum_{i\in\mathbb{Z}} \delta(x - i). \tag{27}
\]

In relation (26) the delta functions can be replaced by the function D, which is a train of delta functions, because the functions C_j only have a support of length unity, see (24). Finally, the interpolation can be written as

\[
\tilde{u} = (u D) * C, \tag{28}
\]

with ∗ denoting the convolution product. Here, the convolution function C depends on the interpolation matrix M, see Fig. 2.

As a consequence of relation (28), if the function C is continuous up to the p-th derivative, then ũ is also continuous up to the p-th derivative. Even stronger, the order of continuity of the function C is equal to the order of continuity of ũ. Furthermore, by the use of relation (28) exact interpolation can be constructed as well. (Exact interpolation is accomplished by F[C](k) = 1 for −0.5 ≤ k ≤ 0.5 and zero elsewhere; in this way only the original Fourier component is filtered out of the spectrum. Note that in this case C has infinite support.)
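For linear interpolation, relation (28) can be verified directly: the convolution function C is the tent function, and summing u(x_j) C(x − x_j) reproduces ordinary linear interpolation. A minimal NumPy sketch (the grid and data are arbitrary choices of ours):

```python
import numpy as np

# Relation (28) for linear interpolation: C is the tent function,
# C(x) = 1 - |x| on [-1, 1] and zero elsewhere.
def C(x):
    return np.maximum(0.0, 1.0 - np.abs(x))

rng = np.random.default_rng(0)
xj = np.arange(8.0)                  # rescaled grid, spacing 1
uj = rng.standard_normal(8)

x = 3.7
u_conv = np.sum(uj * C(x - xj))      # (u D) * C evaluated at x
u_ref = np.interp(x, xj, uj)         # ordinary linear interpolation
print(u_conv, u_ref)                 # identical
```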

In the remainder of this section we discuss the interpolation error. Before proceeding we need to prove the following theorem.

Theorem. ⟨e_κ, e_λ⟩₂ = 0 for κ ≠ λ. Here e_κ is the error in mode κ, e_κ = ϕ̃_κ − ϕ_κ, and ⟨⋅,⋅⟩₂ is the inner product related to the L₂-norm ∥⋅∥₂, defined by

\[
\langle f, g \rangle_2 = \int_0^m f(x)\, g^*(x)\, dx. \tag{29}
\]

Proof. First, ϕ̃_κ is written in the convolution form of relation (28),

\[
\tilde{\phi}_\kappa = I[\phi_\kappa] = (\phi_\kappa D) * C. \tag{30}
\]

Second, we take the Fourier transform of ϕ̃_κ, for some fixed κ₀, i.e.,

\[
\mathcal{F}[\tilde{\phi}_{\kappa_0}](k) = \mathcal{F}[(\phi_{\kappa_0} D) * C](k) = (\mathcal{F}[\phi_{\kappa_0}] * \mathcal{F}[D])(k)\, \mathcal{F}[C](k)
= m \sum_{i\in\mathbb{Z}} \delta\!\left(k - (i + \kappa_0)\right) \mathcal{F}[C](i + \kappa_0), \tag{31}
\]

which results in a train of delta functions with the prefactor given by F[C], see Fig. 2. Here, F[⋅] denotes the Fourier transform, given by

\[
\mathcal{F}[g](k) := \int_{-\infty}^{\infty} g(x)\, e^{-2\pi i k x}\, dx. \tag{32}
\]

For linear interpolation these functions are shown in Fig. 2. Next, ⟨eκ,eλ⟩2 can be written as ⟨eκ,eλ⟩2=⟨˜ϕκ,˜ϕλ⟩2−⟨˜ϕκ,ϕλ⟩2−⟨ϕκ,˜ϕλ⟩2+⟨ϕκ,ϕλ⟩2. Trivially ⟨ϕκ,ϕλ⟩2=0 for κ≠λ. Furthermore, ˜ϕκ consists of a discrete set of Fourier components, see relation (31). Using this relation, one can show that no common Fourier components exist for ˜ϕκ and ˜ϕλ or ϕλ for κ≠λ. Therefore ⟨˜ϕκ,˜ϕλ⟩2=0, ⟨˜ϕκ,ϕλ⟩2=0 and ⟨ϕκ,˜ϕλ⟩2=0 for κ≠λ implying ⟨eκ,eλ⟩2=0 as claimed.

Corollary. The orthogonality is important to estimate errors. When the error in u is computed as ∥ũ−u∥²₂, it can be rewritten as ∥ũ−u∥²₂ = ∑_κ |û_κ|² ∥e_κ∥²₂, which allows easy and straightforward computation of the errors.

Next, the error in one Fourier component is calculated. In this derivation we make use of the fact that ϕ̃_κ can be written as a sum of Fourier components, see Fig. 2 and relation (31). The relative error in one Fourier component can be written as

\[
\frac{\|\tilde{\phi}_\kappa - \phi_\kappa\|_2^2}{\|\phi_\kappa\|_2^2} = \frac{\|e_\kappa\|_2^2}{m}
= \frac{1}{m}\left\| -\phi_\kappa + \sum_{i\in\mathbb{Z}} \mathcal{F}[C](\kappa + i)\, \phi_{\kappa+i} \right\|_2^2
= \left(\mathcal{F}[C](\kappa) - 1\right)^2 + \sum_{i\neq 0} \left(\mathcal{F}[C](\kappa + i)\right)^2. \tag{33}
\]

From this expression one can see that the error can be computed directly from F[C]. The same can be done for the error in the l-th derivative, e_κ⁽ˡ⁾ = ϕ̃_κ⁽ˡ⁾ − ϕ_κ⁽ˡ⁾. The idea is to take the derivatives of the individual Fourier components, which results in

\[
\frac{\|e_\kappa^{(l)}\|_2^2}{\|\phi_\kappa^{(l)}\|_2^2} = \left(\mathcal{F}[C](\kappa) - 1\right)^2 + \sum_{i\neq 0} \left(\frac{\kappa + i}{\kappa}\right)^{2l} \left(\mathcal{F}[C](\kappa + i)\right)^2. \tag{34}
\]
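Relation (33) can be checked numerically for linear interpolation, for which F[C](k) = sinc²(k) (the Fourier transform of the tent function). The NumPy sketch below compares the spectral expression with a direct quadrature of the interpolation error of a single mode; the specific values of κ, m and the truncation limits are our choices:

```python
import numpy as np

# Relative error of a single Fourier mode, relation (33), for linear
# interpolation: F[C](k) = sinc(k)^2 (Fourier transform of the tent function).
FC = lambda k: np.sinc(k)**2          # np.sinc(k) = sin(pi k)/(pi k)

kappa = 0.25                          # dimensionless wave number, kappa = k/m
i = np.arange(-500, 501)
spectral = np.sum(np.where(i == 0, (FC(kappa) - 1.0)**2, FC(kappa + i)**2))

# Direct check: interpolate phi_kappa(x) = exp(2 pi i kappa x) linearly from
# its integer samples on [0, m] and average |error|^2 over the domain.
m = 8
xj = np.arange(m + 1.0)
phij = np.exp(2j * np.pi * kappa * xj)
x = np.linspace(0.0, m, 40001)
phi_tilde = np.interp(x, xj, phij.real) + 1j * np.interp(x, xj, phij.imag)
err2 = np.mean(np.abs(phi_tilde - np.exp(2j * np.pi * kappa * x))**2)
print(spectral, err2)                 # the two values agree
```

The cross terms between different modes vanish by the orthogonality theorem above, which is why the truncated sum over i suffices.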

The extension to the 3D case is rather straightforward and is therefore not reported here. The basic idea is to create 3D functions by multiplying the 1D components, this can be done for all functions and the basic equations remain the same.

6 Hermite interpolation

In this section we extend the theory of Section 5 to Hermite interpolation. We also show some special properties that hold for Hermite interpolation. In particular, we examine the case N=4, for which the second derivative of the interpolant becomes a piecewise linear function. Comparison with the actual second derivative shows that this piecewise linear function is optimal with respect to the L2-norm.

In order to extend the theory of Section 5 to Hermite interpolation with even N the same procedure is followed as in Section 5. Analogous to (23), ˜u(x) can be written as

\[
\tilde{u}(x) = \sum_{j=0}^{1} \sum_{l=1}^{N/2} C_{j,l}(x - x_j)\, \frac{d^{l-1}u}{dx^{l-1}}(x_j), \tag{35}
\]

where Cj,l and xj are given by

\[
C_{j,l}(x - j) = \begin{cases} \sum_{i=1}^{N} M_{l + jN/2,\, i}\; x^{i-1} & \text{for } 0 \le x < 1, \\ 0 & \text{elsewhere,} \end{cases} \qquad l \in \{1, 2, \ldots, \tfrac{N}{2}\},
\]
\[
x_j = \lfloor x \rfloor + j, \qquad j \in \{0, 1\}. \tag{36}
\]

Again following similar steps as in Section 5, ˜u(x) can be rewritten as

\[
\tilde{u}(x) = \sum_{j=0}^{1} \sum_{l=1}^{N/2} C_{j,l}(x - x_j) \int_{-\infty}^{\infty} \frac{d^{l-1}u}{dx^{l-1}}(y)\, \delta(y - x_j)\, dy
= \sum_{l=1}^{N/2} \int_{-\infty}^{\infty} \frac{d^{l-1}u}{dx^{l-1}}(y)\, D(y)\, C_l(x - y)\, dy, \tag{37}
\]

where D is given by relation (27) and Cl is given by Cl(x)=C0,l(x)+C1,l(x). In short, ˜u can be written as

\[
\tilde{u} = \sum_{l=1}^{N/2} \left(\frac{d^{l-1}u}{dx^{l-1}}\, D\right) * C_l, \tag{38}
\]

similar to relation (28). Here one can see that for Hermite interpolation multiple convolution functions Cl are needed which correspond to the derivatives and the function itself. Replacing u by ϕκ in (38) gives

\[
\tilde{\phi}_\kappa = I[\phi_\kappa] = (\phi_\kappa D) * \sum_{l=1}^{N/2} (2\pi i \kappa)^{l-1} C_l. \tag{39}
\]

In this way we find an expression similar to relation (30), where C has to be replaced by $\sum_{l=1}^{N/2} (2\pi i\kappa)^{l-1} C_l$. In conclusion, relations (33) and (34) can still be used.

Property.
For the error in the first derivative we have the following property:

\[
\langle e^{(1)}, 1 \rangle_2 = 0, \tag{40}
\]

where the inner product ⟨⋅,⋅⟩2 is defined on the unit interval, i.e.,

\[
\langle f, g \rangle_2 = \int_0^1 f(x)\, g^*(x)\, dx. \tag{41}
\]

Furthermore the error in the l-th derivative, e(l), is given by

\[
e^{(l)} = \frac{d^l\tilde{u}}{dx^l}(x) - \frac{d^l u}{dx^l}(x). \tag{42}
\]

Proof. One can rewrite part of the interpolation conditions for Hermite interpolation (5) in the following way

\[
\tilde{u}(1) - \tilde{u}(0) = u(1) - u(0) \quad\Leftrightarrow\quad \int_0^1 \frac{d\tilde{u}}{dx}\, dx = \int_0^1 \frac{du}{dx}\, dx. \tag{43}
\]

Here two interpolation conditions give one new condition which is equivalent to relation (40).

Corollary. Property (40) shows that the error in the first derivative does not have a constant component. Therefore the constant component of the first derivative is reproduced exactly with respect to the L2-norm.

Property. For the error in the second derivative in the case N = 4 we have

\[
\langle e^{(2)}, 1 \rangle_2 = 0, \qquad \langle e^{(2)}, x \rangle_2 = 0. \tag{44}
\]

Proof. One can rewrite the interpolation conditions (5) for N = 4 in the following way

\[
\tilde{u}(1) - \tilde{u}(0) - \tilde{u}'(0) = u(1) - u(0) - u'(0) \quad\Leftrightarrow\quad \int_0^1\!\!\int_0^\alpha \frac{d^2\tilde{u}}{dx^2}\, dx\, d\alpha = \int_0^1\!\!\int_0^\alpha \frac{d^2u}{dx^2}\, dx\, d\alpha,
\]
\[
\tilde{u}'(1) - \tilde{u}'(0) = u'(1) - u'(0) \quad\Leftrightarrow\quad \int_0^1 \frac{d^2\tilde{u}}{dx^2}\, dx = \int_0^1 \frac{d^2u}{dx^2}\, dx. \tag{45}
\]

The first relation in (44) follows immediately from the second condition in (45). The second relation in (44) is derived in the following way

\[
\langle e^{(2)}, x \rangle_2 = \int_0^1 x\, e^{(2)}(x)\, dx = \left[\alpha \int_0^\alpha e^{(2)}\, dx\right]_0^1 - \int_0^1\!\!\int_0^\alpha e^{(2)}\, dx\, d\alpha = \int_0^1 e^{(2)}\, dx = 0. \tag{46}
\]

Here, the first step is integration by parts; the double integral vanishes by the first relation of (45), and the remaining boundary term vanishes by the second relation of (45).

Corollary. Relation (44) implies that e⁽²⁾ has neither a constant nor a linear component. For N = 4 the second derivative of the interpolant is a piecewise linear function, and this means that there is no better approximation of this second derivative in the L2-norm by a piecewise linear function. This makes Hermite interpolation very interesting as a reference case, because we have now proven that its error is minimal.
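This optimality can be illustrated numerically (a NumPy sketch; M is the standard monomial-basis matrix of cubic Hermite interpolation, and the test function u(x) = x⁴ is our choice): the second derivative of the Hermite interpolant coincides with the least-squares linear fit of u″ on [0, 1].

```python
import numpy as np

# Cubic Hermite interpolant of u(x) = x^4 on [0, 1]; its second derivative
# is a linear function.
M = np.array([[1.0, 0.0, -3.0,  2.0],
              [0.0, 1.0, -2.0,  1.0],
              [0.0, 0.0,  3.0, -2.0],
              [0.0, 0.0, -1.0,  1.0]])
b = np.array([0.0, 0.0, 1.0, 4.0])    # (u(0), u'(0), u(1), u'(1)) for x^4
a = b @ M                             # monomial coefficients of u~
c0, c1 = 2.0 * a[2], 6.0 * a[3]       # u~''(x) = c0 + c1 x

# Least-squares linear fit p0 + p1 x of u''(x) = 12 x^2 on [0, 1]:
# normal equations with the Gram matrix of the basis {1, x}.
G = np.array([[1.0, 0.5], [0.5, 1.0/3.0]])
r = np.array([12.0/3.0, 12.0/4.0])    # <12 x^2, 1> and <12 x^2, x>
p0, p1 = np.linalg.solve(G, r)
print((c0, c1), (p0, p1))             # identical linear functions
```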

7 B-spline interpolation

In this section we start with explaining B-spline interpolation. The idea is to create an interpolant that is as smooth as possible. Later it is shown how the pseudo-spectral code can be used to execute this interpolation method efficiently. Furthermore, the interpolation method is optimized to produce small errors in the L2-norm. We first give the B-spline convolution functions, after which their matrix representation is given; finally, the transformation to the B-spline basis is derived.

In a spectral code FFTs are applied to transform data from real space to Fourier space and back. These FFTs are the most expensive step in the simulation, and therefore we want to keep the number of FFTs needed to a minimum. This is the reason why Hermite interpolation is not a good option, since extra FFTs are needed for the computation of the derivatives. An alternative is B-spline interpolation.

Figure 3: The first four uniform B-spline functions.

We require a high order of continuity of the interpolant. The highest order of continuity that can be obtained for an interpolant built from piecewise polynomial functions of degree N−1 is C^{N−2}. In this way the interpolant still matches the original function u(x) at the grid points x_j. Moreover, one can immediately see that n = N−1, where n is the highest degree of a polynomial test function for which the interpolation is still exact. This high level of continuity can be achieved by using B-spline functions [11]. The first four uniform B-spline basis functions B_{(N)} are shown in Fig. 3. These functions can be generated by means of convolutions in the following way

\[
B_{(1)}(x) = \begin{cases} 1 & \text{for } -0.5 \le x < 0.5, \\ 0 & \text{elsewhere,} \end{cases}
\]
\[
B_{(2)} = B_{(1)} * B_{(1)}, \qquad B_{(3)} = B_{(2)} * B_{(1)}, \qquad \ldots, \qquad B_{(N)} = B_{(N-1)} * B_{(1)}. \tag{47}
\]
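The recursion (47) can be reproduced numerically by repeated discrete convolution of a finely sampled box function (a NumPy sketch; the grid spacing h and the check against the known center value 2/3 of the cubic B-spline are our choices):

```python
import numpy as np

# Build B-splines by repeated convolution of a sampled box function, (47).
h = 1e-3                                   # sampling step (our choice)
box = np.ones(int(round(1.0 / h)))         # samples of B(1) on [-0.5, 0.5)
B = box.copy()
for _ in range(3):                         # B(2), B(3), B(4)
    B = np.convolve(B, box) * h            # discrete approx. of the integral

# B now samples the cubic B-spline B(4): degree 3, support of length 4,
# and (a known value) maximum 2/3 at the center.
print(B.max())
```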

These functions have the property that the N-th function is of degree N−1 and is C^{N−2}. Furthermore, the B-spline basis functions have local support of length N. The B-spline functions can be seen as convolution functions C as introduced in Section 5 and have a matrix representation. The relation between the functions B_{(N)} and the matrix representation is similar to relations (24) and (27), and is given by

\[
B_{(N)}(x) = \sum_{j=1}^{N} B_{(N),j}(x), \qquad
B_{(N),j}\!\left(x + \tfrac{N}{2} - j\right) = \begin{cases} \sum_{i=1}^{N} M_{(N),i,j}\, x^{i-1} & \text{for } 0 \le x < 1, \\ 0 & \text{elsewhere.} \end{cases} \tag{48}
\]

The matrix representation for the first four B-spline functions is as follows [20]

We still need to express u(x), x ∈ ℤ, in terms of B-spline basis functions, and thus find the transform from real space to the B-spline basis. Because the inverse transform, from the B-spline basis to real space, is somewhat easier, we start with this transformation first. From now on we omit the subindex (N). The coefficients of the B-spline basis are called u_B(x), and u(x) can be derived from them by the discrete convolution ∗_D in the following way, u = u_B ∗_D B_D. Here, B_D is given by