A fast computational method is given for the Fourier transform of the polyharmonic B-spline autocorrelation sequence in d dimensions. The approximation error decays exponentially with the number of terms taken into account. The algorithm is faster than a simple truncated-sum approach and its cost is virtually independent of the spline's order. The autocorrelation filter directly serves for various tasks related to polyharmonic splines, such as interpolation, orthonormalization, and wavelet basis design.
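As a point of reference, the truncated-sum baseline that the proposed method accelerates can be sketched in the simplest setting, a 1-D integer-order B-spline (the paper treats the polyharmonic, d-dimensional case; the degree n and truncation K below are illustrative choices):

```python
import numpy as np

# Truncated-sum baseline, illustrated in 1-D with an integer-order B-spline.
# The Fourier transform of the autocorrelation sequence is the alias sum
#     A(w) = sum_k |phihat(w + 2*pi*k)|^2,   phihat(w) = sinc(w/2)^(n+1),
# whose terms decay only algebraically, like |k|^(-2(n+1)); this slow
# convergence is what a fast method must overcome.

def sinc(x):
    return np.sinc(x / np.pi)  # np.sinc(t) = sin(pi t)/(pi t), so this is sin(x)/x

def autocorr_ft_truncated(w, n, K):
    """Sum the aliases k = -K..K for the B-spline of degree n."""
    k = np.arange(-K, K + 1)
    return np.sum(sinc((w + 2 * np.pi * k) / 2) ** (2 * (n + 1)))

# Sanity check for n = 1 (linear B-spline): the autocorrelation sequence is
# the cubic B-spline sampled at the integers (2/3 at 0, 1/6 at +-1), so
# A(w) = 2/3 + cos(w)/3 exactly.
w = 1.0
exact = 2 / 3 + np.cos(w) / 3
print(abs(autocorr_ft_truncated(w, n=1, K=2000) - exact))  # small only for large K
```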

Optical imaging techniques offer powerful solutions for capturing brain-network processing in animals, especially when activity is distributed over functionally distinct spatial domains. Despite the progress in imaging techniques, the standard analysis procedures and statistical assessments for this type of data are still limited. In this paper, we perform two in vivo non-invasive optical recording techniques in the mouse olfactory bulb, using a genetically expressed activity-reporter fluorescent protein (synaptopHfluorin) and intrinsic signals of the brain. For both imaging techniques, we show that the odour-triggered signals can be accurately parameterized using linear models. Fitting the models allows us to extract odour-specific signals with a reduced level of noise compared to standard methods. In addition, the models serve to evaluate statistical significance, using a wavelet-based framework that exploits spatial correlation at different scales. We propose an extension of this framework to extract activation patterns at specific wavelet scales. This method is especially well suited to detecting the odour inputs that segregate on the olfactory bulb in small spherical structures called glomeruli. Interestingly, with a proper selection of wavelet scales, we can isolate significantly activated glomeruli and thus determine the odour map in an automated manner. Comparison against manual detection of glomeruli shows the high accuracy of the proposed method. Therefore, beyond offering an advantageous alternative to existing treatments of optical imaging signals in general, our framework provides a procedure to dissect brain activation patterns on multiple scales with statistical control.

We consider the problem of sampling piecewise sinusoidal signals. Classical sampling theory does not enable perfect reconstruction of such signals since they are not band-limited. However, they can be characterized by a finite number of parameters, namely, the frequency, amplitude, and phase of the sinusoids and the location of the discontinuities. In this paper, we show that under certain hypotheses on the sampling kernel, it is possible to perfectly recover the parameters that define the piecewise sinusoidal signal from its sampled version. In particular, we show that, at least theoretically, it is possible to recover piecewise sine waves with arbitrarily high frequencies and arbitrarily close switching points. Extensions of the method are also presented such as the recovery of combinations of piecewise sine waves and polynomials. Finally, we study the effect of noise and present a robust reconstruction algorithm that is stable down to SNR levels of 7 [dB].

We provide a method for constructing regular sampling lattices in arbitrary dimensions together with an integer dilation matrix. Subsampling using this dilation matrix leads to a similarity-transformed version of the lattice with a chosen density reduction. These lattices are interesting candidates for multidimensional wavelet constructions with a limited number of subbands.
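A concrete (and classical) instance of such a lattice/dilation pair is the 2-D quincunx matrix sketched below; its determinant has magnitude 2, so subsampling keeps every other lattice point, and the retained points form a similarity-transformed copy of the square lattice. The window size is an arbitrary choice; the construction described above is more general (arbitrary dimension and density reduction):

```python
import numpy as np

# Toy 2-D illustration of subsampling with an integer dilation matrix.  The
# quincunx matrix D has |det D| = 2: keeping only the points of D Z^2 halves
# the lattice density, and the kept points form a rotated-and-scaled
# (similar) copy of the original square lattice.
D = np.array([[1, 1], [1, -1]])

# All integer points in a small window ...
pts = np.array([(i, j) for i in range(-8, 9) for j in range(-8, 9)])
# ... and the sublattice D Z^2: points whose preimage D^{-1} x is integer.
pre = pts @ np.linalg.inv(D).T
kept = pts[np.all(np.abs(pre - np.round(pre)) < 1e-9, axis=1)]

print(abs(np.linalg.det(D)))   # 2.0 : the density reduction factor
print(len(kept) / len(pts))    # close to 1/2
```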

The field of signal processing is replete with exemplary problems where the measurements amount to time-delayed and amplitude-scaled echoes of some template function or pulse. When the inter-pulse spacing is favorable, something as primitive as a matched filter serves the purpose of identifying time delays and amplitudes. When the inter-pulse spacing poses an algorithmic challenge, high-resolution methods such as finite-rate-of-innovation (FRI) may be used. However, in many practical cases of interest, the template function may be distorted due to physical properties of propagation and transmission. Such cases cannot be handled well by existing signal models. Inspired by problems in spectroscopy, radar, photoacoustic imaging, and ultra-wideband arrays, on which we base our case studies, we take a step in this work towards recovering spikes from time-varying pulses. To this end, we re-purpose the FRI method and extend its utility to the case of phase-distorted pulses. Application of our algorithm to the above-mentioned case studies results in a substantial improvement in peak signal-to-noise ratio, thus promising interesting future directions.
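To make the "favorable spacing" case concrete, here is a minimal matched-filter sketch with a hypothetical Gaussian template and two well-separated echoes (all parameters are illustrative, not taken from the case studies):

```python
import numpy as np

# Matched-filter sketch: the measurement is a sum of amplitude-scaled,
# delayed copies of a known pulse; cross-correlation peaks recover the
# delays, and the normalized peak heights recover the amplitudes.
rng = np.random.default_rng(0)

pulse = np.exp(-0.5 * (np.arange(-10, 11) / 2.0) ** 2)   # known template
true_delays, true_amps = [40, 120], [1.0, 0.6]

y = np.zeros(200)
for d, a in zip(true_delays, true_amps):
    y[d - 10:d + 11] += a * pulse
y += 0.01 * rng.standard_normal(y.size)

# Matched filter = correlation with the template, normalized by its energy.
corr = np.correlate(y, pulse, mode="same") / np.sum(pulse ** 2)

# Pick the two largest well-separated peaks.
order = np.argsort(corr)[::-1]
peaks = []
for idx in order:
    if all(abs(idx - p) > 20 for p in peaks):
        peaks.append(idx)
    if len(peaks) == 2:
        break
print(sorted(peaks))                               # ~ [40, 120]
print([corr[p] for p in sorted(peaks)])            # ~ [1.0, 0.6]
```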

The ARSIS concept is designed to increase the spatial resolution of an image without modification of its spectral contents, by merging structures extracted from a higher-resolution image of the same scene, but in a different spectral band. It makes use of wavelet transforms and multiresolution analysis. It is currently applied in an operational way with dyadic wavelet transforms, which limit the merging to images whose resolution ratio is a power of 2. Rational discrete wavelet transforms can be approximated numerically by rational filter banks, which would enable a more general merging. Indeed, in theory, the ratio of the resolutions of the images to merge is a power of a certain family of rational numbers. The aim of this paper is to examine whether the use of those approximations of rational wavelet transforms is efficient within the ARSIS concept. This work relies on a particular case: the merging of a 10 m SPOT Panchromatic image and a 30 m Landsat Thematic Mapper multispectral image to synthesize a 10 m multispectral image, TM-HR.

Contrary to the usual processing approaches which consist in approximating all the pixels of an image (often by optimizing some criterion), we propose to approximate the processing itself using a linear combination of a few basic non-linear processings --- "thresholds". Accordingly, we term this approach "Linear Expansion of Thresholds" (LET).
The whole adaptivity of LET algorithms is then confined to the few (linear) coefficients of the representation, which can be optimized using the same criterion as the one chosen in usual approaches (e.g., Maximum a Posteriori, sparse regularization, total variation, etc.) or statistical estimates of the Mean-Square Error (MSE) like Stein's Unbiased Risk Estimate (SURE). This approach lends itself quite well to iterations (i-LET), allowing first-order solutions to be refined.
The main advantage of the LET representation is its high implementation efficiency due to a large dimensionality reduction (from the many pixels of an image, to the few coefficients of the LET representation), and due to its linearity, which preserves the convexity and quadraticity of the optimization criterion.
This approach has been successfully applied to several image distortion problems: image denoising/deconvolution (MSE estimate-based criterion) and sparse image restoration (data term + l1 regularization). In all these applications, the quality of the results either reaches or sets a new state of the art, while the algorithms are substantially faster.
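A minimal sketch of the LET idea, with a pointwise denoiser made of two elementary processings (identity and a soft threshold; the threshold value and test signal are arbitrary illustrations). The coefficients are fitted here against the clean signal, i.e., an oracle MSE; in the actual framework this oracle is replaced by one of the criteria listed above, such as SURE:

```python
import numpy as np

# LET sketch: the denoiser is F(y) = a1*y + a2*soft(y, T).  Because F is
# linear in (a1, a2), the optimal coefficients solve a tiny least-squares
# problem, regardless of how non-linear the elementary processings are.
rng = np.random.default_rng(1)

x = np.sign(np.sin(np.arange(1000) / 30.0))          # clean test signal
sigma = 0.5
y = x + sigma * rng.standard_normal(x.size)          # noisy observation

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
basis = np.stack([y, soft(y, 2 * sigma)], axis=1)    # elementary processings

a, *_ = np.linalg.lstsq(basis, x, rcond=None)        # the few LET coefficients
x_hat = basis @ a

mse_in = np.mean((y - x) ** 2)
mse_out = np.mean((x_hat - x) ** 2)
print(mse_out < mse_in)   # True: the fitted combination reduces the MSE
```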

Image denoising consists in approximating the noiseless image by performing some, usually non-linear, processing of the noisy image. Most standard techniques involve assumptions on the result of this processing (sparsity, low high-frequency contents, etc.); i.e., the denoised image. Instead, the SURE-LET methodology that we promote consists in approximating the processing itself (seen as a function) by a linear combination of elementary non-linear processings (LET: Linear Expansion of Thresholds), and in optimizing the coefficients of this combination by minimizing a statistically unbiased estimate of the Mean Square Error (SURE: Stein's Unbiased Risk Estimate, for additive Gaussian noise).

This tutorial will introduce the technique to the audience and outline its advantages (fast, noise-robust, flexible, image-adaptive). A comprehensive set of results will be shown and compared with the state of the art. Extensions of the approach to Poisson noise reduction, with application to microscopy imaging, will also be shown.

The problem of reconstructing or estimating partially observed or sampled signals is an old and important one, and finds application in many areas that involve data acquisition. Traditional sampling and reconstruction approaches are heavily influenced by the classical Shannon sampling theory which gives an exact sampling and interpolation formula for bandlimited signals.

Recently, the classical Shannon sampling framework has been extended to classes of non-bandlimited structured signals, which we call signals with Finite Rate of Innovation. In these new sampling schemes, the prior that the signal is sparse in a basis or in a parametric space takes the form of a linear system of equations expressing the annihilation of signal-derived quantities. The coefficients of this annihilation system are then related in a non-linear way (e.g., polynomial roots) to the sparse signal parameters; i.e., its "innovations". This leads to new exact reconstruction formulas and fast algorithms that achieve such reconstructions.
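The annihilation mechanics can be sketched on the canonical example of K Diracs observed through a few Fourier coefficients (all values below are illustrative): a linear system yields the annihilating filter, polynomial rooting gives the locations, and a second linear (Vandermonde) system gives the amplitudes, i.e., the innovations:

```python
import numpy as np

# FRI sketch: yhat[m] = sum_k a_k u_k^m with u_k = exp(-2j*pi*t_k).  A filter
# h of length K+1 whose z-transform has roots u_k annihilates yhat; solving
# the linear annihilation system and rooting h recovers the locations.
K = 2
t = np.array([0.2, 0.55])          # unknown locations in [0, 1)
a = np.array([1.0, 0.7])           # unknown amplitudes
m = np.arange(8)                   # N = 8 >= 2K measurements
u = np.exp(-2j * np.pi * t)
yhat = (u[None, :] ** m[:, None]) @ a

# Annihilation system (Toeplitz rows of yhat), with h[0] fixed to 1.
A = np.array([yhat[i + K - np.arange(K + 1)] for i in range(len(m) - K)])
h = np.concatenate(([1.0], np.linalg.lstsq(-A[:, 1:], A[:, 0], rcond=None)[0]))

u_est = np.roots(h)                # non-linear step: polynomial roots
t_est = np.sort(np.mod(-np.angle(u_est) / (2 * np.pi), 1))

# Amplitudes from the (linear) Vandermonde system.
V = np.exp(-2j * np.pi * np.outer(m, t_est))
a_est = np.real(np.linalg.lstsq(V, yhat, rcond=None)[0])
print(t_est, a_est)                # ~ [0.2, 0.55] and [1.0, 0.7]
```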

We will show how these algorithms are able to deal successfully with noise issues, leading to statistically optimal recovery, and we will exemplify this theory with a number of applications that benefit from these novel schemes.

We describe a property satisfied by a class of nonlinear systems of equations that are of the form $\F(\Omega)\X=\Y$. Here $\F(\Omega)$ is a matrix that depends on an unknown $K$-dimensional vector $\Omega$, $\X$ is an unknown $K$-dimensional vector, and $\Y$ is a vector of $N$ ($\ge K$) given measurements. Such equations are encountered in superresolution or sparse signal recovery problems known as ``Finite Rate of Innovation'' signal reconstruction.

We show how this property allows us to solve explicitly for the unknowns $\Omega$ and $\X$ by a direct, non-iterative algorithm that involves the resolution of two linear systems of equations and the extraction of the roots of a polynomial, and we give examples of problems where this type of solution has been found useful.

A novel methodology for restoring signals/images from noisy measurements will be presented. Contrary to the usual approaches (Bayesian, sparsity-based), there is no prior modeling of the noiseless signal. Instead, it is the reconstruction algorithm itself that is parametrized, or approximated (using a Linear Expansion of Thresholds: LET).

These parameters are then optimized by minimizing an estimate of the MSE between the (unknown) noiseless signal and the one processed by the algorithm. Remarkably, it is possible to build such an estimate --- Stein's Unbiased Risk Estimate (SURE) --- using the noisy signal only, and without making any hypothesis on the noiseless signal. The only hypothesis is on the statistics of the noise (additive, Gaussian).

Examples on image denoising validate the efficiency of this methodology.
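In its simplest instance (a plain soft threshold, known noise variance, illustrative synthetic signal), Stein's estimate can be checked directly against the true MSE:

```python
import numpy as np

# SURE for soft-thresholding in additive Gaussian noise.  Stein's formula
#   SURE(t) = -N*sigma^2 + ||soft(y,t) - y||^2 + 2*sigma^2 * #{ |y_i| > t }
# estimates the MSE from the noisy data alone, so the threshold can be
# selected without ever seeing the clean signal.
rng = np.random.default_rng(2)

N, sigma = 100_000, 1.0
x = rng.choice([0.0, 3.0], size=N, p=[0.9, 0.1])   # sparse clean signal
y = x + sigma * rng.standard_normal(N)

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

ts = np.linspace(0.1, 3.0, 30)
sure = [(-N * sigma**2 + np.sum(np.minimum(y**2, t**2))
         + 2 * sigma**2 * np.sum(np.abs(y) > t)) / N for t in ts]
mse = [np.mean((soft(y, t) - x) ** 2) for t in ts]

print(max(abs(s - e) for s, e in zip(sure, mse)))   # small: SURE tracks the MSE
print(ts[np.argmin(sure)], ts[np.argmin(mse)])      # nearly identical minimizers
```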

In this paper, we present a new algorithm for the design of orthonormal two-band rational filter banks. Owing to the connection between iterated rational filter banks and rational wavelets, this is also a design algorithm for orthonormal rational wavelets. It is basically a simple iterative procedure, which explains its exponential convergence and adaptability under various linear constraints (e.g., regularity). Although the filters obtained from this algorithm are suboptimally designed, they show excellent frequency selectivity.

After an in-depth account of the algorithm, we discuss the properties of the rational wavelets generated by some designed filters. In particular, we stress the possibility of designing "almost" shift-error-free wavelets, which allows the implementation of a rational wavelet transform.

For FIR filters, limit functions generated in iterated rational schemes are not invariant under shift operations, unlike what happens in the dyadic case: this feature prevents an analysis iterated rational filter bank (AIRFB) from behaving exactly like a discrete wavelet transform, even though an adequate choice of the generating filter makes it possible to minimize its consequences. This paper indicates how to compute the error between an "average" shifted function and these limit functions, an open problem until now. Connections are also pointed out between this shift error and the selectivity of the AIRFB.

This paper proposes a regular third-of-an-octave filter bank for high-fidelity audio coding. The originality here is twofold: first, the filter bank is an iterated orthonormal rational filter bank for which the generating filters have been designed so that its outputs closely approximate a wavelet transform. This is different from the known coding algorithms, which all use an integer filter bank, and most often a uniform one. Second, the masking procedure itself is modeled with the help of a wavelet transform, unlike the classical procedure, in which a short-time spectrum is computed and which gives rise to unwanted preecho effects. The masking procedure is then made equivalent to a quantization procedure. A simple non-optimized algorithm has been worked out in order to show the benefits of such a structure, especially in terms of preecho (which is perceptually inaudible), and the disadvantages, especially as far as delay is concerned.

This thesis focuses on the iteration of discrete schemes that allow fractional sampling: this is a generalization of better-known time-frequency tools, namely the dyadic schemes. It is shown in particular how classical results can be extended to the ``rational'' case, and how the new problems that arise are solved.

One of the most interesting results is indeed the existence of limit functions associated with the iterated schemes, thus providing an interpretation of the iterated rational filter bank as the discrete form of a time-scale transform. However, the fact that shift invariance between these functions is not preserved prevents the time-scale transform from behaving exactly as a wavelet transform. The amount of shift error is quantified under the name ``amnesia''. The properties of the limit functions (regularity and amnesia, among others) are studied in detail, while the implications for the iterated filter bank are made explicit.

On the other hand, the discrete features of the filter bank are studied as far as implementation (finite precision) and filter design are concerned: in the latter case, an algorithm which proved to be very efficient is described. Finally, an application to high-fidelity sound coding has been implemented: a new formulation of the psychoacoustic masking effect in the form of a wavelet transform, together with the theoretical considerations developed in the previous chapters, leads to a new coding algorithm whose main characteristic is the inaudibility of the classical preecho effect.

Some properties of two-band filter banks with rational rate changes ("rational filter banks") are first reviewed. Focusing then on iterated rational filter banks, compactly supported limit functions are obtained, in the same manner as previously done for dyadic schemes, allowing a characterization of such filter banks. These functions are carefully studied and the properties they share with the dyadic case are highlighted. They are experimentally observed to verify a "shift property" (strictly verified in the dyadic case) up to an error which can be made arbitrarily small when their regularity increases. In this case, the high-pass outputs of an iterated filter bank can be very close to samples of a discrete wavelet transform with the same rational dilation factor. Straightforward extension of the formalism of multiresolution analysis is also made. Finally, it is shown that if one is ready to put up with the loss of the shift property, rational iterated filter banks can be used in the same manner as if they were dyadic filter banks, with the advantage that rational dilation factors can be chosen closer to 1.

The term “Rational Filter Bank” (RFB) stands for “Filter Bank with Rational Rate Changes”. A critically sampled analysis two-band RFB is shown with its synthesis counterpart in Figure 1. G stands typically for a low-pass FIR filter, whereas H is a high-pass FIR filter. In this paper, we are interested in the iteration of the sole low-pass branch, which leads, in the integer case (q = 1), to a wavelet decomposition.

Kovacevic and Vetterli have wondered whether iterated RFBs could also involve a discrete wavelet transform. Actually, Daubechies proved that whenever p/q is not an integer and G is FIR, this could not be the case. We show here that, despite this discouraging feature, there still exists not one single (shifted) function, as in the integer case, but an infinite set of compactly supported functions φs(t). More importantly, under certain conditions, these functions appear to be "almost" the shifted versions of one sole function. These φs are constructed in the same way as in the dyadic case (p = 2, q = 1), that is to say, by the iteration of the low-pass branch of a synthesis RFB, but in this case the initialization is meaningful.
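The iteration just described can be sketched numerically. With p/q = 3/2 and an arbitrary short low-pass filter (purely illustrative; designed filters are the object of the related papers), the support of the iterated impulse response grows by a factor p/q per iteration, the discrete trace of compactly supported limit functions at the rational dilation:

```python
import numpy as np

# Iterating the low-pass branch of a p/q rational scheme (upsample by p,
# filter by g, downsample by q), starting from a unit impulse.  The length of
# the iterate grows geometrically with ratio ~ p/q = 1.5.
p, q = 3, 2
g = np.array([1.0, 3.0, 4.0, 3.0, 1.0])
g *= p / g.sum()                       # low-pass normalization: sum(g) = p

def iterate(c):
    up = np.zeros(p * len(c) - (p - 1))
    up[::p] = c                        # upsample by p
    return np.convolve(up, g)[::q]     # filter, then downsample by q

c = np.array([1.0])
lengths = []
for _ in range(12):
    c = iterate(c)
    lengths.append(len(c))

ratios = np.array(lengths[1:]) / np.array(lengths[:-1])
print(ratios[-3:])   # approaches p/q = 1.5
```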

We show the feasibility and the potential of a new signal processing algorithm for the high-resolution deconvolution of OCT signals.

Our technique relies on the description of the measurements in a parametric form, each set of four parameters describing the optical characteristics of a physical interface (e.g., complex refractive index, depth). Under the hypothesis of a Gaussian source light, we show that it is possible to recover the 4K parameters corresponding to K interfaces using as few as 4K uniform samples of the OCT signal. With noisy data, we can expect the robustness of our method to increase with the oversampling rate—or with the redundancy of the measurements.

The validation results show that the quality of the estimation of the parameters (in particular the depth of the interfaces) is closely linked to the noise level of the OCT measurements—and not to the coherence length of the source light—and to their degree of redundancy.

Signal acquisition and reconstruction is at the heart of signal processing, and sampling theorems provide the bridge between the continuous and the discrete-time worlds. The most celebrated and widely used sampling theorem is often attributed to Shannon (and many others, from Whittaker to Kotel′nikov and Nyquist, to name a few) and gives a sufficient condition, namely bandlimitedness, for an exact sampling and interpolation formula. The sampling rate, at twice the maximum frequency present in the signal, is usually called the Nyquist rate. Bandlimitedness, however, is not necessary as is well known but only rarely taken advantage of [1]. In this broader, nonbandlimited view, the question is: when can we acquire a signal using a sampling kernel followed by uniform sampling and perfectly reconstruct it?

The Shannon case is a particular example, where any signal from the subspace of bandlimited signals, denoted by BL, can be acquired through sampling and perfectly interpolated from the samples. Using the sinc kernel, or ideal low-pass filter, nonbandlimited signals will be projected onto the subspace BL. The question is: can we beat Shannon at this game, namely, acquire signals from outside of BL and still perfectly reconstruct? An obvious case is bandpass sampling and variations thereof. Less obvious are sampling schemes taking advantage of some sort of sparsity in the signal, and this is the central theme of this article. That is, instead of generic bandlimited signals, we consider the sampling of classes of nonbandlimited parametric signals. This allows us to circumvent Nyquist and perfectly sample and reconstruct signals using sparse sampling, at a rate characterized by how sparse they are per unit of time. In some sense, we sample at the rate of innovation of the signal by complying with Occam's razor principle [known as Lex Parcimoniæ or Law of Parsimony: Entia non svnt mvltiplicanda præter necessitatem, or, “Entities should not be multiplied beyond necessity” (from Wikipedia)].

Besides Shannon's sampling theorem, a second basic result that permeates signal processing is certainly Heisenberg's uncertainty principle, which suggests that a singular event in the frequency domain will be necessarily widely spread in the time domain. A superficial interpretation might lead one to believe that a perfect frequency localization requires a very long time observation. That this is not necessary is demonstrated by high resolution spectral analysis methods, which achieve very precise frequency localization using finite observation windows [2], [3]. The way around Heisenberg resides in a parametric approach, where the prior that the signal is a linear combination of sinusoids is brought to bear.

If by now you feel uneasy about slaloming around Nyquist, Shannon, and Heisenberg, do not worry. Estimation of sparse data is a classic problem in signal processing and communications, from estimating sinusoids in noise, to locating errors in digital transmissions. Thus, there is a wide variety of available techniques and algorithms. Also, the best possible performance is given by the Cramér-Rao lower bounds for this parametric estimation problem, and one can thus check how close to optimal a solution actually is.

We are thus ready to pose the basic questions of this article. Assume a sparse signal (be it in continuous or discrete time) observed through a sampling device that is a smoothing kernel followed by regular or uniform sampling. What is the minimum sampling rate (as opposed to Nyquist's rate, which is often infinite in cases of interest) that allows recovery of the signal? What classes of sparse signals are possible? What are good observation kernels, and what are efficient and stable recovery algorithms? How does observation noise influence recovery, and what algorithms will approach optimal performance? How will these new techniques impact practical applications, from inverse problems to wideband communications? And finally, what is the relationship between the presented methods and classic methods as well as the recent advances in compressed sensing and sampling?

Wavelet theory was born in the mid-1980s in response to the time-frequency resolution problems of Fourier-type methods. Indeed, many non-stationary signals call for an analysis whose spectral (resp. temporal) resolution varies with the temporal (resp. spectral) localization. It is to provide this flexibility that wavelets, and with them a new analysis concept called "multi-resolution" or "multi-scale" analysis, were brought to light.

After a brief presentation of the continuous wavelet transform, we shall focus on its discrete version, notably the Mallat algorithm, which is for the wavelet transform what the FFT is for the Fourier transform. We shall also consider the important problem of the design of wavelet generator filters (Daubechies filters, for example).

Furthermore, we shall study some recent generalizations or extensions (in particular, multi-wavelets, wavelet packets, and frames) that were motivated by certain limitations of wavelet theory.

Finally, we shall discuss some applications that lie behind the present success of wavelets and, more generally, of time-scale methods (compression and denoising, image alignment, etc.).

One of the aims of this chapter will thus be to demonstrate the cross-fertilization between sometimes quite theoretical approaches, where mathematics and engineering sciences are happily united.

We propose a new approach to image denoising, based on the image-domain minimization of an estimate of the mean squared error—Stein's unbiased risk estimate (SURE). Unlike most existing denoising algorithms, using the SURE makes it needless to hypothesize a statistical model for the noiseless image. A key point of our approach is that, although the (nonlinear) processing is performed in a transformed domain—typically, an undecimated discrete wavelet transform, but we also address nonorthonormal transforms—this minimization is performed in the image domain. Indeed, we demonstrate that, when the transform is a “tight” frame (an undecimated wavelet transform using orthonormal filters), separate subband minimization yields substantially worse results. In order for our approach to be viable, we add another principle, that the denoising process can be expressed as a linear combination of elementary denoising processes—linear expansion of thresholds (LET). Armed with the SURE and LET principles, we show that a denoising algorithm merely amounts to solving a linear system of equations which is obviously fast and efficient. Quite remarkably, the very competitive results obtained by performing a simple threshold (image-domain SURE optimized) on the undecimated Haar wavelet coefficients show that the SURE-LET principle has a huge potential.

Estimating the displacements between two images is often addressed using a small-displacement assumption, which leads to what is known as the optical flow equation. We study the quality of the underlying approximation for the recently developed Local All-Pass (LAP) optical flow algorithm, which is based on another approach—displacements result from filtering. While the simplest version of LAP computes only first-order differences, we show that the order of LAP approximation is quadratic, unlike standard algorithms based on the optical flow equation, for which this approximation is only linear. More generally, the order of approximation of the LAP algorithm is twice the differentiation order involved. The key step in the derivation is the use of Padé approximants.

The regularity property was first introduced by wavelet theory for octave-band dyadic filter banks. In the present work, the authors provide a detailed theoretical analysis of the regularity property in the more flexible case of filter banks with rational sampling changes. Such filter banks provide a finer analysis of fractions of an octave, and regularity is as important as in the dyadic case. Sharp regularity estimates for any filter bank are given. The major difficulty of the rational case, as compared with the dyadic case, is that one obtains wavelets that are not shifted versions of each other at a given scale. It is shown, however, that, under regularity conditions, shift invariance can almost be obtained. This is a desirable property for, e.g., coding applications and for efficient filter bank implementation of a continuous wavelet transform.

We consider the approximation (either interpolation or least-squares) of L² functions in the shift-invariant space V_T = span_{k∈ℤ} {φ(t/T − k)} that is generated by the single shifted function φ. We measure the approximation error in an L² sense and evaluate the asymptotic equivalent of this error as the sampling step T tends to zero. Let f ∈ L² and let f_T be its approximation in V_T. It is well known that, if φ satisfies the Strang-Fix conditions of order L, and under mild technical constraints, ‖f − f_T‖_{L²} = O(T^L).

In this presentation, however, we want to be more accurate and concentrate on the constant C_φ such that ‖f − f_T‖_{L²} = C_φ ‖f^(L)‖_{L²} T^L + o(T^L).
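The O(T^L) rate is easy to verify numerically. For piecewise-linear interpolation (L = 2) of a smooth test function, halving T should divide the L² error by about 2^L = 4 (the grids and test function below are arbitrary choices):

```python
import numpy as np

# Empirical check of ||f - f_T|| = O(T^L) for linear-spline interpolation,
# for which the Strang-Fix order is L = 2.

def l2_error(T):
    knots = np.arange(0.0, 10.0 + T, T)            # sampling grid, step T
    x = np.linspace(0.0, 9.8, 20_001)              # fixed evaluation grid
    f_T = np.interp(x, knots, np.sin(knots))       # piecewise-linear interpolant
    return np.sqrt(np.mean((np.sin(x) - f_T) ** 2))

e1, e2 = l2_error(0.1), l2_error(0.05)
print(e1 / e2)   # close to 2**L = 4
```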

We present a simple, original method to improve piecewise-linear interpolation with uniform knots: we shift the sampling knots by a fixed amount, while enforcing the interpolation property. We determine the theoretical optimal shift that maximizes the quality of our shifted linear interpolation. Surprisingly enough, this optimal value is nonzero and close to 1⁄5.

We confirm our theoretical findings by performing several experiments: a cumulative rotation experiment and a zoom experiment. Both show a significant increase of the quality of the shifted method with respect to the standard one. We also observe that, in these results, we get a quality that is similar to that of the computationally more costly “high-quality” cubic convolution.
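A minimal implementation of the shifted scheme (boundary handling simplified to c[−1] = 0; the value τ = 0.21 follows the "close to 1⁄5" optimum mentioned above). The one-tap recursive prefilter is what enforces the interpolation property despite the shift:

```python
import numpy as np

# Shifted linear interpolation: the linear B-spline knots are moved by a
# fixed tau, and a one-tap recursive prefilter restores the interpolation
# property   f[k] = (1 - tau) * c[k] + tau * c[k-1].
tau = 0.21

def prefilter(f):
    c = np.empty_like(f)
    c[0] = f[0] / (1 - tau)                   # causal recursion, c[-1] = 0
    for k in range(1, len(f)):
        c[k] = (f[k] - tau * c[k - 1]) / (1 - tau)
    return c

def evaluate(c, x):
    """Reconstruct f(x) = sum_n c[n] * beta1(x - n - tau)."""
    n = int(np.floor(x - tau))
    w = (x - tau) - n                         # position inside [n, n+1)
    left = c[n] if 0 <= n < len(c) else 0.0
    right = c[n + 1] if 0 <= n + 1 < len(c) else 0.0
    return (1 - w) * left + w * right

f = np.sin(np.arange(20) / 3.0)
c = prefilter(f)
# Interpolation property: the shifted model still reproduces the samples.
print(max(abs(evaluate(c, k) - f[k]) for k in range(20)))   # ~ 0
```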

Erratum

p. 712, second column, fourth line below equation (13), there is a typographical error. The corrected filter should read sinc²(ω⁄2π) instead of sin c²(ω⁄2π).

We present a procedure for designing interpolation kernels that are adapted to time signals; i.e., they are causal, even though they do not have finite support. The considered kernels are obtained by digital IIR filtering of a finite-support function that has maximum approximation order. We show how to build these kernels starting from the all-pole digital filter, and we give some practical design examples.

Every now and then, a new design of an interpolation kernel shows up in the literature. While interesting results have emerged, the traditional design methodology proves laborious and is riddled with very large systems of linear equations that must be solved analytically. In this paper, we propose to ease this burden by providing an explicit formula that will generate every possible piecewise-polynomial kernel given its degree, its support, its regularity, and its order of approximation. This formula contains a set of coefficients that can be chosen freely and do not interfere with the four main design parameters; it is thus easy to tune the design to achieve any additional constraints that the designer may care for.

We present a simple, original method to improve piecewise linear interpolation with uniform knots: We shift the sampling knots by a fixed amount, while enforcing the interpolation property.

Thanks to a theoretical analysis, we determine the optimal shift that maximizes the quality of our shifted linear interpolation. Surprisingly enough, this optimal value is nonzero and it is close to 1 ⁄ 5.

We confirm our theoretical findings by performing a cumulative rotation experiment, which shows a significant increase of the quality of the shifted method with respect to the standard one. Most interesting is the fact that we get a quality similar to that of high-quality cubic convolution at the computational cost of linear interpolation.

We consider the problem of interpolating a signal using a linear combination of shifted versions of a compactly-supported basis function φ(x). We first give the expression of the φ's that have minimal support for a given accuracy (also known as "approximation order"). This class of functions, which we call maximal-order-minimal-support functions (MOMS), is made of linear combinations of the B-spline of same order and of its derivatives.

We provide the explicit form of the MOMS that maximize the approximation accuracy when the step size is small enough. We compute the sampling gain obtained by using these optimal basis functions over the splines of the same order. We show that it is already substantial for small orders and that it further increases with the approximation order L. When L is large, this sampling gain becomes linear; more specifically, its exact asymptotic expression is 2L ⁄ (π e). Since the optimal functions are continuous, but not differentiable, for even orders, and even only piecewise continuous for odd orders, our result implies that regularity has little to do with approximation performance.

These theoretical findings are corroborated by experimental evidence that involves compounded rotations of images.
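As an illustration, the cubic member of this family (commonly called the O-MOMS of degree 3) has a simple closed piecewise form, β3(x) + (1/42)β3''(x), with the rational coefficients reported in the literature; exact arithmetic makes the sanity checks exact.

```python
from fractions import Fraction as F

def omoms3(x):
    # Cubic O-MOMS kernel: beta3(x) + (1/42)*beta3''(x), written piecewise.
    # Continuous but not differentiable at the knots, yet it has
    # approximation order 4 with the minimal support of 4.
    x = abs(x)
    if x < 1:
        return F(1, 2) * x**3 - x**2 + F(1, 14) * x + F(13, 21)
    if x < 2:
        return -F(1, 6) * x**3 + x**2 - F(85, 42) * x + F(29, 21)
    return F(0)
```

The integer shifts of this kernel still sum to one (a necessary condition for its approximation order), which can be checked exactly with rational arithmetic.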

We extend the classical interpolation method to generalized interpolation. This extension is done by replacing the interpolating function by a non-interpolating function that is applied to prefiltered data, in order to preserve the interpolation condition. We show, both theoretically and practically, that this approach performs much better than classical methods, for the same computational cost.

We investigate the functions of given approximation order L that have the smallest support. These are shown to be linear combinations of the B-spline of degree L-1 and of its L-1 first derivatives. We then show how to find the functions that minimize the asymptotic approximation constant within this finite-dimensional space; in particular, a tractable induction relation is worked out. Using these functions instead of splines, we observe that the approximation error is dramatically reduced, not only in the limit as the sampling step tends to zero, but also for larger steps up to the Shannon rate. Finally, we show that these optimal functions satisfy a scaling equation, although one less simple than the usual two-scale difference equation.

In a companion paper (see Self-Similarity: Part I—Splines and Operators), we characterized the class of scale-invariant convolution operators: the generalized fractional derivatives of order γ. We used these operators to specify regularization functionals for a series of Tikhonov-like least-squares data fitting problems and proved that the general solution is a fractional spline of twice the order. We investigated the deterministic properties of these smoothing splines and proposed a fast Fourier transform (FFT)-based implementation. Here, we present an alternative stochastic formulation to further justify these fractional spline estimators. As suggested by the title, the relevant processes are those that are statistically self-similar; that is, fractional Brownian motion (fBm) and its higher order extensions. To overcome the technical difficulties due to the nonstationary character of fBm, we adopt a distributional formulation due to Gel′fand. This allows us to rigorously specify an innovation model for these fractal processes, which rests on the property that they can be whitened by suitable fractional differentiation. Using the characteristic form of the fBm, we then derive the conditional probability density function (PDF) p(BH(t)|Y), where Y = {BH(k)+n[k]}k∈Z are the noisy samples of the fBm BH(t) with Hurst exponent H. We find that the conditional mean is a fractional spline of degree 2H, which proves that this class of functions is indeed optimal for the estimation of fractal-like processes. The result also yields the optimal [minimum mean-square error (MMSE)] parameters for the smoothing spline estimator, as well as the connection with kriging and Wiener filtering.

We consider the problem of estimating a fractional Brownian motion known only from its noisy samples at the integers. We show that the optimal estimator can be expressed using a digital Wiener-like filter followed by a simple time-variant correction accounting for nonstationarity.

Moreover, we prove that this estimate lives in a symmetric fractional spline space and give a practical implementation for optimal upsampling of noisy fBm samples by integer factors.

We present a new result characterized by an exact integral expression for the approximation error between a probability density and an integer shift-invariant estimate obtained from its samples. Unlike the Parzen window estimate, this estimate avoids recomputing the complete probability density for each new sample: only a few coefficients need to be updated, which makes it practical for real-time applications.

We also show how to obtain the exact asymptotic behavior of the approximation error when the number of samples increases and provide the trade-off between the number of samples and the sampling step size.
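A toy sketch of the single-sample update, using a linear B-spline as a stand-in generator (the estimate described above is generic in the basis function); names and normalization are illustrative.

```python
import numpy as np

def add_sample(c, x):
    # One new sample touches only the two linear-B-spline coefficients
    # straddling x -- no full re-estimation, unlike a Parzen window.
    k = int(np.floor(x))
    frac = x - k
    if 0 <= k < len(c):
        c[k] += 1 - frac
    if 0 <= k + 1 < len(c):
        c[k + 1] += frac

def density(c, x, n):
    # Shift-invariant estimate p_hat(x) = (1/n) * sum_k c[k] * beta1(x - k)
    k = int(np.floor(x))
    frac = x - k
    v = 0.0
    if 0 <= k < len(c):
        v += (1 - frac) * c[k]
    if 0 <= k + 1 < len(c):
        v += frac * c[k + 1]
    return v / n
```

Each sample contributes a total weight of one to the coefficient array, so the estimate remains normalized as samples stream in.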

We present here an explicit time-domain representation of any compactly supported dyadic scaling function as a sum of harmonic splines. The leading term in the decomposition corresponds to the fractional splines, a recent, continuous-order generalization of the polynomial splines.

We describe a new family of scaling functions, the (α, τ)-fractional splines, which generate valid multiresolution analyses. These functions are characterized by two real parameters: α, which controls the width of the scaling functions; and τ, which specifies their position with respect to the grid (shift parameter). This new family is complete in the sense that it is closed under convolutions and correlations.

We give the explicit time and Fourier domain expressions of these fractional splines.

We prove that the family is closed under generalized fractional differentiation and, in particular, under the Hilbert transform. We also show that the associated wavelets are able to whiten 1/f^λ-type noise by an adequate tuning of the spline parameters.

A fast (and exact) FFT-based implementation of the fractional spline wavelet transform is already available. We show that fractional integration operators can be expressed as the composition of an analysis and a synthesis iterated filterbank.
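As a small illustration of the Fourier-domain expressions, the causal fractional B-spline of degree α (the basic causal member of the family; the shift parameter τ generalizes it) has the transform ((1 − e^{−jω})/(jω))^{α+1}. A sketch:

```python
import numpy as np

def causal_frac_bspline_ft(omega, alpha):
    # Fourier transform of the causal fractional B-spline of degree alpha:
    # beta_hat(w) = ((1 - exp(-1j*w)) / (1j*w))**(alpha + 1),
    # with the removable singularity at w = 0 handled explicitly.
    w = np.asarray(omega, dtype=complex)
    den = np.where(w == 0, 1.0, 1j * w)
    ratio = np.where(w == 0, 1.0, (1 - np.exp(-1j * w)) / den)
    return ratio ** (alpha + 1)
```

For α = 0 this reduces to the transform of the box function, and the value at ω = 0 is 1 for every degree, consistent with a valid scaling function.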

Wavelets and radial basis functions (RBFs) lead to two distinct ways of representing signals in terms of shifted basis functions. RBFs, unlike wavelets, are nonlocal and do not involve any scaling, which makes them applicable to nonuniform grids. Despite these fundamental differences, we show that the two types of representation are closely linked together through fractals. First, we identify and characterize the whole class of self-similar radial basis functions that can be localized to yield conventional multiresolution wavelet bases. Conversely, we prove that for any compactly supported scaling function φ(x), there exists a one-sided central basis function ρ+(x) that spans the same multiresolution subspaces. The central property is that the multiresolution bases are generated by simple translation of ρ+ without any dilation. We also present an explicit time-domain representation of a scaling function as a sum of harmonic splines. The leading term in the decomposition corresponds to the fractional splines: a recent, continuous-order generalization of the polynomial splines.

We define a new wavelet transform that is based on a recently defined family of scaling functions: the fractional B-splines. The interest of this family is that its members interpolate between the polynomial B-splines of integer degree and allow a fractional order of approximation.

The orthogonal fractional spline wavelets essentially behave as fractional differentiators. This property seems promising for the analysis of 1/f^α noise, which can be whitened by an appropriate choice of the degree of the spline transform.

We present a practical FFT-based algorithm for the implementation of these fractional wavelet transforms, and give some examples of processing.
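The fractional-differentiator behavior can be sketched with a plain FFT implementation that multiplies the spectrum by (jω)^γ; this is a generic illustration of fractional differentiation, not the wavelet-domain algorithm of the paper.

```python
import numpy as np

def fractional_derivative(f, gamma):
    # Apply (1j*omega)**gamma in the DFT domain: gamma = 1 gives an
    # ordinary derivative; fractional gamma flattens 1/f**gamma-type spectra.
    n = len(f)
    omega = 2 * np.pi * np.fft.fftfreq(n)
    omega[0] = 1.0            # placeholder to avoid 0**gamma
    mult = (1j * omega) ** gamma
    mult[0] = 0.0             # DC is annihilated by any derivative
    return np.fft.ifft(np.fft.fft(f) * mult).real
```

With the principal branch of the power, the operator satisfies the expected semigroup property: two half-derivatives compose into one ordinary derivative.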

In a previous paper, we proposed a general Fourier method which provides an accurate prediction of the approximation error, irrespective of the scaling properties of the approximating functions. Here, we apply our results when these functions satisfy the usual two-scale relation encountered in dyadic multiresolution analysis. As a consequence of this additional constraint, the quantities introduced in our previous paper can be computed explicitly as a function of the refinement filter. This is in particular true for the asymptotic expansion of the approximation error for biorthonormal wavelets, as the scale tends to zero.

One of the contributions of this paper is the computation of sharp, asymptotically optimal upper bounds for the least-squares approximation error. Another contribution is the application of these results to B-splines and Daubechies scaling functions, which yields explicit asymptotic developments and upper bounds. Thanks to these explicit expressions, we can quantify the improvement that can be obtained by using B-splines instead of Daubechies wavelets. In other words, we can use a coarser spline sampling and achieve the same reconstruction accuracy as Daubechies: specifically, we show that this sampling gain converges to π as the order tends to infinity.

We present a general Fourier-based method that provides an accurate prediction of the approximation error as a function of the sampling step T. Our formalism applies to an extended class of convolution-based signal approximation techniques, which includes interpolation, generalized sampling with prefiltering, and the projectors encountered in wavelet theory. We claim that we can predict the L2-approximation error, by integrating the spectrum of the function to approximate—not necessarily bandlimited—against a frequency kernel E(ω) that characterizes the approximation operator. This prediction is easier, yet more precise than was previously available. Our approach has the remarkable property of providing a global error estimate that is the average of the true approximation error over all possible shifts of the input function. Our error prediction is exact for stationary processes, as well as for bandlimited signals. We apply this method to the comparison of standard interpolation and approximation techniques.

Our method has interesting implications for approximation theory. In particular, we use our results to obtain some new asymptotic expansions of the error as T tends to 0, and also to derive improved upper bounds of the kind found in the Strang-Fix theory. We finally show how we can design quasi-interpolators that are near-optimal in the least-squares sense.
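As an example of such a frequency kernel, for the least-squares projection onto the space spanned by the integer shifts of a generator φ one has E(ω) = 1 − |φ̂(ω)|²/Â(ω), with Â(ω) = Σ_n |φ̂(ω + 2πn)|². The sketch below evaluates this kernel for the linear B-spline, where Â has the closed form (2 + cos ω)/3 (the sampled cubic B-spline); this is an illustrative special case, not the paper's general formalism.

```python
import numpy as np

def error_kernel_linear(omega):
    # Least-squares error kernel E(w) for the linear B-spline generator:
    # E(w) = 1 - |phi_hat(w)|**2 / A(w),  A(w) = (2 + cos w) / 3.
    s = np.sinc(np.asarray(omega) / (2 * np.pi))  # np.sinc(x) = sin(pi x)/(pi x)
    return 1 - s**4 / ((2 + np.cos(omega)) / 3)
```

The kernel vanishes at ω = 0 (fourth-order zero, matching approximation order 2) and grows toward the Nyquist frequency, where the approximation error concentrates.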

We investigate the approximation properties of general polynomial preserving operators that approximate a function into some scaled subspace of L2 via an appropriate sequence of inner products. In particular, we consider integer shift-invariant approximations such as those provided by splines and wavelets, as well as finite elements and multi-wavelets which use multiple generators. We estimate the approximation error as a function of the scale parameter T when the function to approximate is sufficiently regular. We then present a generalized sampling theorem, a result that is rich enough to provide tight bounds as well as asymptotic expansions of the approximation error as a function of the sampling step T. Another more theoretical consequence is the proof of a conjecture by Strang and Fix, stating the equivalence between the order of a multi-wavelet space and the order of a particular subspace generated by a single function. Finally, we consider refinable generating functions and use the two-scale relation to obtain explicit formulae for the coefficients of the asymptotic development of the error. The leading constants are easily computable and can be the basis for the comparison of the approximation power of wavelet and multi-wavelet expansions of a given order L.

A filterbank decomposition can be seen as a series of projections onto several discrete wavelet subspaces. In this presentation, we analyze the projection onto one of them—the low-pass one, since many signals tend to be low-pass. We prove a general but simple formula that allows the computation of the l2-error made by approximating the signal by its projection. This result provides a norm for evaluating the accuracy of a complete decimation/interpolation branch for arbitrary analysis and synthesis filters; such a norm could be useful for the joint design of an analysis and synthesis filter, especially in the non-orthonormal case. As an example, we use our framework to compare the efficiency of different wavelet filters, such as Daubechies' or splines. In particular, we prove that the error made by using a Daubechies' filter downsampled by 2 is of the same order as the error using an orthonormal spline filter downsampled by 6. This proof is valid asymptotically as the number of regularity factors tends to infinity, and for a signal that is essentially low-pass. This implies that splines bring an additional compression gain of at least 3 over Daubechies' filters, asymptotically.

We introduce a simple method—integration of the power spectrum against a Fourier kernel—for computing the approximation error by wavelets. This method is powerful enough to recover all classical L2 results in approximation theory (Strang-Fix theory), and also to provide new error estimates that are sharper and asymptotically exact.

Our goal in this paper is to set a theoretical basis for the comparison of re-sampling and interpolation methods. We consider the general problem of the approximation of an arbitrary continuously-defined function f(x)—not necessarily bandlimited—when we vary the sampling step T. We present an accurate L2 computation of the induced approximation error as a function of T for a general class of linear approximation operators including interpolation and other kinds of projectors. This new quantitative result provides exact expressions for the asymptotic development of the error as T→0, and also sharp (asymptotically exact) upper bounds.

By evaluating approximation-theoretic quantities, we show how to compute explicitly the basis generators that minimize the approximation error for a full set of functions to approximate. We give several examples of this optimization, either to get the best generators that have maximal order for minimum support [1], or to design the best interpolation scheme with classical generators, such as B-splines [2]. We present practical examples that visually confirm the validity of our approach.

Method of interpolating digital samples using interpolation functions that are shifted by an arbitrary shift value relative to said samples. It will be shown that there is a non-zero and non-trivial optimal value of this shift for which the approximation error is minimized.

Discretization of continuous (analog) convolution operators by direct sampling of the convolution kernel and use of fast Fourier transforms (FFT) is highly efficient. However, it assumes the input and output signals are band-limited, a condition rarely met in practice, where signals have finite support or abrupt edges and sampling is non-ideal. Here, we propose to approximate signals in analog, shift-invariant function spaces, which do not need to be band-limited, resulting in discrete coefficients for which we derive discrete convolution kernels that accurately model the analog convolution operator while taking into account non-ideal sampling devices (such as finite fill-factor cameras). This approach retains the efficiency of direct sampling but not its limiting assumption. We propose fast forward and inverse algorithms that handle finite-length, periodic, and mirror-symmetric signals with rational sampling rates. We provide explicit convolution kernels for computing coherent wave propagation in the context of digital holography. When compared to band-limited methods in simulations, our method leads to fewer reconstruction artifacts when signals have sharp edges or when using non-ideal sampling devices.

We investigate the use of quasi-interpolating approximation schemes, to construct an estimate of an unknown function from its given discrete samples. We show theoretically and with practical experiments that such methods perform better than classical interpolation, for the same computation cost.

We provide a new comparison between hexagonal and orthogonal lattices, based on approximation theory. For each of the lattices, we select the “natural” spline basis function as generator for a shift-invariant function space; i.e., the tensor-product B-splines for the orthogonal lattice and the non-separable hex-splines for the hexagonal lattice. For a given order of approximation, we compare the asymptotic constants of the error kernels, which give a very good indication of the approximation quality. We find that the approximation quality on the hexagonal lattice is consistently better when choosing lattices with the same sampling density. The area sampling gain related to these asymptotic constants quickly converges as the order of approximation of the basis functions increases. Surprisingly, nearest-neighbor interpolation does not allow one to profit from the hexagonal grid. For practical purposes, the second-order hex-spline (i.e., constituted of linear patches) appears to be a particularly useful candidate for exploiting the advantages of hexagonal lattices when representing images on them.

Due to the random nature of photon emission and the various internal noise sources of the detectors, real time-lapse fluorescence microscopy images are usually modeled as the sum of a Poisson process and Gaussian white noise. In this paper, we propose an adaptation of our SURE-LET denoising strategy that takes advantage of the potentially strong similarities between adjacent frames of the observed image sequence. To stabilize the noise variance, we first apply the generalized Anscombe transform, using suitable parameters automatically estimated from the observed data. With the proposed algorithm, we show that, in a reasonable computation time, real fluorescence time-lapse microscopy images can be denoised with higher quality than with conventional algorithms.
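A minimal sketch of the variance-stabilization step, using the standard generalized Anscombe transform for a Poisson-Gaussian model with detector gain g, Gaussian standard deviation σ, and offset μ (the parameter handling here is illustrative; the paper estimates these quantities from the data):

```python
import numpy as np

def generalized_anscombe(x, gain=1.0, sigma=0.0, mu=0.0):
    # Variance-stabilizing transform for y = gain*Poisson + N(mu, sigma**2):
    # after the transform, the noise variance is approximately 1.
    arg = gain * x + (3.0 / 8.0) * gain**2 + sigma**2 - gain * mu
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))
```

After stabilization the noise is approximately unit-variance Gaussian, so a Gaussian-noise denoiser such as SURE-LET can be applied before inverting the transform.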

Digital holographic microscopy appears as a new imaging technique with high resolution and real-time observation capabilities: longitudinal resolutions of a few nanometers in air and a few tenths of nanometers in liquids are achievable, provided that the optical signals diffracted by the object can be rendered sufficiently large. Living biological cells in culture have been observed, with heights of around 40 nanometers and widths of half a micron. The originality of our approach is to provide both a slightly modified microscope design, yielding digital holograms of microscopic objects, and an interactive computer environment to easily reconstruct wavefronts from digital holograms.

Finite-rate-of-innovation (FRI) is a framework that has been developed for the sampling and reconstruction of specific classes of signals, in particular non-bandlimited signals that are characterized by finitely many parameters. It has been shown that, by using specific sampling kernels that reproduce polynomials or exponentials (i.e., satisfy the Strang-Fix conditions), it is possible to design non-iterative and fast reconstruction algorithms. In fact, the innovative part of the signal can be reconstructed perfectly using Prony's method (the annihilating filter).
In this paper, we propose an adapted FRI framework to deal with the inverse source problem of radiating fields from boundary measurements. In particular, we consider the case where the source signals are modelled as streams of Diracs in 3-D, and we assume that the induced field, governed by the Helmholtz equation, is measured on a boundary. First, we propose a technique, termed the "sensing principle" (also known as the reciprocity gap principle), to provide a link between the physical measurements and the source signal through a surface integral. We show that it is possible to design sensing schemes in the complex domain using holomorphic functions, which allow the positions of the sources to be determined with a non-iterative algorithm based on an adapted annihilating-filter method.
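The annihilating-filter step can be sketched in its generic FRI form: given 2K generalized samples s[m] = Σ_k a_k u_k^m, a filter with h[0] = 1 annihilates the sequence, and its roots are the u_k that encode the source positions. Variable names are illustrative.

```python
import numpy as np

def annihilating_roots(s, K):
    # Prony / annihilating-filter step: find h (h[0] = 1, length K+1) with
    # sum_i h[i] * s[m - i] = 0 for m = K..2K-1, then take the roots of h.
    M = np.array([[s[K + r - i] for i in range(1, K + 1)] for r in range(K)],
                 dtype=complex)
    h = np.linalg.solve(M, -np.asarray(s[K:2 * K], dtype=complex))
    return np.roots(np.concatenate(([1.0 + 0j], h)))
```

When the u_k lie on the unit circle (e.g. u_k = exp(j2πt_k)), the angles of the recovered roots give the Dirac locations t_k without any discretization of the domain.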

Photoacoustic tomography (PAT) is a relatively recent imaging modality that is promising for breast cancer detection and breast screening. It combines the high intrinsic contrast of optical radiation with acoustic imaging at submillimeter spatial resolution through the photoacoustic effect of absorption and thermal expansion. However, image reconstruction from boundary measurements of the propagating wave field is still a challenging inverse problem.
Here we propose a new theoretical framework, for which we coin the term eigensensing, to recover the heat-absorption profile of the tissue. One of the main features of our method is that no explicit forward model needs to be used within a (usually slow) iterative scheme. Instead, the eigensensing principle allows us to computationally obtain several intermediate images that are blurred by known convolution kernels, chosen as the eigenfunctions of the spatial Laplace operator. The source image can then be reconstructed by a joint deconvolution algorithm that uses the intermediate images as input. Moreover, total-variation regularization is added to make the inverse problem well-posed and to favor piecewise-smooth images.

Finite rate of innovation (FRI) is a recent framework for the sampling and reconstruction of a large class of parametric signals that are characterized by a finite number of innovations (parameters) per unit interval. In the absence of noise, exact recovery of FRI signals has been demonstrated. In the noisy scenario, techniques exist to deal with non-ideal measurements. Yet, accuracy and resilience to noise and model mismatch remain challenging problems for real-world applications. We address the reconstruction of FRI signals, specifically a stream of Diracs, from few signal samples degraded by noise, and we propose a new FRI reconstruction method based on a model-fitting approach related to the structured total-least-squares (TLS) problem. The model-fitting method minimizes the training error, that is, the error between the computed and the recovered moments (the FRI samples of the signal), subject to an annihilation system. We present our framework for three different constraints of the annihilation system. Moreover, we propose a model-order selection framework to determine the innovation rate of the signal, i.e., the number of Diracs, by estimating the noise level through the training-error curve. We compare the performance of the model-fitting approach with known FRI reconstruction algorithms and with the Cramér-Rao lower bound (CRLB) to validate these contributions.

We address the problem of localizing point sources in 3D from boundary measurements of a wave field. Recently, we proposed the sensing principle which allows extracting volumetric samples of the unknown source distribution from the boundary measurements. The extracted samples allow a non-iterative reconstruction algorithm that can recover the parameters of the source distribution projected on a 2-D plane in the continuous domain without any discretization.
Here we extend the method to the 3-D localization of multiple point sources by combining multiple 2-D planar projections. In particular, we propose a three-step algorithm that retrieves the locations by means of a multiplanar application of the sensing principle. First, we find the projections of the locations onto several 2-D planes. Second, we propose a greedy algorithm to pair the solutions in each plane. Third, we retrieve the 3-D locations by least-squares regression.

Photoacoustic tomography (PAT) is a hybrid imaging method that combines ultrasonic and optical imaging modalities in order to overcome their respective weaknesses and to combine their strengths. It is based on the reconstruction of the optical absorption properties of the tissue from measurements of a photoacoustically-generated pressure field. Current methods consider laser excitation under thermal and stress confinement assumptions, which leads to the generation of a propagating pressure field. Conventional reconstruction techniques then recover the initial pressure field from the boundary measurements using iterative reconstruction algorithms in the time or Fourier domain. Here, we propose an application of a new sensing principle that allows for an efficient, non-iterative reconstruction algorithm for imaging point absorbers in PAT. We consider a closed volume surrounded by a measurement surface in an acoustically homogeneous medium, and we aim at recovering the positions of these absorbers and the amount of heat they absorb. We propose a two-step algorithm based on a proper choice of so-called sensing functions. Specifically, in the first step, we extract the positions projected on the complex plane, together with the weights, using a sensing function that is well-localized on the same plane. In the second step, we recover the remaining z-location by choosing a proper set of plane waves. We show that the proposed families of sensing functions are sufficient to recover the parameters of the unknown sources without any discretization of the domain. We extend the method to sources that exhibit joint sparsity, i.e., absorbers that have the same positions at different frequencies. We evaluate the performance of the proposed algorithm using simulated, noisy sensor data and demonstrate the improvement obtained by exploiting joint sparsity.

Reconstruction of point sources from boundary measurements is a challenging problem in many applications. Recently, we proposed a new sensing and non-iterative reconstruction scheme for systems governed by the three-dimensional wave equation. The point sources are described by their magnitudes and positions. The core of the method relies on the principles of finite rate of innovation and allows retrieving the parameters in the continuous domain without discretization. Here we extend the method to the case where the source configuration shows joint sparsity across temporal frequencies; i.e., the sources have the same positions at different frequencies, though not necessarily the same magnitudes. We demonstrate that joint sparsity improves the robustness of the estimation results. In addition, we propose a modified multi-source version of Dijkstra's algorithm to recover the Z parameters. We illustrate the feasibility of our method by reconstructing multiple sources in a 3-D spherical geometry.

Analytic sensing has recently been proposed for source localization from boundary measurements, using a generalization of the finite-rate-of-innovation framework. The method is tailored to the quasi-static electromagnetic approximation, which is commonly used in electroencephalography. In this work, we extend analytic sensing to physical systems that are governed by the wave equation; i.e., the sources emit signals that travel as waves through the volume and are measured at the boundary over time. This source localization problem is highly ill-posed (i.e., the unicity of the source distribution is not guaranteed), and additional assumptions about the sources are needed. We assume that the sources can be described by a finite number of parameters; in particular, we consider point sources that are characterized by their position and strength. This assumption makes the solution unique and turns the problem into one of parametric estimation. Following the framework of analytic sensing, we propose a two-step method. In the first step, we extend the reciprocity-gap-functional concept to test functions based on the wave equation; i.e., well-chosen test functions relate the boundary measurements to generalized measurements that contain volumetric information about the sources within the domain. In the second step, again owing to the choice of the test functions, we can apply the finite-rate-of-innovation principle: the generalized samples can be annihilated by a known filter, which turns the non-linear source-localization problem into an equivalent root-finding one. We demonstrate the feasibility of our technique for a 3-D spherical geometry. The performance of the reconstruction algorithm is evaluated in the presence of noise and compared with the theoretical limit given by the Cramér-Rao lower bound.

Consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, nonuniform splines or piecewise polynomials, and call the number of degrees of freedom per unit of time the rate of innovation. Classical sampling theory does not enable a perfect reconstruction of such signals since they are not bandlimited. Recently, it was shown that, by using an adequate sampling kernel and a sampling rate greater than or equal to the rate of innovation, it is possible to reconstruct such signals uniquely [1]. These sampling schemes, however, use kernels with infinite support, and this leads to complex and potentially unstable reconstruction algorithms. In this paper, we show that many signals with a finite rate of innovation can be sampled and perfectly reconstructed using physically realizable kernels of compact support and a local reconstruction algorithm. The class of kernels that we can use is very rich and includes functions satisfying Strang-Fix conditions, exponential splines and functions with rational Fourier transform. This last class of kernels is quite general and includes, for instance, any linear electric circuit. We thus show with an example how to estimate a signal of finite rate of innovation at the output of an RC circuit. The case of noisy measurements is also analyzed, and we present a novel algorithm that reduces the effect of noise by oversampling.

Our goal is to detect and localize areas of activation in the brain from sequences of fMRI images. The standard approach for reducing the noise contained in the fMRI images is to apply a spatial Gaussian filter, which entails some loss of detail. Here instead, we consider a wavelet solution to the problem, which has the advantage of retaining high-frequency information. We use fractional-spline orthogonal wavelets with a continuously-varying order parameter α; by adjusting α, we can balance spatial resolution against frequency localization. The activation pattern is detected by performing multiple (Bonferroni-corrected) t-tests in the wavelet domain. This pattern is then localized by inverse wavelet transform of a thresholded coefficient map.

In order to compare transforms and to select the best α, we devise a simulation study for the detection of a known activation pattern. We also apply our methodology to the analysis of acquired fMRI data for a motor task.
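The wavelet-domain testing step can be sketched as follows, assuming the wavelet coefficients of the paired-difference images are stacked as repetitions × coefficients; this is a simplified stand-in for the full pipeline (no inverse transform or spatial-domain thresholding).

```python
import numpy as np
from scipy import stats

def detect_wavelet_coeffs(diff_maps, alpha=0.05):
    # diff_maps: (repetitions x coefficients) wavelet-domain paired differences.
    # One-sample t-test per coefficient, Bonferroni-corrected over the
    # number of coefficients tested (two-sided).
    n, n_coef = diff_maps.shape
    t = diff_maps.mean(0) / (diff_maps.std(0, ddof=1) / np.sqrt(n))
    thresh = stats.t.ppf(1 - alpha / (2 * n_coef), df=n - 1)
    return np.abs(t) > thresh
```

Because the transform is orthogonal, the coefficients can be tested independently, and the Bonferroni correction controls the family-wise false-positive rate.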

Ruttimann et al. have proposed to use the wavelet transform for the detection and localization of activation patterns in functional magnetic resonance imaging (fMRI). Their main idea was to apply a statistical test in the wavelet domain to detect the coefficients that are significantly different from zero. Here, we improve the original method in the case of non-stationary Gaussian noise by replacing the original z-test by a t-test that takes into account the variability of each wavelet coefficient separately. The application of a threshold that is proportional to the residual noise level, after the reconstruction by an inverse wavelet transform, further improves the localization of the activation pattern in the spatial domain.

A key issue is to find out which wavelet and which type of decomposition is best suited for the detection of a given activation pattern. In particular, we want to investigate the applicability of alternative wavelet bases that are not necessarily orthogonal. For this purpose, we consider the various brands of fractional spline wavelets (orthonormal, B-spline, and dual), which are indexed by a continuously-varying order parameter α. We perform an extensive series of tests using simulated data and compare the various transforms based on their false detection rate (type I + type II errors). In each case, we observe that there is a strongly optimal value of α and a best number of scales that minimizes the error. We also find that splines generally outperform Daubechies wavelets and that they are quite competitive with SPM (the standard analysis method used in the field), although it uses much simpler statistics. An interesting practical finding is that performance is strongly correlated with the number of coefficients detected in the wavelet domain, at least in the orthonormal and B-spline cases. This suggests that it is possible to optimize the structural wavelet parameters simply by maximizing the number of wavelet counts, without any prior knowledge of the activation pattern. Some examples of analysis of real data are also presented.

Functional magnetic resonance imaging (fMRI) is a recent technique that allows the measurement of brain metabolism (local concentration of deoxyhemoglobin using BOLD contrast) while subjects are performing a specific task. A block paradigm produces alternating sequences of images (e.g., rest versus motor task). In order to detect and localize areas of cerebral activation, one analyzes the data using paired differences at the voxel level. As an alternative to the traditional approach, which uses Gaussian spatial filtering to reduce measurement noise, we propose to analyze the data using an orthogonal filterbank. This procedure is intended to simplify and eventually improve the statistical analysis. The system is designed to concentrate the signal into a smaller number of components, thereby improving the signal-to-noise ratio. Thanks to the orthogonality property, we can test the filtered components independently on a voxel-by-voxel basis; this testing procedure is optimal for i.i.d. measurement noise. The number of components to test is also reduced because of down-sampling. This offers a straightforward approach to increasing the sensitivity of the analysis (lower detection threshold) while applying the standard Bonferroni correction for multiple statistical tests. We present experimental results to illustrate the procedure. In addition, we discuss filter design issues. In particular, we introduce a family of orthogonal filters which are such that any integer reduction m can be implemented as a succession of elementary reductions m1 to mp, where m = m1 ⋯ mp is a prime factorization of m.
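To illustrate the sensitivity argument with hypothetical numbers, the Bonferroni-corrected z threshold drops once down-sampling divides the number of tested components:

```python
from statistics import NormalDist

def bonferroni_z(alpha, n_tests):
    """Two-sided z threshold after Bonferroni correction for n_tests tests."""
    return NormalDist().inv_cdf(1 - alpha / (2 * n_tests))

n_voxels = 64 * 64 * 30   # hypothetical volume size
m = 8                     # hypothetical reduction factor (one dyadic step per axis)
print(round(bonferroni_z(0.05, n_voxels), 2))       # threshold on raw voxels
print(round(bonferroni_z(0.05, n_voxels // m), 2))  # lower threshold after reduction
```

The volume size and reduction factor are illustrative, not taken from the experiments above.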

How vowels are organized cortically has previously been studied using auditory evoked potentials (AEPs), one focus of which is to determine whether perceptual distance can be inferred from AEP components. The present study extends this line of research by adopting a machine-learning framework to classify evoked responses to four synthetic mid-vowels differing only in second formant frequency (F2 = 840, 1200, 1680, and 2280 Hz). Six subjects each attended four EEG sessions on separate days. Classifiers were trained using time-domain data in successive time windows of various sizes. Results were most accurate when a window of about 80 ms was used. By integrating the scores from individual classifiers, the maximum mean binary classification rates improved to 70% (10 trials) and 77% (20 trials). To assess how well perceptual distances among the vowels were reflected in our results, discriminability indices (d′) were computed using both the behavioral results in a screening test and the classification results. It was found that the two sets of indices were significantly correlated. The pair that was the most (least) discriminable behaviorally was also the most (least) classifiable neurally. Our results support the use of classification methodology for developing a neural measure of perceptual distance.
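The discriminability index itself is standard; as an illustration (the symmetric hit/false-alarm rates below are hypothetical, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Discriminability index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# a 77% correct binary classification, treated for illustration as
# symmetric hit and false-alarm rates
print(round(d_prime(0.77, 0.23), 2))   # prints 1.48
```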

The P300 speller is a standard paradigm for brain-computer interfacing (BCI) based on electroencephalography (EEG). It exploits the fact that the user's selective attention to a target stimulus among a random sequence of stimuli enhances the magnitude of the P300 evoked potential. The present study questions the necessity of using random sequences of stimulation. In two types of experimental runs, subjects attended to a target stimulus while the stimuli, four in total, were each intensified twelve times, in either random order or deterministic order. The 32-channel EEG data were analyzed offline using linear discriminant analysis (LDA). Similar classification accuracies of 95.3% and 93.2% were obtained for the random and deterministic runs, respectively, using the data associated with 3 sequences of stimulation. Furthermore, using a montage of 5 posterior electrodes, the two paradigms attained identical accuracy of 92.4%. These results suggest that: (a) the use of random sequences is not necessary for effective BCI performance; and (b) deterministic sequences can be used in some BCI speller applications.
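A minimal two-class Fisher LDA on synthetic data, in the spirit of the offline analysis (a plain-NumPy sketch, not the study's actual pipeline):

```python
import numpy as np

def fit_lda(X, y):
    """Two-class Fisher LDA: returns weight vector w and bias b such that
    sign(X @ w + b) separates the classes (minimal sketch)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter, with a small ridge for numerical stability
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    b = -w @ (m0 + m1) / 2
    return w, b

# synthetic 5-feature data standing in for EEG feature vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1, (100, 5)), rng.normal(1.5, 1, (100, 5))])
y = np.repeat([0, 1], 100)
w, b = fit_lda(X, y)
acc = ((X @ w + b > 0).astype(int) == y).mean()
print(acc > 0.9)                       # prints True
```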

We propose a complex generalization of Schoenberg's cardinal splines. To this end, we go back to the Fourier-domain definition of the B-splines and extend it to complex-valued degrees. We show that the resulting complex B-splines are piecewise modulated polynomials, and that they retain most of the important properties of the classical ones: smoothness, recurrence and two-scale relations, Riesz-basis generation, explicit formulae for derivatives (including fractional orders), etc. We also show that they generate multiresolution analyses of L2(R) and that they can yield wavelet bases. We characterize the decay of these functions, which are no longer compactly supported when the degree is not an integer. Finally, we prove that the complex B-splines converge to modulated Gaussians as their degree increases, and that they are asymptotically optimally localized in the time-frequency plane in the sense of Heisenberg's uncertainty principle.
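For orientation, the Fourier-domain extension referred to above can be sketched as follows (the normalization and the admissibility condition on z are stated here as an assumption and may differ from the paper's convention):

```latex
\hat{B}_z(\omega) \;=\; \left(\frac{1-e^{-i\omega}}{i\omega}\right)^{\!z},
\qquad \Re(z) > \tfrac{1}{2},
```

which reduces to the classical B-spline of degree z - 1 when z is a positive integer.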

B-spline multiresolution analyses have proven to be an adequate tool for signal analysis. But for some applications, e.g., in speech processing and digital holography, complex-valued scaling functions and wavelets are more favourable than real ones, since they allow one to recover the crucial phase information.

In this talk, we extend the classical and fractional B-spline approaches to complex B-splines. We do so by choosing a complex exponent, i.e., a complex order z of the B-spline, and show that this does not affect the basic properties such as smoothness and decay, recurrence relations, and others. Moreover, the resulting complex B-splines satisfy a two-scale relation and generate a multiresolution analysis of L2(R). We show that the complex B-splines as well as the corresponding wavelets converge to Gabor functions as ℜ(z) increases while ℑ(z) is fixed. Thus they are approximately optimally time-frequency localized.

We present complex rotation-covariant multiresolution families aimed for image analysis. Since they are complex-valued functions, they provide the important phase information, which is missing in the discrete wavelet transform with real wavelets.

Our basis elements have nice properties in Hilbert space, such as smoothness of fractional order α ∈ R+. The corresponding filters allow an FFT-based implementation and thus provide a fast algorithm for the wavelet transform.

We consider shift-invariant multiresolution spaces generated by rotation-covariant functions ρ in ℝ2. To construct corresponding scaling and wavelet functions, ρ has to be localized with an appropriate multiplier, such that the localized version is an element of L2(ℝ2). We consider several classes of multipliers and show a new method to improve regularity and decay properties of the corresponding scaling functions and wavelets. The wavelets are complex-valued functions, which are approximately rotation-covariant and therefore behave as Wirtinger differential operators. Moreover, our class of multipliers gives a novel approach for the construction of polyharmonic B-splines with better polynomial reconstruction properties.

This paper deals with the estimation of a deformation that describes the geometric transformation between two images. To solve this problem, we propose a novel framework that relies upon the brightness consistency hypothesis - a pixel's intensity is maintained throughout the transformation. Instead of assuming small distortion and linearising the problem (e.g., via a Taylor series expansion), we propose to interpret the brightness hypothesis as an all-pass filtering relation between the two images. The key advantages of this new interpretation are that no restrictions are placed on the amplitude of the deformation or on the spatial variations of the images. Moreover, by converting the all-pass filtering to a linear forward-backward filtering relation, our solution to the estimation problem equates to solving a linear system of equations, which leads to a highly efficient implementation. Using this framework, we develop a fast algorithm that relates one image to another, on a local level, using an all-pass filter and then extracts the deformation from the filter, hence the name "Local All-Pass" (LAP) algorithm. The effectiveness of this algorithm is demonstrated on a variety of synthetic and real deformations that are found in applications such as image registration and motion estimation. In particular, the LAP obtains very accurate results with significantly reduced computation time when compared to a selection of image registration algorithms and is very robust to noise corruption.
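To make the all-pass idea concrete, here is a deliberately simplified 1-D sketch, not the authors' full LAP algorithm: for a pure constant shift, one can fit a first-order all-pass filter p(x) = g(x) + c g'(x) and read the displacement off as 2c. The Gaussian basis, filter order, and test signal are all illustrative choices.

```python
import numpy as np

def lap_shift_1d(i1, i2, sigma=2.0):
    """Estimate a constant 1-D displacement between i1 and i2 by fitting a
    first-order all-pass filter p(x) = g(x) + c*g'(x); the shift is 2c.
    Toy sketch of the all-pass principle, not the full LAP algorithm."""
    r = int(4 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                       # unit-mass Gaussian
    dg = -x / sigma**2 * g             # its sampled derivative
    # forward-backward relation: g*(i2 - i1) + c * g'*(i2 + i1) = 0
    b = np.convolve(i2 - i1, g, "same")
    a = np.convolve(i2 + i1, dg, "same")
    a, b = a[r:-r], b[r:-r]            # drop boundary-affected samples
    c = -(a @ b) / (a @ a)             # least-squares fit for c
    return 2 * c

x = np.arange(200, dtype=float)
d_true = 0.7
i1 = np.sin(2 * np.pi * x / 50)
i2 = np.sin(2 * np.pi * (x - d_true) / 50)   # i1 shifted by d_true pixels
print(round(lap_shift_1d(i1, i2), 2))
```

The estimate is close to 0.7; the full method repeats such a filter fit locally to recover spatially varying deformations.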

Recently, sampling theory has been broadened to include a class of non-bandlimited signals that possess finite rate of innovation (FRI). In this paper, we consider the problem of determining the minimum rate of innovation (RI) in a noisy setting. First, we adapt a recent model-fitting algorithm for FRI recovery and demonstrate that it achieves the Cramér-Rao bounds. Using this algorithm, we then present a framework to estimate the minimum RI based on fitting the sparsest model to the noisy samples whilst satisfying a mean squared error (MSE) criterion: a signal is recovered if the output MSE is less than the input MSE. Specifically, given a RI, we use the MSE criterion to judge whether our model-fitting has been a success or a failure. Using this output, we present a Dichotomic algorithm that performs a binary search for the minimum RI and demonstrate that it obtains a sparser RI estimate than an existing information criterion approach.
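The binary search itself can be sketched generically; the success predicate below is a hypothetical stand-in for the output-MSE versus input-MSE test, and the search assumes this criterion is monotone in the model order:

```python
def min_rate_of_innovation(success, k_max):
    """Binary search for the smallest model order k in [1, k_max] for which
    success(k) holds, assuming success is monotone in k."""
    lo, hi = 1, k_max
    while lo < hi:
        mid = (lo + hi) // 2
        if success(mid):
            hi = mid        # success: a sparser model may still work
        else:
            lo = mid + 1    # failure: need more degrees of freedom
    return lo

# toy criterion: model fitting "succeeds" once the order reaches the true sparsity 5
print(min_rate_of_innovation(lambda k: k >= 5, 32))   # prints 5
```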

The optical flow is a velocity field that describes the motion of pixels within a sequence (or set) of images. Its estimation plays an important role in areas such as motion compensation, object tracking and image registration. In this paper, we present a novel framework to estimate the optical flow using local all-pass filters. Instead of using the optical flow equation, the framework is based on relating one image to another, on a local level, using an all-pass filter and then extracting the optical flow from the filter. Using this framework, we present a fast novel algorithm for estimating a smoothly varying optical flow, which we term the Local All-Pass (LAP) algorithm. We demonstrate that this algorithm is consistent and accurate, and that it outperforms three state-of-the-art algorithms when estimating constant and smoothly varying flows. We also show initial competitive results for real images.

Recently, classical sampling theory has been broadened to include a class of non-bandlimited signals that possess finite rate of innovation (FRI). In this paper, we consider the reconstruction of a periodic stream of Diracs from noisy samples. We demonstrate that its noiseless FRI samples can be represented as a ratio of two polynomials. Using this structure as a model, we propose recovering the FRI signal using a model-fitting approach rather than an annihilation method. We present an algorithm that fits this model to the noisy samples and demonstrate that it has low computation cost and is more reliable than two state-of-the-art methods.
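For context, the classical annihilation step that the model-fitting approach is compared against can be sketched as follows (standard annihilating-filter recovery of Dirac locations from noiseless Fourier samples; the layout of the Toeplitz system is one common convention):

```python
import numpy as np

def annihilate(s_hat, K):
    """Recover K complex exponentials u_k from consecutive Fourier
    coefficients s_hat[m] = sum_k a_k u_k^m via the annihilating filter."""
    M = len(s_hat)
    # Toeplitz system: sum_l h[l] * s_hat[m + K - l] = 0 for each row m
    A = np.array([[s_hat[m + K - l] for l in range(K + 1)]
                  for m in range(M - K)])
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()                  # null vector = filter coefficients
    return np.roots(h)                 # its roots are the u_k

# two Diracs on [0, 1): locations t_k with weights a_k
t = np.array([0.2, 0.63]); a = np.array([1.0, 0.5])
m = np.arange(5)
s_hat = (a * np.exp(-2j * np.pi * np.outer(m, t))).sum(axis=1)
u = annihilate(s_hat, 2)
t_est = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1))
print(np.round(t_est, 2))              # recovers t = 0.2 and 0.63
```

In the noiseless case the recovery is exact; the paper's contribution addresses the noisy case, where this linear-algebraic step becomes unreliable.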

Fast and accurate motion estimation is an important tool in biomedical imaging applications such as motion compensation and image registration. In this paper, we present a novel algorithm to estimate motion in volumetric images based on the recently developed Local All-Pass (LAP) optical flow framework. The framework is built upon the idea that any motion can be regarded as a local rigid displacement and is hence equivalent to all-pass filtering. Accordingly, our algorithm aims to relate two images, on a local level, using a 3D all-pass filter and then extract the local motion flow from the filter. As this process is based on filtering, it can be efficiently repeated over the whole image volume, allowing fast estimation of a dense 3D motion. We demonstrate the effectiveness of this algorithm on both synthetic motion flows and in-vivo MRI data involving respiratory motion. In particular, the algorithm obtains greater accuracy with significantly reduced computation time when compared to competing approaches.

Compression of the electrocardiogram (ECG) as a signal with finite rate of innovation (FRI) is proposed in this paper. By modelling the ECG signal as the sum of a bandlimited component and a nonuniform linear spline, which has a finite rate of innovation, sampling theory is applied to achieve effective compression and reconstruction of the ECG signal. The simulation results show that the performance of the algorithm is quite satisfactory in preserving the diagnostic information, as compared to the classical sampling scheme that uses sinc interpolation.

We present an explicit formula for spline kernels; these are defined as the convolution of several B-splines of variable widths h and degrees n. The spline kernels are useful for continuous signal processing algorithms that involve B-spline inner products or the convolution of several spline basis functions. We apply our results to the derivation of spline-based algorithms for two classes of problems. The first is the resizing of images with arbitrary scaling factors. The second problem is the computation of the Radon transform and of its inverse; in particular, we present a new spline-based version of the filtered backprojection algorithm for tomographic reconstruction. In both cases, our explicit kernel formula allows for the use of high-degree splines; these offer better approximation and performance than the conventional lower-order formulations (e.g., piecewise-constant or piecewise-linear models).

We present a simple but generalized interpolation method for digital images that uses multiwavelet-like basis functions. Most interpolation methods use only one symmetric basis function; for example, standard and shifted piecewise-linear interpolations use the “hat” function only. The proposed method uses q different multiwavelet-like basis functions. The basis functions can be dissymmetric but should preserve the “partition of unity” property for high-quality signal interpolation. The scheme of decomposition and reconstruction of signals by the proposed basis functions can be implemented in filterbank form using a separable IIR implementation. An important property of the proposed scheme is that the prefilters for decomposition can be implemented by FIR filters. Recall that shifted-linear interpolation requires IIR prefiltering; here, we find a new configuration that reaches almost the same quality as shifted-linear interpolation, while requiring FIR prefiltering only. Moreover, the present basis functions can be formulated explicitly in the time domain, although most (multi-)wavelets do not have a time-domain formula. We specify an optimum configuration of interpolation parameters for image interpolation, and validate the proposed method by computing the PSNR of the difference between multi-rotated images and their original version.

This paper presents an interpolation method based on shifted versions of two piecewise linear generators, which provides approximation order 2 like usual piecewise-linear interpolation; i.e., this method is able to represent the constant and the ramp exactly.

Our interpolation is characterized by two real parameters: τ, the location of the generators, and α, related to their dissymmetry. By varying these parameters, we show that it is possible to optimize the quality of the approximation, independently of the function to interpolate. We recover the optimal value of shifted-linear interpolation (τ = 0.21 and α = 1) which requires IIR prefiltering, but we also find a new configuration (τ = 0.21 and α = 0.58) which reaches almost the same quality, while requiring FIR filtering only. This new solution is able to greatly reduce the amount of Gibbs oscillations generated in the shifted-linear interpolation scheme.

We validate our finding by computing the PSNR of the difference between multi-rotated images and their original version.

In this paper, we present different solutions for improving spline-based snakes. First, we demonstrate their minimum curvature interpolation property, and use it as an argument to get rid of the explicit smoothness constraint. We also propose a new external energy obtained by integrating a non-linearly pre-processed image in the closed region bounded by the curve. We show that this energy, besides being efficiently computable, is sufficiently general to include the widely used gradient-based schemes, Bayesian schemes, their combinations and discriminant-based approaches. We also introduce two initialization modes and the appropriate constraint energies.

We use these ideas to develop a general snake algorithm to track boundaries of closed objects, with a user-friendly interface.

Parametric active contour models are one of the preferred approaches for image segmentation because of their computational efficiency and simplicity. However, they have a few drawbacks which limit their performance. In this paper, we identify some of these problems and propose efficient solutions to get around them. The widely used gradient-magnitude-based energy is parameterization dependent; its use will negatively affect the parameterization of the curve and, consequently, its stiffness. Hence, we introduce a new edge-based energy that is independent of the parameterization. It is also more robust, since it takes the gradient direction into account as well. We express this energy term as a surface integral, thus unifying it naturally with the region-based schemes. The unified framework enables the user to tune the image energy to the application at hand. We show that parametric snakes can guarantee low-curvature curves, but only if they are described in the curvilinear abscissa. Since normal curve evolution does not ensure constant arc length, we propose a new internal energy term that will force this configuration. The curve evolution can sometimes give rise to closed loops in the contour, which adversely interfere with the optimization algorithm. We propose a curve evolution scheme that prevents this condition.

We introduce a 3-D parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We consider a 3-D filament (consisting of a B-spline skeleton and a specified radial profile) and match its projections with the micrographs using an optimization algorithm. To accelerate the evaluation of the projections, we approximate the global model locally by an elongated blob-like template that is designed to be projection-steerable. This means that the 2-D projections of the template at any 3-D orientation can be expressed as a linear combination of a few basis functions. Thus, the matching of the template projections is reduced to evaluating a weighted sum of the inner-products between the basis functions and the micrographs.

We choose an internal energy term that penalizes the total curvature magnitude of the curve. We also use a constraint energy term that forces the curve to have a specified length. The sum of these terms along with the image energy obtained from the matching process is minimized using a conjugate-gradient algorithm. We validate the algorithm using real as well as simulated data.

We present an exact expression for the L2 error that occurs when one approximates a periodic signal in a basis of shifted and scaled versions of a generating function. This formulation is applicable to a wide variety of linear approximation schemes including wavelets, splines, and bandlimited signal expansions. The formula takes the simple form of a Parseval's-like relation, where the Fourier coefficients of the signal are weighted against a frequency kernel that characterizes the approximation operator. We use this expression to analyze the behavior of the error as the sampling step approaches zero. We also experimentally verify the expression of the error in the context of the interpolation of closed curves.
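One common form of such a relation (stated here as an assumption; the normalization follows the least-squares error-kernel formalism and may differ from the paper's exact expression) is

```latex
\epsilon^2 \;=\; \sum_{k\in\mathbb{Z}} |c_k|^2 \, E\!\left(\frac{2\pi k h}{T}\right),
\qquad
E(\omega) \;=\; 1-\frac{|\hat\varphi(\omega)|^2}{\sum_{n\in\mathbb{Z}} |\hat\varphi(\omega+2\pi n)|^2},
```

where the c_k are the Fourier coefficients of the T-periodic signal, h is the sampling step, φ is the generating function, and E is the error kernel that characterizes the approximation operator.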

We propose an algorithm for the 3-D reconstruction of DNA filaments from a pair of stereo cryo-electron micrographs. The underlying principle is to specify a 3-D model of a filament—described as a spline curve—and to fit it to the 2-D data using a snake-like algorithm. To drive the snake, we constructed a ridge-enhancing vector field for each of the images based on the maximum output of a bank of rotating matched filters. The magnitude of the field gives a confidence measure for the presence of a filament and the phase indicates its direction. We also propose a fast algorithm to perform the matched filtering. The snake algorithm starts with an initial curve (input by the user) and evolves it so that its projections on the viewing plane are in maximal agreement with the corresponding vector fields.

We present a method for the exact computation of the moments of a region bounded by a curve represented by a scaling function or wavelet basis. Using Green's Theorem, we show that the computation of the area moments is equivalent to applying a suitable multidimensional filter on the coefficients of the curve and thereafter computing a scalar product. The multidimensional filter coefficients are precomputed exactly as the solution of a two-scale relation. To demonstrate the performance improvement of the new method, we compare it with existing methods such as pixel-based approaches and approximation of the region by a polygon. We also propose an alternate scheme when the scaling function is sinc(x).
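The Green's-theorem reduction is easiest to see in the simplest case: for a polygonal curve and the zeroth-order moment (the area), the line integral collapses to the shoelace formula. This is only the underlying identity, not the wavelet-domain filter described above:

```python
def polygon_area(xs, ys):
    """Area enclosed by a closed polygon via Green's theorem:
    the double integral of dx dy equals (1/2) * contour integral of
    (x dy - y dx), which discretizes to the shoelace sum."""
    n = len(xs)
    s = 0.0
    for i in range(n):
        j = (i + 1) % n
        s += xs[i] * ys[j] - xs[j] * ys[i]
    return abs(s) / 2

# unit square traversed counter-clockwise
print(polygon_area([0, 1, 1, 0], [0, 0, 1, 1]))   # prints 1.0
```

For curves in a scaling-function basis the same line integral becomes a filtering operation on the curve coefficients, as described above.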

We analyze the representation of periodic signals in a scaling function basis. This representation is sufficiently general to include the widely used approximation schemes like wavelets, splines and Fourier series representation. We derive a closed form expression for the approximation error in the scaling function representation. The error formula takes the simple form of a Parseval like sum, weighted by an appropriate error kernel. This formula may be useful in choosing the right representation for a class of signals. We also experimentally verify the theory in the particular case of description of closed curves.

We present an exact algorithm for the computation of the moments of a region bounded by a curve represented in a scaling function or wavelet basis. Using Green's theorem, we show that the computation of the area moments is equivalent to applying a suitable multidimensional filter on the coefficients of the curve and thereafter computing a scalar product. We compare this algorithm with existing methods such as pixel-based approaches and approximation of the region by a polygon.

We introduce a three-dimensional (3-D) parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We estimate the shape by matching the projections of a 3-D global shape model with the micrographs; we choose the global model as a 3-D filament with a B-spline skeleton and a specified radial profile. The active contour algorithm iteratively updates the B-spline coefficients, which requires us to evaluate the projections and match them with the micrographs at every iteration. Since the evaluation of the projections of the global model is computationally expensive, we propose a fast algorithm based on locally approximating it by elongated blob-like templates. We introduce the concept of projection-steerability and derive a projection-steerable elongated template. Since the two-dimensional projections of such a blob at any 3-D orientation can be expressed as a linear combination of a few basis functions, matching the projections of such a 3-D template involves evaluating a weighted sum of inner products between the basis functions and the micrographs. The weights are simple functions of the 3-D orientation and the inner-products are evaluated efficiently by separable filtering. We choose an internal energy term that penalizes the average curvature magnitude. Since the exact length of the DNA molecule is known a priori, we introduce a constraint energy term that forces the curve to have this specified length. The sum of these energies along with the image energy derived from the matching process is minimized using the conjugate gradients algorithm. We validate the algorithm using real, as well as simulated, data and show that it performs well.

Purpose: To develop a motion correction for positron emission tomography (PET) using magnetic resonance (MR) images acquired simultaneously within 90 seconds.
Methods: A 90-second MR acquisition allows the generation of a cardiac and respiratory motion model of the body trunk. Thereafter, further diagnostic MR sequences can be recorded during the PET examination without any limitation. To provide full PET scan-time coverage, a sensor-fusion approach maps external motion signals (respiratory belt, ECG-derived respiration signal) to a complete surrogate signal on which the retrospective data binning is performed. A joint compressed-sensing reconstruction and motion estimation of the subsampled data provides motion-resolved MR images (respiratory + cardiac). A 1-point Dixon method is applied to these MR images to derive a motion-resolved attenuation map. The motion model and the attenuation map are fed to the Customizable and Advanced Software for Tomographic Reconstruction (CASToR) PET reconstruction system, in which the motion correction is incorporated. All reconstruction steps are performed online on the scanner via Gadgetron to provide a clinically feasible setup for improved general applicability. The method was evaluated on 36 patients with suspected liver or lung metastasis in terms of lesion quantification (SUVmax, SNR, contrast), delineation (FWHM, slope steepness), and diagnostic confidence level (3-point Likert scale).
Results: Motion correction could be conducted for all patients; however, moving lesions were observed in only 30 of them. For the 134 malignant lesions examined, an average improvement in lesion quantification of 22%, delineation of 64%, and diagnostic confidence level of 23% was achieved.
Conclusion: The proposed method provides a clinically feasible setup for respiratory and cardiac motion correction of PET data by simultaneous short-term MRI. The acquisition sequence and all reconstruction steps are publicly available to foster multi-center studies and various motion correction scenarios.

Analytic sensing is a new mathematical framework to estimate the parameters of a multi-dipole source model from boundary measurements. The method deploys two working principles. First, the sensing principle relates the boundary measurements to the volumetric interactions of the sources with the so-called "analytic sensor," a test function that is concentrated around a singular point outside the domain of interest. Second, the annihilation principle allows retrieving the projection of the dipoles' positions in a single shot by polynomial root finding. Here, we propose to apply analytic sensing in a local way; i.e., the poles are not surrounding the complete domain. By combining two local projections of the (nearby) dipolar sources, we are able to reconstruct the full 3-D information. We demonstrate the feasibility of the proposed approach for both synthetic and experimental data, together with the theoretical lower bounds of the localization error.

Source localization from EEG surface measurements is an important problem in neuro-imaging. We propose a new mathematical framework to estimate the parameters of a multidipole source model. To that aim, we perform 2-D analytic sensing in multiple planes. The estimation of the projection on each plane of the dipoles' positions, which is a non-linear problem, is reduced to polynomial root finding. The 3-D information is then recovered as a special case of tomographic reconstruction. The feasibility of the proposed approach is shown for both synthetic and experimental data.

Source imaging maps back boundary measurements to underlying generators within the domain; e.g., retrieving the parameters of the generating dipoles from electrical potential measurements on the scalp such as in electroencephalography (EEG). Fitting such a parametric source model is non-linear in the positions of the sources and renewed interest in mathematical imaging has led to several promising approaches.
One important step in these methods is the application of a sensing principle that links the boundary measurements to volumetric information about the sources. This principle is based on the divergence theorem and a mathematical test function that needs to be a homogeneous solution of the governing equations (i.e., Poisson's equation). For a specific choice of the test function, we have devised an algebraic non-iterative source localization technique for which we have coined the term "analytic sensing".
Until now, this sensing principle has been applied to homogeneous-conductivity spherical models only. Here, we extend it to multi-layer spherical models that are commonly applied in EEG. We obtain a closed-form expression for the test function that can then be applied for subsequent localization. A simulation study shows the feasibility of the proposed approach.

We consider the problem of locating point sources in the planar domain from overdetermined boundary measurements of solutions of Poisson's equation. In this paper, we propose a novel technique, termed "analytic sensing," which combines the application of Green's theorem to functions with vanishing Laplacian—known as the "reciprocity gap" principle—with the careful selection of analytic functions that "sense" the manifestation of the sources in order to determine their positions and intensities. Using this formalism we express the problem at hand as a generalized sampling problem, where the signal to be reconstructed is the source distribution. To determine the positions of the sources, which is a nonlinear problem, we extend the annihilating-filter method, which reduces the problem to solving a linear system of equations for a polynomial whose roots are the positions of the point sources. Once these positions are found, resolving the according intensities boils down to solving a linear system of equations. We demonstrate the performance of our technique in the presence of noise by comparing the achieved accuracy with the theoretical lower bound provided by Cramér-Rao theory.
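A toy planar illustration of the two working principles (the test functions ψ_n(z) = z^n are a hypothetical stand-in for the analytic sensors, and the setup is noiseless):

```python
import numpy as np

# two point sources in the plane, written as complex numbers z = x + iy
z = np.array([0.3 + 0.4j, -0.2 + 0.1j]); a = np.array([2.0, 1.0])

# "generalized measures": projections of the sources onto analytic test
# functions psi_n(z) = z^n (stand-in for the boundary-derived quantities)
mu = np.array([(a * z**n).sum() for n in range(4)])

# annihilating polynomial z^2 + h1*z + h0 whose roots are the positions:
# mu[n+2] + h1*mu[n+1] + h0*mu[n] = 0 for n = 0, 1
A = np.array([[mu[1], mu[0]], [mu[2], mu[1]]])
h1, h0 = np.linalg.solve(A, -mu[2:4])
pos = np.roots([1, h1, h0])            # recovered source positions
print(np.round(np.sort_complex(pos), 3))
```

Once the positions are known, the intensities follow from a linear system, exactly as described above.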

Inverse problems play an important role in engineering. A problem that often occurs in electromagnetics (e.g. EEG) is the estimation of the locations and strengths of point sources from boundary data.

We propose a new technique, for which we coin the term “analytic sensing”. First, generalized measures are obtained by applying Green's theorem to selected functions that are analytic in a given domain and at the same time localized to “sense” the sources. Second, we use the finite-rate-of-innovation framework to determine the locations of the sources. Hence, we construct a polynomial whose roots are the sources' locations. Finally, the strengths of the sources are found by solving a linear system of equations. Preliminary results, using synthetic data, demonstrate the feasibility of the proposed method.

We build wavelet-like functions based on a parametrized family of pseudo-differential operators Lv that satisfy some admissibility and scalability conditions. The shifts of the generalized B-splines, which are localized versions of the Green function of Lv, generate a family of L-spline spaces. These spaces have an approximation order equal to the order of the underlying operator. A sequence of embedded spaces is obtained by choosing a dyadic scale progression a = 2^i. The consecutive inclusion of the spaces yields the refinement equation, where the scaling filter depends on scale. The generalized L-wavelets are then constructed as basis functions for the orthogonal complements of spline spaces. The vanishing-moment property of conventional wavelets is generalized to the vanishing null-space-element property. In spite of the scale dependence of the filters, the wavelet decomposition can be performed using an adapted version of Mallat's filterbank algorithm.

Probably the most important property of wavelets for signal processing is their multiscale derivative-like behavior when applied to functions. In order to extend the class of problems that can profit from wavelet-based techniques, we propose to build new families of wavelets that behave like an arbitrary scale-covariant operator. Our extension is general and includes many known wavelet bases. At the same time, the method takes advantage of a fast filterbank decomposition-reconstruction algorithm. We give necessary conditions for the scale-covariant operator to admit our wavelet construction, and we provide examples of new wavelets that can be obtained with our method.

We formulate the tomographic reconstruction problem in a variational setting. The object to be reconstructed is considered as a continuous density function, unlike in the pixel-based approaches. The measurements are modeled as linear operators (Radon transform), integrating the density function along the ray path. The criterion that we minimize consists of a data term and a regularization term. The data term represents the inconsistency between applying the measurement model to the density function and the real measurements. The regularization term corresponds to the smoothness of the density function.

We show that this leads to a solution lying in a finite-dimensional vector space, which can be expressed as a linear combination of generating functions. The coefficients of this linear combination are determined from a linear system of equations, solvable either directly or by an iterative approach.

Our experiments show that our new variational method gives results comparable to the classical filtered back-projection for a high number of measurements (projection angles and sensor resolution). The new method performs better for a medium number of measurements. Furthermore, the variational approach gives usable results even with very few measurements, when the filtered back-projection fails. Our method reproduces amplitudes more faithfully and can cope with high noise levels; it can be adapted to various characteristics of the acquisition device.

The variational reconstruction theory from a companion paper finds a solution consistent with some linear constraints and minimizing a quadratic plausibility criterion. It is suitable for treating vector and multidimensional signals. Here, we apply the theory to a generalized sampling system consisting of a multichannel filterbank followed by a nonuniform sampling. We provide ready-made formulas, which should permit application of the technique directly to problems at hand.

We comment on the practical aspects of the method, such as numerical stability and speed. We show the reconstruction formula and apply it to several practical examples, including a new variational formulation of derivative sampling, landmark warping, and tomographic reconstruction.

We consider the problem of reconstructing a multidimensional vector function f: ℜ^m → ℜ^n from a finite set of linear measures. These can be irregularly sampled responses of several linear filters. Traditional approaches reconstruct in an a priori given space, e.g., the space of bandlimited functions. Instead, we have chosen to specify a reconstruction that is optimal in the sense of a quadratic plausibility criterion J. First, we present the solution of the generalized interpolation problem. Later, we also consider the approximation problem, and we show that both lead to the same class of solutions.

Imposing generally desirable properties on the reconstruction largely limits the choice of the criterion J. Linearity leads to a quadratic criterion based on bilinear forms. Specifically, we show that the requirements of translation, rotation, and scale-invariance restrict the form of the criterion to essentially a one-parameter family. We show that the solution can be obtained as a linear combination of generating functions. We provide analytical techniques to find these functions and the solution itself. Practical implementation issues and examples of applications are treated in a companion paper.
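To make the notion of generating functions concrete, here is a minimal one-dimensional sketch (the sample points and values are hypothetical). For the particular invariant criterion J(f) = ∫|f''(x)|² dx, one member of the one-parameter family mentioned above, the generating function is the Green function g(r) = |r|³ of the associated operator, and the solution is a linear combination of its shifts plus an affine term, with all coefficients obtained from a single linear system:

```python
import numpy as np

# Hypothetical 1-D illustration: irregular samples of an unknown function.
x = np.array([0.0, 0.7, 1.1, 2.0, 3.5])
y = np.sin(x)

# For the criterion J(f) = integral of |f''|^2, the generating function
# is the Green function g(r) = |r|^3, and the reconstruction reads
#   f(t) = sum_i a_i g(t - x_i) + b0 + b1*t,
# subject to the side conditions sum_i a_i = 0 and sum_i a_i x_i = 0.
G = np.abs(x[:, None] - x[None, :]) ** 3
P = np.column_stack([np.ones_like(x), x])
A = np.block([[G, P], [P.T, np.zeros((2, 2))]])
coef = np.linalg.solve(A, np.concatenate([y, np.zeros(2)]))
a, b = coef[:len(x)], coef[len(x):]

def f(t):
    """Evaluate the reconstructed function at arbitrary points t."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return (np.abs(t[:, None] - x[None, :]) ** 3) @ a + b[0] + b[1] * t
```

The same recipe carries over to several dimensions and to the other members of the invariant family; only the generating function changes (e.g., thin-plate-spline kernels in 2D).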

We consider the problem of reconstructing a multidimensional and multivariate function f: ℜ^m → ℜ^n from the discretely and irregularly sampled responses of q linear shift-invariant filters. Unlike traditional approaches, which reconstruct the function in some signal space V, our reconstruction is optimal in the sense of a plausibility criterion J. The reconstruction is either consistent with the measures or minimizes the consistency error. There is no band-limiting restriction on the input signals. We show that important characteristics of the reconstruction process are induced by the properties of the criterion J. We give the reconstruction formula and apply it to several practical cases.

Fractional sample interpolation with FIR filters is commonly used for motion compensated prediction (MCP). The FIR filtering can be viewed as a signal decomposition using restricted basis functions. The concept of generalized interpolation provides a greater degree of freedom for selecting basis functions. We implemented generalized interpolation using a combination of short IIR and FIR filters. An efficient multiplication-free design of the algorithm that is suited for hardware implementation is shown. Compared to a 6-tap FIR interpolation filter, average rate savings of 3.1% are observed. A detailed analysis of the complexity and memory bandwidth cycles compared to existing interpolation techniques for MCP is provided.

OCT performs high-resolution, cross-sectional tomographic imaging of the internal structure in materials and biological systems by measuring the coherent part of the reflected light. The physical depth resolution in OCT depends on the coherence length of the light source and lies around 10-15μm. The new parametric super-resolution method described in this paper does not depend on the coherence length of the light source, but rather on the noise level of the measurement.

The key idea is to describe the OCT measurement of a multilayer sample by a parametric model containing the locations of the layers and their amplitudes. We then find these parameters by minimizing the distance between the model and the measurement.

Accurate and robust spot tracking is a necessary tool for quantitative motion analysis in fluorescence microscopy images. Few trackers, however, consider the underlying dynamics present in biological systems. For example, the collective motion of cells often exhibits both fast dynamics, i.e., Brownian motion, and slow dynamics, i.e., time-invariant stationary motion. In this paper, we propose a novel multi-frame tracker that exploits this stationary motion. More precisely, we first estimate the stationary motion and then use it to guide the spot tracker. We obtain the stationary motion by adapting a recent optical flow algorithm that relates one image to another locally using an all-pass filter. We perform this operation over all the image frames simultaneously and estimate a single, stationary optical flow. We compare the proposed tracker with two existing techniques and show that our approach is more robust to high noise and varying structure. In addition, we also show initial experiments on real microscopy images.

We propose a non-iterative image deconvolution algorithm for data corrupted by Poisson or mixed Poisson-Gaussian noise. Many applications involve such a problem, ranging from astronomical to biological imaging. We parametrize the deconvolution process as a linear combination of elementary functions, termed linear expansion of thresholds (LET). This parametrization is then optimized by minimizing a robust estimate of the true mean squared error, the Poisson unbiased risk estimate (PURE). Each elementary function consists of a Wiener filtering followed by a pointwise thresholding of undecimated Haar wavelet coefficients. In contrast to existing approaches, the proposed algorithm merely amounts to solving a linear system of equations, which has a fast and exact solution. Simulation experiments over different types of convolution kernels and various noise levels indicate that the proposed method outperforms state-of-the-art techniques, in terms of both restoration quality and computational complexity. Finally, we present some results on real confocal fluorescence microscopy images, and demonstrate the potential applicability of the proposed method for improving the quality of these images.

Three-dimensional (3D) deconvolution microscopy is very effective in improving the quality of fluorescence microscopy images. In this work, we present an efficient approach for the deconvolution of 3D fluorescence microscopy images based on the recently developed PURE-LET algorithm. By combining multiple Wiener filters and wavelet denoising, we parametrize the deconvolution process as a linear combination of elementary functions. Then the Poisson unbiased risk estimate (PURE) is used to obtain the optimal coefficients. The proposed approach is non-iterative and outperforms existing techniques (usually, variants of the Richardson-Lucy algorithm) in terms of both computational efficiency and quality. We illustrate its effectiveness on both synthetic and real data.

We propose a non-iterative image deconvolution algorithm for data corrupted by Poisson noise. Many applications involve such a problem, ranging from astronomical to biological imaging. We parametrize the deconvolution process as a linear combination of elementary functions, termed linear expansion of thresholds (LET). This parametrization is then optimized by minimizing a robust estimate of the mean squared error, the Poisson unbiased risk estimate (PURE). Each elementary function consists of a Wiener filtering followed by a pointwise thresholding of undecimated Haar wavelet coefficients. In contrast to existing approaches, the proposed algorithm merely amounts to solving a linear system of equations, which has a fast and exact solution. Simulation experiments over various noise levels indicate that the proposed method outperforms current state-of-the-art techniques, in terms of both restoration quality and computational time.

The point-spread function (PSF) plays a fundamental role in fluorescence microscopy. A realistic and accurately calculated PSF model can significantly improve the performance in 3D deconvolution microscopy and also the localization accuracy in single-molecule microscopy. In this work, we propose a fast and accurate approximation of the Gibson-Lanni model, which has been shown to represent the PSF suitably under a variety of imaging conditions. We express Kirchhoff's integral in this model as a linear combination of rescaled Bessel functions, thus providing an integral-free way to perform the calculation. The approximation error is characterized numerically in terms of the model parameters. Experiments demonstrate that the proposed approach results in a significantly smaller computational time compared with current state-of-the-art techniques to achieve the same accuracy. This approach can also be extended to other microscopy PSF models.

Blur estimation is critical to blind image deconvolution. In this work, taking the Gaussian kernel as an example, we propose an approach to estimate the blur size for photon-limited images. This estimation is based on the minimization of a novel criterion, blur-PURE (Poisson unbiased risk estimate), which makes use of the Poisson noise statistics of the measurement. Experimental results demonstrate the effectiveness of the proposed method in various scenarios. This approach can then be plugged into our recent PURE-LET deconvolution algorithm, and an example on real fluorescence microscopy data is presented.

This paper considers the problem of relaxing the spatial coverage requirement on the mechanical rotation in the direction-finding (DF) approach based on received power measurements from a single antenna pointing in different directions. Under incomplete spatial coverage, we show that the least-squares (LS) solution used to transform the problem into its spectral form is no longer accurate, due to its ill-conditioned system matrix. To overcome this, we propose an approach based on remodeling of the spatial power measurements such that their spatial periodicity can be adjusted according to the spatial coverage. The approach also incorporates Tikhonov regularization in calculating the LS solution based on the new system matrix. Upon arriving at the new spectral form, the Cadzow annihilating-filter method can then be used to estimate the direction-of-arrival. Both simulation and experimental results are presented to show the efficacy of the proposed method.
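The Tikhonov-regularized LS step can be sketched in a few lines; the matrix below is a hypothetical stand-in for the ill-conditioned system matrix that arises under incomplete spatial coverage:

```python
import numpy as np

def tikhonov_ls(A, b, lam):
    """Tikhonov-regularized least squares:
    argmin_x ||A x - b||^2 + lam * ||x||^2,
    solved via the normal equations (A^H A + lam I) x = A^H b. The
    regularizer tames an ill-conditioned system matrix."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ b)

# Hypothetical ill-conditioned example: nearly collinear columns.
A = np.array([[1.0, 1.0],
              [1.0, 1.0000001],
              [1.0, 0.9999999]])
b = np.array([2.0, 2.0, 2.0])
x = tikhonov_ls(A, b, lam=1e-6)
```

In practice, lam trades fidelity against stability and is chosen according to the noise level and the conditioning of the new system matrix.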

This paper considers the problem of extending single-antenna power-measurement-based direction finding to the two-dimensional (2D) case, and proposes a method to estimate the azimuth-elevation direction-of-arrival (DOA) from a matrix of received power. Exploiting the fact that the azimuth-elevation antenna pattern is 2D bandlimited, the problem can be transformed into a 2D spectral analysis problem. The proposed method first decomposes the 2D spectral analysis problem into one-dimensional problems and then solves them independently. As the solution does not ensure that the estimated azimuths and elevations are paired correctly, it is subject to a permutation ambiguity. This is resolved by finding the permutation that best matches the 2D spectral representation. Simulation results demonstrating the high-resolution capability of the proposed method in the two-source case and its effectiveness in the five-source case are also presented.

In this paper, the problem of estimating the direction-of-arrival (DOA) of multiple uncorrelated sources from single-antenna power measurements is addressed. Utilizing the fact that the antenna pattern is bandlimited and can be modeled as a finite sum of complex exponentials, we first show that the problem can be transformed into a frequency estimation problem. Then, we explain how the annihilating-filter method can be used to solve for the DOA in the noiseless case. In the presence of noise, we propose to use Cadzow denoising, an iterative algorithm that exploits the rank and linear-structure properties of the data matrix. Furthermore, we derive the Cramér-Rao bound (CRB) and review several alternative approaches that serve as a comparison to the proposed approach. From the simulation and experimental results, we demonstrate that the proposed approach significantly outperforms the others. It is also evident from the Monte Carlo analysis that the proposed approach converges to the CRB.
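The Cadzow denoising step amounts to alternating projections between the set of rank-K matrices and the set of structured (Hankel/Toeplitz) data matrices. A minimal sketch (frequencies, amplitudes, and noise level all hypothetical) for a sum of two complex exponentials:

```python
import numpy as np

def cadzow(s, K, n_iter=20):
    """Cadzow denoising of a sequence s assumed to be a sum of K
    complex exponentials, by alternating two projections:
    (1) project the structured data matrix onto rank K (SVD truncation),
    (2) project back onto the structured set (anti-diagonal averaging)."""
    N = len(s)
    L = N // 2 + 1                       # rows; columns = N - L + 1
    for _ in range(n_iter):
        # Hankel data matrix T[i, j] = s[i + j] (Toeplitz up to row flip).
        T = np.array([[s[i + j] for j in range(N - L + 1)] for i in range(L)])
        U, d, Vh = np.linalg.svd(T, full_matrices=False)
        T = (U[:, :K] * d[:K]) @ Vh[:K]  # rank-K truncation
        # Average along anti-diagonals (entries with constant i + j).
        s = np.array([np.mean([T[i, k - i]
                               for i in range(max(0, k - (N - L)), min(L, k + 1))])
                      for k in range(N)])
    return s

# Hypothetical example: two noisy complex exponentials.
rng = np.random.default_rng(0)
k = np.arange(16)
clean = np.exp(1j * 0.9 * k) + 0.5 * np.exp(-1j * 0.4 * k)
noisy = clean + 0.05 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))
denoised = cadzow(noisy, K=2)
```

After denoising, the annihilating-filter method can be applied to the cleaned sequence to retrieve the frequencies (here, the DOAs).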

We present a numerical two-step reconstruction procedure for digital off-axis Fresnel holograms. First, we retrieve the amplitude and phase of the object wave in the CCD plane. For each point we solve a weighted linear set of equations in the least-squares sense. The algorithm has O(N) complexity and gives great flexibility. Second, we numerically propagate the obtained wave to achieve proper focus. We apply the method to microscopy and demonstrate its suitability for the real time imaging of biological samples.

We present a new method for reconstructing digitally recorded off-axis Fresnel holograms. Currently used reconstruction methods are based on the simulation and propagation of a reference wave that is diffracted by the hologram. This procedure introduces a twin-image and a zero-order term which are inherent to the diffraction phenomenon. These terms perturb the reconstruction and limit the field of view. Our new approach splits the reconstruction process into two parts. First, we recover the amplitude and the phase in the camera plane from the measured hologram intensity. Our algorithm is based on the hypothesis of a slowly varying object wave which interferes with a more rapidly varying reference wave. In a second step, we propagate this complex wave to refocus it using the Fresnel transform. We therefore avoid the presence of the twin-image and zero-order interference terms. This new approach is flexible and can be adapted easily to complicated experimental setups. We demonstrate its feasibility in the case of digital holographic microscopy and present results for the imaging of living neurons.

We present a new digital two-step reconstruction method for off-axis holograms recorded on a CCD camera. First, we retrieve the complex object wave in the acquisition plane from the hologram's samples. In a second step, if required, we propagate the wave front by using a digital Fresnel transform to achieve proper focus. This algorithm is sufficiently general to be applied to sophisticated optical setups that include a microscope objective. We characterize and evaluate the algorithm by using simulated data sets and demonstrate its applicability to real-world experimental conditions by reconstructing optically acquired holograms.
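The second, refocusing step (numerical Fresnel propagation of the recovered complex wave) can be sketched with a standard FFT-based transfer-function propagator. This is a generic paraxial implementation rather than the paper's exact discretization, and the grid size, wavelength, and pixel pitch below are hypothetical:

```python
import numpy as np

def fresnel_propagate(u0, wavelength, z, dx):
    """Propagate a sampled complex wave u0 over a distance z using the
    Fresnel transfer function evaluated in the Fourier domain
    (paraxial approximation; square grid with pixel pitch dx)."""
    n = u0.shape[0]
    f = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    fx, fy = np.meshgrid(f, f)
    H = np.exp(-1j * np.pi * wavelength * z * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Hypothetical values: a 64x64 Gaussian test wave, 633 nm source,
# 10 um pixels, 5 cm propagation distance.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
u0 = np.exp(-((xx - n / 2)**2 + (yy - n / 2)**2) / 50.0).astype(complex)
uz = fresnel_propagate(u0, wavelength=633e-9, z=0.05, dx=10e-6)
back = fresnel_propagate(uz, wavelength=633e-9, z=-0.05, dx=10e-6)
```

Because the transfer function has unit modulus, propagating by z and then by -z recovers the input wave, which provides a convenient sanity check for the implementation.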

We propose a construction of new wavelet-like bases that are well suited for the reconstruction and processing of optically generated Fresnel holograms recorded on CCD-arrays. The starting point is a wavelet basis of L2 to which we apply a unitary Fresnel transform. The transformed basis functions are shift-invariant on a level-by-level basis but their multiresolution properties are governed by the special form that the dilation operator takes in the Fresnel domain. We derive a Heisenberg-like uncertainty relation that relates the localization of Fresnelets with that of their associated wavelet basis. According to this criterion, the optimal functions for digital hologram processing turn out to be Gabor functions, bringing together two separate aspects of the holography inventor's work.

We give the explicit expression of orthogonal and semi-orthogonal Fresnelet bases corresponding to polynomial spline wavelets. This special choice of Fresnelets is motivated by their near-optimal localization properties and their approximation characteristics. We then present an efficient multiresolution Fresnel transform algorithm, the Fresnelet transform. This algorithm allows for the reconstruction (backpropagation) of complex scalar waves at several user-defined, wavelength-independent resolutions. Furthermore, when reconstructing numerical holograms, the subband decomposition of the Fresnelet transform naturally separates the image to reconstruct from the unwanted zero-order and twin image terms. This greatly facilitates their suppression. We show results of experiments carried out on both synthetic (simulated) data sets as well as on digitally acquired holograms.

We present a zero-order and twin image elimination algorithm for digital Fresnel holograms that were acquired in an off-axis geometry. These interference terms arise when the digital hologram is reconstructed and corrupt the result. Our algorithm is based on the Fresnelet transform, a wavelet-like transform that uses basis functions tailor-made for digital holography. We show that in the Fresnelet domain, the coefficients associated to the interference terms are separated both spatially and with respect to the frequency bands. We propose a method to suppress them by selectively thresholding the Fresnelet coefficients. Unlike other methods that operate in the Fourier domain and affect the whole spatial domain, our method operates locally in both space and frequency, allowing for a more targeted processing.

We present a new class of wavelet bases—Fresnelets—which is obtained by applying the Fresnel transform operator to a wavelet basis of L2. The thus constructed wavelet family exhibits properties that are particularly useful for analyzing and processing optically generated holograms recorded on CCD-arrays.

We first investigate the multiresolution properties (translation, dilation) of the Fresnel transform that are needed to construct our new wavelet. We derive a Heisenberg-like uncertainty relation that links the localization of the Fresnelets with that of the original wavelet basis. We give the explicit expression of orthogonal and semi-orthogonal Fresnelet bases corresponding to polynomial spline wavelets. We conclude that the Fresnel B-splines are particularly well suited for processing holograms because they tend to be well localized in both domains.

We propose a vector/matrix extension of our denoising algorithm initially developed for grayscale images, in order to efficiently process multichannel (e.g., color) images. This work follows our recently published SURE-LET approach where the denoising algorithm is parameterized as a linear expansion of thresholds (LET) and optimized using Stein's unbiased risk estimate (SURE). The proposed wavelet thresholding function is pointwise and depends on the coefficients of same location in the other channels, as well as on their parents in the coarser wavelet subband. A nonredundant, orthonormal, wavelet transform is first applied to the noisy data, followed by the (subband-dependent) vector-valued thresholding of individual multichannel wavelet coefficients which are finally brought back to the image domain by inverse wavelet transform. Extensive comparisons with the state-of-the-art multiresolution image denoising algorithms indicate that despite being nonredundant, our algorithm matches the quality of the best redundant approaches, while maintaining a high computational efficiency and a low CPU/memory consumption. An online Java demo illustrates these assertions.

We propose a new orthonormal wavelet thresholding algorithm for denoising color images that are assumed to be corrupted by additive Gaussian white noise of known intercolor covariance matrix. The proposed wavelet denoiser consists of a linear expansion of thresholding (LET) functions, integrating both the interscale and intercolor dependencies. The linear parameters of the combination are then solved for by minimizing Stein's unbiased risk estimate (SURE), which is nothing but a robust unbiased estimate of the mean squared error (MSE) between the (unknown) noise-free data and the denoised one. Thanks to the quadratic form of this MSE estimate, the parameter optimization simply amounts to solving a linear system of equations.

The experiments we performed over a wide range of noise levels and for a representative set of standard color images show that our algorithm yields slightly better peak signal-to-noise ratios than most state-of-the-art wavelet thresholding procedures, even when the latter are executed in an undecimated wavelet representation.

We devise a new undecimated wavelet thresholding for denoising images corrupted by additive Gaussian white noise. The first key point of our approach is the use of a linearly parameterized pointwise thresholding function. The second key point consists in optimizing the parameters globally by minimizing Stein's unbiased MSE estimate (SURE) directly in the image-domain, and not separately in the wavelet subbands.

Remarkably, our method gives results similar to the best state-of-the-art algorithms, despite using only a simple pointwise thresholding function; we demonstrate this in simulations over a wide range of noise levels for a representative set of standard grayscale images.

We use a comprehensive set of non-redundant orthogonal wavelet transforms and apply a denoising method called SUREshrink in each individual wavelet subband to denoise images corrupted by additive Gaussian white noise. We show that, for various images and a wide range of input noise levels, the orthogonal fractional (α, τ)-B-splines give the best peak signal-to-noise ratio (PSNR), as compared to standard wavelet bases (Daubechies wavelets, symlets and coiflets). Moreover, the selection of the best set (α, τ) can be performed on the MSE estimate (SURE) itself, not on the actual MSE (Oracle).

Finally, the use of complex-valued fractional B-splines leads to even more significant improvements; they also outperform the complex Daubechies wavelets.

We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

We propose an efficient orthonormal wavelet-domain video denoising algorithm based on an appropriate integration of motion compensation into an adapted version of our recently devised Stein's unbiased risk estimator-linear expansion of thresholds (SURE-LET) approach. To take full advantage of the strong spatio-temporal correlations of neighboring frames, a global motion compensation followed by a selective block matching is first applied to adjacent frames, which increases their temporal correlations without distorting the interframe noise statistics. Then, a multiframe interscale wavelet thresholding is performed to denoise the current central frame. The simulations we made on standard grayscale video sequences for various noise levels demonstrate the efficiency of the proposed solution in reducing additive white Gaussian noise. Our results, obtained at a lighter computational load, are even competitive with most state-of-the-art redundant wavelet-based techniques. By using a cycle-spinning strategy, our algorithm is in fact able to outperform these methods.

We propose a novel algorithm for denoising Poisson-corrupted images that performs a signal-adaptive thresholding of the undecimated Haar wavelet coefficients. A Poisson unbiased MSE estimate is devised and adapted to arbitrary transform-domain pointwise processing. This prior-free quadratic measure of quality is then used to globally optimize a linearly parameterized subband-adaptive thresholding, which accounts for the signal-dependent noise variance. We demonstrate the qualitative and computational competitiveness of the resulting denoising algorithm through comprehensive comparisons with some state-of-the-art multiscale techniques specifically designed for Poisson intensity estimation. We also show promising denoising results obtained on low-count fluorescence microscopy images.

This paper introduces a new approach to orthonormal wavelet image denoising. Instead of postulating a statistical model for the wavelet coefficients, we directly parametrize the denoising process as a sum of elementary nonlinear processes with unknown weights. We then minimize an estimate of the mean square error between the clean image and the denoised one. The key point is that we have at our disposal a very accurate, statistically unbiased, MSE estimate—Stein's unbiased risk estimate—that depends on the noisy image alone, not on the clean one. Like the MSE, this estimate is quadratic in the unknown weights, and its minimization amounts to solving a linear system of equations. The existence of this a priori estimate makes it unnecessary to devise a specific statistical model for the wavelet coefficients. Instead, and contrary to the custom in the literature, these coefficients are not considered random anymore. We describe an interscale orthonormal wavelet thresholding algorithm based on this new approach and show its near-optimal performance—both regarding quality and CPU requirement—by comparing with the results of three state-of-the-art nonredundant denoising algorithms on a large set of test images. An interesting fallout of this study is the development of a new, group-delay-based, parent-child prediction in a wavelet dyadic tree.
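As a toy illustration of why the minimization reduces to a linear system, consider a single orthonormal-domain band and a LET with only two elementary processes, the identity and a smooth shrinkage (the signal, noise level, and threshold T below are all hypothetical). Since SURE is quadratic in the weights, the optimal weights solve a 2x2 system:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5
N = 4096
# Hypothetical sparse "coefficient" sequence, standing in for the
# orthonormal wavelet coefficients of a clean image.
x = np.zeros(N)
x[rng.choice(N, 200, replace=False)] = 4.0 * rng.standard_normal(200)
y = x + sigma * rng.standard_normal(N)

# Two elementary pointwise processes and their derivatives.
T = 2.0 * sigma                      # hypothetical knee of the shrinkage
t1, d1 = y, np.ones_like(y)                          # identity
e = np.exp(-(y / T) ** 4)
t2, d2 = y * e, e * (1.0 - 4.0 * (y / T) ** 4)       # smooth shrinkage

# SURE(a) = ||F a - y||^2/N - sigma^2 + (2 sigma^2/N) sum_k a_k d_k(y)
# is quadratic in the weights a, so its minimizer solves
# (F^T F) a = c with c_k = <y, t_k> - sigma^2 * sum(d_k).
F = np.column_stack([t1, t2])
M = F.T @ F
c = np.array([y @ t1 - sigma**2 * d1.sum(),
              y @ t2 - sigma**2 * d2.sum()])
a = np.linalg.solve(M, c)
denoised = F @ a
```

The actual algorithm uses richer elementary processes (including the interscale parent-child prediction) and optimizes jointly over subbands, but the structure of the optimization is exactly this linear system.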

We propose here a new pointwise wavelet thresholding function that incorporates inter-scale dependencies. This non-linear function depends on a set of four linear parameters per subband which are set by minimizing Stein's unbiased MSE estimate (SURE). Our approach assumes additive Gaussian white noise.

In order for the inter-scale dependencies to be faithfully taken into account, we also develop a rigorous feature-alignment procedure that is adapted to arbitrary wavelet filters (e.g., non-symmetric filters).

Finally, we demonstrate the efficiency of our denoising approach in simulations over a wide range of noise levels for a representative set of standard images.

In this paper, we derive an unbiased expression for the expected mean-squared error associated with continuously differentiable estimators of the noncentrality parameter of a chi-square random variable. We then consider the task of denoising squared-magnitude magnetic resonance (MR) image data, which are well modeled as independent noncentral chi-square random variables on two degrees of freedom. We consider two broad classes of linearly parameterized shrinkage estimators that can be optimized using our risk estimate, one in the general context of undecimated filterbank transforms, and the other in the specific case of the unnormalized Haar wavelet transform. The resultant algorithms are computationally tractable and improve upon most state-of-the-art methods for both simulated and actual MR image data.

We present a fast algorithm for image restoration in the presence of Poisson noise. Our approach is based on (1) the minimization of an unbiased estimate of the MSE for Poisson noise, (2) a linear parametrization of the denoising process and (3) the preservation of Poisson statistics across scales within the Haar DWT. The minimization of the MSE estimate is performed independently in each wavelet subband, but this is equivalent to a global image-domain MSE minimization, thanks to the orthogonality of Haar wavelets. This is an important difference with standard Poisson noise-removal methods, in particular those that rely on a non-linear preprocessing of the data to stabilize the variance.
Our non-redundant interscale wavelet thresholding outperforms standard variance-stabilizing schemes, even when the latter are applied in a translation-invariant setting (cycle-spinning). It also achieves a quality similar to a state-of-the-art multiscale method that was specially developed for Poisson data. Considering that the computational complexity of our method is orders of magnitude lower, it is a very competitive alternative.
The proposed approach is particularly promising in the context of low signal intensities and/or large data sets. This is illustrated experimentally with the denoising of low-count fluorescence micrographs of a biological sample.
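The statistics-preserving property that the method relies on is easy to see in code: one level of the unnormalized Haar DWT computes pairwise sums and differences, and a sum of independent Poisson counts is again Poisson-distributed. A minimal sketch of the analysis/synthesis pair (the data values are hypothetical):

```python
import numpy as np

def haar_analysis(x):
    """One level of the unnormalized Haar DWT: pairwise sums and
    differences. The scaling coefficients s are sums of independent
    Poisson counts, hence again Poisson — the property that allows
    exact PURE-based thresholding at every scale."""
    s = x[0::2] + x[1::2]
    d = x[0::2] - x[1::2]
    return s, d

def haar_synthesis(s, d):
    """Invert the unnormalized Haar step (perfect reconstruction)."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / 2
    x[1::2] = (s - d) / 2
    return x

rng = np.random.default_rng(2)
counts = rng.poisson(lam=5.0, size=16).astype(float)
s, d = haar_analysis(counts)
rec = haar_synthesis(s, d)
```

Recursing on s yields the full multiscale decomposition, with the Poisson statistics preserved along the lowpass branch at every level.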

We propose a non-Bayesian denoising algorithm to reduce the Poisson noise that is typically dominant in fluorescence microscopy data. To process large datasets at a low computational cost, we use the unnormalized Haar wavelet transform. Thanks to some of its appealing properties, independent unbiased MSE estimates can be derived for each subband. Based on these Poisson unbiased MSE estimates, we then optimize linearly parameterized interscale thresholding. Correlations between adjacent images of the multidimensional data are accounted for through a sliding window approach. Experiments on simulated and real data show that the proposed solution is qualitatively similar to a state-of-the-art multiscale method, while being orders of magnitude faster.

Currently, the standard event-related potential (ERP) technique consists in averaging many on-going electroencephalogram (EEG) trials obtained with the same stimuli. Key questions are how to extract the ERP from on-going EEG with fewer averaged trials and how to further decompose the ERP into basic components related to cognitive processes. In this paper we introduce a novel Blind Source Separation (BSS) approach based on a weak exclusion principle (WEP) to solve these problems. The strength of this algorithm is that it is based on a deterministic principle, which is more appropriate for analyzing non-stationary EEG signals than most other BSS methods based on statistical hypotheses. The results show that our BSS algorithm can quickly and effectively extract ERPs using fewer averaged trials than the traditional averaging methods. We show that, via BSS, we can isolate two main ERP components, which are respectively related to an exogenous process and a cognitive process, and can discriminate between the occipital-lobe and frontal-lobe responses of the brain, in agreement with the classical component modeling of ERPs. Single-trial ERP separation results have demonstrated the consistency of these two main ERP components. Thus, BSS based on WEP can provide a window to better understand ERPs, not only in their averaging behavior, but in the complexities of moment-to-moment dynamics as well.


The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG Blind Source Separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that it is not based on the hypothesis that the sources are statistically independent. Our first step was to investigate the algorithm's performance on simulated signals for which the ground truth is known, so as to illustrate its efficacy. The results show that the proposed algorithm has good separation performance. We then used it to separate real EEG signals from a memory study using a revised version of the Sternberg task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources.

Biometrics is a growing field which permits the identification of individuals by means of unique physical features. Electroencephalography (EEG)-based biometrics utilizes the small intra-personal differences and large inter-personal differences between individuals' brainwave patterns. In the past, such methods have used features derived from manually designed procedures for this purpose. Another possibility is to use convolutional neural networks (CNN) to automatically extract an individual's most distinctive neural features and conduct classification, using EEG data derived from both Resting State with Open Eyes (REO) and Resting State with Closed Eyes (REC). Results indicate that this CNN-based, jointly optimized EEG biometric system yields a high identification accuracy (88%) for 10-class classification. Furthermore, rich inter-personal differences can be found in a very low frequency band (0-2 Hz). Additionally, the results suggest that the temporal window over which subjects can be individualized is shorter than 200 ms.

Modeling the acoustical process of soft biological tissue imaging and understanding the consequences of the approximations required by such modeling are key steps for accurately simulating ultrasonic scanning as well as estimating the scattering coefficient of the imaged matter.

In this document, a linear solution to the inhomogeneous ultrasonic wave equation is proposed. The classical assumptions required for linearization are applied; however, no approximation is made in the mathematical development regarding density and speed of sound. This leads to an expression of the scattering term that establishes a correspondence between the signal measured by an ultrasound transducer and an intrinsic mechanical property of the imaged tissues. This expression shows that considering the scattering as a function of small variations in the density and speed of sound around their mean values along with classical assumptions in this domain is equivalent to associating the acoustical acquisition with a measure of the relative longitudinal bulk modulus.

Comparison of the proposed model with Jensen's earlier model shows that it is also appropriate for performing accurate simulations of the acoustical imaging process.

In this correspondence, we consider sampling continuous-time periodic bandlimited signals which contain additive shot noise. The classical sampling scheme does not perfectly recover these particular nonbandlimited signals but only reconstructs a lowpass-filtered approximation. By modeling the shot noise as a stream of Dirac pulses, we first show that the sum of a bandlimited signal and a stream of Dirac pulses falls into the class of signals with a finite rate of innovation, that is, a finite number of degrees of freedom. Second, by taking into account the degrees of freedom of the bandlimited signal in the sampling and reconstruction scheme previously developed for streams of Dirac pulses, we derive a sampling and perfect reconstruction scheme for the bandlimited signal with additive shot noise.
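The annihilating-filter step at the core of FRI reconstruction can be sketched for a noiseless stream of K = 2 Diracs (a simplified toy setting, not the full bandlimited-plus-shot-noise scheme of the paper; all parameter values below are illustrative):

```python
import numpy as np

# Illustrative parameters: K = 2 Diracs on a period tau = 1.
tau = 1.0
t_true = np.array([0.23, 0.71])   # Dirac locations
a_true = np.array([1.5, 0.8])     # Dirac amplitudes
K = len(t_true)

# Noiseless Fourier-series coefficients X[m] = sum_k a_k exp(-2j pi m t_k / tau).
m = np.arange(2 * K + 1)
X = np.exp(-2j * np.pi * np.outer(m, t_true) / tau) @ a_true

# Annihilating filter h (length K + 1): sum_i h[i] X[m - i] = 0 for m >= K.
# Stack the equations and take the null-space vector via the SVD.
A = np.array([[X[mm - i] for i in range(K + 1)] for mm in range(K, 2 * K + 1)])
h = np.linalg.svd(A)[2][-1].conj()

# The roots of h are u_k = exp(-2j pi t_k / tau), which encode the locations.
u = np.roots(h)
t_est = np.sort(np.mod(-np.angle(u) * tau / (2 * np.pi), tau))
```

Once the locations are known, the amplitudes follow from a linear least-squares fit to the Fourier coefficients.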

This paper describes an application allowing content-based retrieval that can thus be considered as an MPEG-7 example application. The application may be called "sketch-based database retrieval" since the user interacts with the database by means of sketches. The user draws a request with a pencil: the request image is then a binary image comprising a contour on a uniform background.

A link is established between the discrete Fourier transform (DFT) and two high-resolution methods, MUSIC and the Tufts-Kumaresan (1982) method (TK). The existence and location of the extraneous peaks of MUSIC and the noise zeros of TK are related to the minima of the DFT of the rectangular window filtering the data. Other properties of the noise zeros are given, in relation to polynomial theory.

We propose to design the reduction operator of an image pyramid so as to minimize the approximation error in the lp-sense (not restricted to the usual p = 2), where p can take non-integer values. The underlying image model is specified using shift-invariant basis functions, such as B-splines. The solution is well-defined and determined by an iterative optimization algorithm based on digital filtering. Its convergence is accelerated by the use of first and second order derivatives. For p close to 1, we show that the ringing is reduced and that the histogram of the detail image is sparse as compared with the standard case, where p = 2.

We present an optimal spline-based algorithm for the enlargement or reduction of digital images with arbitrary (noninteger) scaling factors. This projection-based approach can be realized thanks to a new finite difference method that allows the computation of inner products with analysis functions that are B-splines of any degree n. A noteworthy property of the algorithm is that the computational complexity per pixel does not depend on the scaling factor a. For a given choice of basis functions, the results of our method are consistently better than those of the standard interpolation procedure; the present scheme achieves a reduction of artifacts such as aliasing and blocking and a significant improvement of the signal-to-noise ratio. The method can be generalized to include other classes of piecewise polynomial functions, expressed as linear combinations of B-splines and their derivatives.

We propose a new technique to perform nonuniform to uniform grid conversion: first, interpolate using nonuniform splines, then project the resulting function onto a uniform spline space and finally, resample. We derive a closed form solution to the least-squares approximation problem. Our implementation is computationally exact and works for arbitrary sampling rates. We present examples that illustrate advantages of our projection technique over direct interpolation and resampling. The main benefit is the suppression of aliasing.
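The benefit of projection over direct resampling can be seen in a toy 1-D experiment. The sketch below is illustrative only: it uses a piecewise-constant space instead of the splines of the paper, and compares the l2 error of a least-squares projection onto a coarse space with that of plain decimate-and-hold.

```python
import numpy as np

# Fine grid standing in for the "continuous" function; coarse grid step h = 8.
n_fine, h = 1024, 8
t = np.arange(n_fine)
f = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 9)  # fast component aliases

# Basis of non-overlapping box functions on the coarse grid.
B = np.zeros((n_fine, n_fine // h))
for k in range(n_fine // h):
    B[k * h:(k + 1) * h, k] = 1.0

# Least-squares projection onto the coarse space...
c_proj, *_ = np.linalg.lstsq(B, f, rcond=None)
f_proj = B @ c_proj

# ...versus direct resampling (keep every h-th value, then hold).
c_samp = f[::h]
f_samp = B @ c_samp

err_proj = np.linalg.norm(f - f_proj)
err_samp = np.linalg.norm(f - f_samp)
```

The projection is the l2-optimal element of the coarse space by construction, so its error is never larger than that of direct resampling; the gap is largest when the input contains frequencies above the coarse-grid Nyquist rate, which is exactly the aliasing-suppression effect described above.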

We propose to design the reduction operator of an image pyramid so as to minimize the approximation error in the lp sense (not restricted to the usual p = 2), where p can take non-integer values. The underlying image model is specified using arbitrary shift-invariant basis functions such as splines. The solution is determined by an iterative optimization algorithm, based on digital filtering. Its convergence is accelerated by the use of first and second derivatives. For p = 1, our modified pyramid is robust to outliers; edges are preserved better than in the standard case where p = 2. For 1 < p < 2, the pyramid decomposition combines the qualities of l1 and l2 approximations. The method is applied to edge detection and its improved performance over the standard formulation is determined.

We present an optimal spline-based algorithm for the enlargement or reduction of digital images with arbitrary scaling factors. A demonstration is available on the web at http://bigwww.epfl.ch/demo/jresize/. This projection-based approach is realizable thanks to a new finite difference method that allows the computation of inner products with analysis functions that are B-splines of any degree n. For a given choice of basis functions, the results of our method are consistently better than those of the standard interpolation procedure; the present scheme achieves a reduction of artifacts such as aliasing and blocking and a significant improvement of the signal-to-noise ratio.

This paper proposes a novel algorithmic framework to solve image restoration problems under sparsity assumptions. As usual, the reconstructed image is the minimizer of an objective functional that consists of a data fidelity term and an l1 regularization. However, instead of estimating the reconstructed image that minimizes the objective functional directly, we focus on the restoration process that maps the degraded measurements to the reconstruction. Our idea amounts to parameterizing the process as a linear combination of few elementary thresholding functions (LET) and solving for the linear weighting coefficients by minimizing the objective functional. It is then possible to update the thresholding functions and to iterate this process (i-LET). The key advantage of such a linear parametrization is that the problem size reduces dramatically: each time, we only need to solve an optimization problem over the dimension of the linear coefficients (typically less than 10) instead of the whole image dimension. With elementary thresholding functions satisfying certain constraints, global convergence of the iterated LET algorithm is guaranteed. Experiments on several test images over a wide range of noise levels and different types of convolution kernels clearly indicate that the proposed framework usually outperforms state-of-the-art algorithms in terms of both CPU time and number of iterations.

We focus on image restoration that consists in regularizing a quadratic data-fidelity term with the standard sparsity-enforcing l1 norm. We propose a novel algorithmic approach to solve this optimization problem. Our idea amounts to approximating the result of the restoration as a linear combination of basic thresholding functions (e.g., soft-thresholds) weighted by unknown coefficients. The few coefficients of this expansion are obtained by minimizing the equivalent low-dimensional l1-norm regularized objective function, which can be solved efficiently with standard convex optimization techniques, e.g., iterative reweighted least-squares (IRLS). By iterating this process, we claim that we reach the global minimum of the objective function. Experimentally, we observe that very few iterations are required before convergence is reached.
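The dimensionality reduction behind the LET parametrization can be sketched in a few lines. For simplicity, this toy example fits the three weights against the known clean signal (an oracle), whereas the papers above minimize the objective functional or a risk estimate from the data alone:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048
x = np.zeros(n)
x[rng.choice(n, 40, replace=False)] = rng.normal(0, 4, 40)  # sparse clean signal
y = x + rng.normal(0, 1.0, n)                               # noisy observation

def soft(y, t):
    """Elementary soft-threshold with threshold t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# LET idea: the estimate is a linear combination of a few elementary
# thresholds; only the 3 weights c must be optimized, not the whole
# n-dimensional signal.
Theta = np.column_stack([soft(y, 1.0), soft(y, 2.0), soft(y, 3.0)])

# For illustration only: fit the weights against the clean signal.
c, *_ = np.linalg.lstsq(Theta, x, rcond=None)
x_let = Theta @ c

mse_noisy = np.mean((y - x) ** 2)
mse_let = np.mean((x_let - x) ** 2)
```

The optimization is over 3 coefficients instead of 2048 unknowns, which is the dramatic reduction in problem size claimed above.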

In this paper, we extend the theory of sampling signals with finite rate of innovation (FRI) to a specific class of two-dimensional curves, which are defined implicitly as the zeros of a mask function. Here the mask function has a parametric representation as a weighted summation of a finite number of complex exponentials, and therefore, has finite rate of innovation [1]. An associated edge image, which is discontinuous on the predefined parametric curve, is proved to satisfy a set of linear annihilation equations. We show that it is possible to reconstruct the parameters of the curve (i.e. to detect the exact edge positions in the continuous domain) based on the annihilation equations. Robust reconstruction algorithms are also developed to cope with scenarios with model mismatch. Moreover, the annihilation equations that characterize the curve are linear constraints that can be easily exploited in optimization problems for further image processing (e.g., image up-sampling). We demonstrate one potential application of the annihilation algorithm with examples in edge-preserving interpolation. Experimental results with both synthetic curves as well as edges of natural images clearly show the effectiveness of the annihilation constraint in preserving sharp edges, and improving SNRs.

We focus on a specific class of curves that can be parametrized with a finite number of variables in two dimensions. The corresponding indicator plane, which is a binary image, has infinite bandwidth and cannot be sampled and perfectly reconstructed with classical sampling theory. In this paper, we show that it is possible to recover the parameters from finite samples of the indicator plane and to obtain a perfect reconstruction of the indicator plane. The algorithm presented here extends the application of FRI signals to multi-dimensional cases and may find applications in fields such as super-resolution.

It is a classic problem to estimate continuous-time sparse signals, like point sources in a direction-of-arrival problem, or pulses in a time-of-flight measurement. The earliest occurrence is the estimation of sinusoids in time series using Prony's method. This is at the root of a substantial line of work on high-resolution spectral estimation. The estimation of continuous-time sparse signals from discrete-time samples is the goal of the sampling theory for finite rate of innovation (FRI) signals. Both spectral estimation and FRI sampling usually assume uniform sampling. But not all measurements are obtained uniformly, as exemplified by a concrete radioastronomy problem we set out to solve. Thus, we develop the theory and algorithms to reconstruct sparse signals, typically sums of sinusoids, from non-uniform samples. We achieve this by identifying a linear transformation that relates the unknown uniform samples of sinusoids to the given measurements. These uniform samples are known to satisfy the annihilation equations. A valid solution is then obtained by solving a constrained minimization such that the reconstructed signal is consistent with the given measurements and satisfies the annihilation constraint. Thanks to this new approach, we unify a variety of FRI-based methods. We demonstrate the versatility and robustness of the proposed approach with five FRI reconstruction problems, namely Dirac reconstructions with irregular time or Fourier domain samples, FRI curve reconstruction, Dirac reconstruction on the sphere, and point-source reconstruction in radioastronomy. The proposed algorithm improves substantially over state-of-the-art methods and is able to reconstruct point sources accurately from irregularly sampled Fourier measurements under severe noise conditions.

We propose a novel edge detection algorithm with sub-pixel accuracy based on the annihilation of signals with finite rate of innovation. We show that the Fourier-domain annihilation equations can be interpreted as spatial-domain multiplications. From this new perspective, we obtain an accurate estimation of the edge model by assuming a simple parametric form within each localised block. Further, we build a locally adaptive global mask function (i.e., our edge model) for the whole image. The mask function is then used as an edge-preserving constraint in further processing. Numerical experiments on both edge localisation and image up-sampling show the effectiveness of the proposed approach, which outperforms state-of-the-art methods.

Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance dependent only on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims. The aims were (i) to adapt FRI to radio astronomy, (ii) to verify that it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) to show that sources can be found using less data than would otherwise be required to find them, and (iv) to show that FRI does not lead to an augmented rate of false positives. Methods. We implemented a continuous-domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results. We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities.
In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a reconstruction quality comparable to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended sources, and that false positives can be addressed by choosing an adequate model order to match the noise level.

Multiplicative noise degrades a signal far more severely than additive noise of comparable strength. In this paper, we address the problem of suppressing multiplicative noise in one-dimensional signals. To deal with such signals, we propose a denoising algorithm based on the minimization of an unbiased estimator of the mean-square error (MSE), referred to as MURE. We derive an expression for this unbiased MSE estimate. The denoising is carried out in the wavelet domain (soft thresholding) while MURE is evaluated in the time domain. The parameters of the thresholding function are obtained by minimizing MURE. We show that the optimal MURE parameters are very close to the optimal parameters obtained with the oracle MSE. Experiments show that the SNR improvement of the proposed denoising algorithm is competitive with a state-of-the-art method.

This paper presents a number of data processing algorithms developed to improve the accuracy of results derived from datasets acquired by a recently designed terahertz handheld probe. These techniques include a baseline subtraction algorithm and a number of algorithms to extract the sample impulse response: double Gaussian inverse filtering, frequency-wavelet domain deconvolution, and sparse deconvolution. In vivo measurements of human skin are used as examples, and a comparison is made of the terahertz impulse response from a number of different skin positions. The algorithms presented enable both the spectroscopic and time domain properties of samples measured in reflection geometry to be better determined compared to previous calculation methods.

The construction and animation of face objects in MPEG-4/SNHC (synthetic natural hybrid coding) systems implies content-based and semantic analysis of the observed 3D scene. These processes require sophisticated image-processing tools and algorithms that are time-consuming and thus not well suited to videoconferencing applications. With regard to the coding process following the MPEG-4 standard, it is shown that four possible levels of coding are ready to send in an MPEG-4 data stream. The main functionalities sought in the MPEG-4 standard are data scalability, user interactivity, and openness to various kinds of coding schemes. Even without a high-order semantic interpretation of the observed scene, it is possible to build and animate a facial 3D model. A transcoding system is then needed to fit the MPEG-4 data-stream format.

This paper deals with fast image and video segmentation using active contours. Region-based active contours using level sets are powerful techniques for video segmentation, but they suffer from large computational cost. A parametric active contour method based on B-Spline interpolation has been proposed in [1] to highly reduce the computational cost, but this method is sensitive to noise. Here, we choose to relax the rigid interpolation constraint in order to robustify our method in the presence of noise: by using smoothing splines, we trade a tunable amount of interpolation error for a smoother spline curve. We show by experiments on natural sequences that this new flexibility yields segmentation results of higher quality at no additional computational cost. Hence, real-time processing for moving objects segmentation is preserved.


Signals, including signals from outside of the subspace of bandlimited signals associated with the Shannon theorem, are acquired while still providing an acceptable reconstruction. In some aspects a denoising process is used in conjunction with sparse sampling techniques. For example, a denoising process utilizing a Cadzow algorithm may be used to reduce the amount of noise associated with sampled information. In some aspects the denoising process may be iterative such that the denoising process is repeated until the samples are denoised to a sufficient degree. In some aspects, the denoising process converts a set of received samples into another set corresponding to a signal with a Finite Rate of Innovation (FRI), or to an approximation of such a signal. The disclosure relates in some aspects to combination of a denoising process with annihilating filter methods to retrieve information from a noisy, sparse sampled signal. The disclosure relates in some aspects to determining a sampling kernel to be used to sample the signal based on noise associated with the signal. The disclosure relates in some aspects to determining the number of samples to obtain from a signal over a period of time based on noise associated with the signal. The disclosure relates in some aspects to determining the finite number of innovations of a received signal.

We consider the problem of optimizing the parameters of a given denoising algorithm for restoration of a signal corrupted by white Gaussian noise. To achieve this, we propose to minimize Stein's unbiased risk estimate (SURE) which provides a means of assessing the true mean-squared error (MSE) purely from the measured data without need for any knowledge about the noise-free signal. Specifically, we present a novel Monte-Carlo technique which enables the user to calculate SURE for an arbitrary denoising algorithm characterized by some specific parameter setting. Our method is a black-box approach which solely uses the response of the denoising operator to additional input noise and does not ask for any information about its functional form. This, therefore, permits the use of SURE for optimization of a wide variety of denoising algorithms. We justify our claims by presenting experimental results for SURE-based optimization of a series of popular image-denoising algorithms such as total-variation denoising, wavelet soft-thresholding, and Wiener filtering/smoothing splines. In the process, we also compare the performance of these methods. We demonstrate numerically that SURE computed using the new approach accurately predicts the true MSE for all the considered algorithms. We also show that SURE uncovers the optimal values of the parameters in all cases.
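The Monte-Carlo divergence trick at the heart of this approach can be sketched as follows. The denoiser here is a toy linear shrinkage, chosen because its true risk is easy to compute with a known clean signal; it is not one of the algorithms benchmarked in the paper, and all parameter values are illustrative:

```python
import numpy as np

def mc_sure(denoise, y, sigma, eps=1e-4, rng=None):
    """Monte-Carlo SURE for a black-box denoiser (sketch).

    The divergence term is estimated from the response of the denoiser
    to a small random perturbation, so no analytic form of the operator
    is needed.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = y.size
    b = rng.standard_normal(n)
    fy = denoise(y)
    div = b @ (denoise(y + eps * b) - fy) / eps
    return np.sum((y - fy) ** 2) / n - sigma ** 2 + 2 * sigma ** 2 * div / n

# Toy check on a linear shrinkage denoiser f(y) = a * y; here the clean
# signal is known, so SURE can be compared against the actual MSE.
rng = np.random.default_rng(0)
sigma, n = 1.0, 100_000
x = np.full(n, 2.0)                       # clean (constant) signal
y = x + sigma * rng.standard_normal(n)    # noisy observation

a = 0.8
denoise = lambda z: a * z
sure = mc_sure(denoise, y, sigma, rng=rng)
true_mse = np.mean((denoise(y) - x) ** 2)
```

Sweeping the parameter a and keeping the value that minimizes `sure` then optimizes the denoiser without ever seeing the noise-free signal.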

We consider the problem of optimizing the parameters of an arbitrary denoising algorithm by minimizing Stein's Unbiased Risk Estimate (SURE), which provides a means of assessing the true mean-squared error (MSE) purely from the measured data, assuming that it is corrupted by Gaussian noise. To accomplish this, we propose a novel Monte-Carlo technique based on a black-box approach which enables the user to compute SURE for an arbitrary denoising algorithm with some specific parameter setting. Our method only requires the response of the denoising algorithm to additional input noise and does not ask for any information about the functional form of the corresponding denoising operator. This, therefore, permits SURE-based optimization of a wide variety of denoising algorithms (global-iterative, pointwise, etc.). We present experimental results to justify our claims.

Shannon's sampling theory and its variants provide effective solutions to the problem of reconstructing a signal from its samples in some “shift-invariant” space, which may or may not be bandlimited. In this paper, we present some further justification for this type of representation, while addressing the issue of the specification of the best reconstruction space. We consider a realistic setting where a multidimensional signal is prefiltered prior to sampling, and the samples are corrupted by additive noise. We adopt a variational approach to the reconstruction problem and minimize a data fidelity term subject to a Tikhonov-like (continuous domain) L2-regularization to obtain the continuous-space solution. We present theoretical justification for the minimization of this cost functional and show that the globally minimal continuous-space solution belongs to a shift-invariant space generated by a function (generalized B-spline) that is generally not bandlimited. When the sampling is ideal, we recover some of the classical smoothing spline estimators. The optimal reconstruction space is characterized by a condition that links the generating function to the regularization operator and implies the existence of a B-spline-like basis. To make the scheme practical, we specify the generating functions corresponding to the most popular families of regularization operators (derivatives, iterated Laplacian), as well as a new, generalized one that leads to a new brand of Matérn splines. We conclude the paper by proposing a stochastic interpretation of the reconstruction algorithm and establishing an equivalence with the minimax and minimum mean square error (MMSE/Wiener) solutions of the generalized sampling problem.

We address the problem of denoising images corrupted by multiplicative noise. The noise is assumed to follow a Gamma distribution. Compared with additive noise distortion, the effect of multiplicative noise on the visual quality of images is quite severe. We consider the mean-square error (MSE) cost function and derive an expression for an unbiased estimate of the MSE. The resulting multiplicative noise unbiased risk estimator is referred to as MURE. The denoising operation is performed in the wavelet domain by considering the image-domain MURE. The parameters of the denoising function (typically, a shrinkage of wavelet coefficients) are optimized for by minimizing MURE. We show that MURE is accurate and close to the oracle MSE. This makes MURE-based image denoising reliable and on par with oracle-MSE-based estimates. Analogous to the other popular risk estimation approaches developed for additive, Poisson, and chi-squared noise degradations, the proposed approach does not assume any prior on the underlying noise-free image. We report denoising results for various noise levels and show that the quality of denoising obtained is on par with the oracle result and better than that obtained using some state-of-the-art denoisers.

We address the problem of exact signal recovery in frequency domain optical coherence tomography (FDOCT) systems. Our technique relies on the fact that, in a spectral interferometry setup, the intensity of the total signal reflected from the object is smaller than that of the reference arm. We develop a novel algorithm to compute the reflected signal amplitude from the interferometric measurements. Our technique is non-iterative, non-linear and it leads to an exact solution in the absence of noise. The reconstructed signal is free from artifacts such as the autocorrelation noise that is normally encountered in the conventional inverse Fourier transform techniques. We present results on synthesized data where we have a benchmark for comparing the performance of the technique. We also report results on experimental FDOCT measurements of the retina of the human eye.

Frequency domain optical coherence tomography (FDOCT) is a new technique that is well-suited for fast imaging of biological specimens, as well as non-biological objects. The measurements are in the frequency domain, and the objective is to retrieve an artifact-free spatial domain description of the specimen. In this paper, we develop a new technique for model-based retrieval of spatial domain data from the frequency domain data. We use a piecewise-constant model for the refractive index profile that is suitable for multi-layered specimens. We show that the estimation of the layered structure parameters can be mapped into a harmonic retrieval problem, which enables us to use high-resolution spectrum estimation techniques. The new technique that we propose is efficient and requires few measurements. We also analyze the effect of additive measurement noise on the algorithm performance. The experimental results show that the technique gives highly accurate parameter estimates. For example, at 25dB signal-to-noise ratio, the mean square error in the position estimate is about 0.01% of the actual value.

Complex wavelet transforms offer the opportunity to perform directional and coherent processing based on the local magnitude and phase of signals and images. Although denoising, segmentation, and image enhancement are significantly improved using complex wavelets, the redundancy of most current transforms hinders their application in compression and related problems. In this paper we introduce a new orthonormal complex wavelet transform with no redundancy for both real- and complex-valued signals. The transform's filterbank features a real lowpass filter and two complex highpass filters arranged in a critically sampled, three-band structure. Placing symmetry and orthogonality constraints on these filters, we find that each highpass filter can be factored into a real highpass filter followed by an approximate Hilbert transform filter.

A new method based on the Young-Laplace equation for measuring contact angles and surface tensions is presented. In this approach, a first-order perturbation technique helps to analytically solve the Young-Laplace equation according to photographic images of axisymmetric sessile drops. When appropriate, the calculated drop contour is extended by mirror symmetry so that the reflection of the drop in the substrate allows the position of the contact points to be detected. To keep a wide range of applicability, the drop's profile is not discretised; instead, an optimisation of an advanced image-energy term fits an approximation of the Young-Laplace equation to the drop boundaries. In addition, cubic B-spline interpolation is applied to the image of the drop to reach subpixel resolution. To demonstrate the method's accuracy, simulated drops as well as images of liquid coal ash slags were analysed. Thanks to the high-quality image interpolation model and the image-energy term, the experiments demonstrated robust measurements over a wide variety of image types and qualities. The method was implemented in Java and is freely available [A.F. Stalder, LBADSA, Biomedical Imaging Group, EPFL: download link].

This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty.
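The trade-off between the synthesis functions surveyed here is easy to observe numerically. A small illustration (not from the chapter) comparing nearest-neighbor, linear, and cubic spline interpolation of sin sampled at unit spacing, using SciPy's `interp1d`:

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.arange(10)             # regularly sampled data, unit spacing
y = np.sin(x)
xf = np.linspace(0, 9, 181)   # fine evaluation grid
errors = {}
for kind in ("nearest", "linear", "cubic"):
    f = interp1d(x, y, kind=kind)
    errors[kind] = np.max(np.abs(f(xf) - np.sin(xf)))
# Higher approximation order yields a smaller maximal error:
# square pulse > hat function > cubic spline.
print(errors)
```

The maximal error drops by roughly an order of magnitude at each step, which is the practical face of the approximation-order argument made in the text.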

An interpolation model is a necessary ingredient of intensity-based registration methods. The properties of such a model depend entirely on its basis function, which has been traditionally characterized by features such as its order of approximation and its support. However, as has been recently shown, these features are blind to the amount of registration bias created by the interpolation process alone; an additional requirement that has been named constant-variance interpolation is needed to remove this bias.

In this paper, we present a theoretical investigation of the role of the interpolation basis in a registration context. Contrarily to published analyses, ours is deterministic; it nevertheless leads to the same conclusion, which is that constant-variance interpolation is beneficial to image registration.

In addition, we propose a novel family of interpolation bases that can have any desired order of approximation while maintaining the constant-variance property. Our family includes every constant-variance basis we know of. It is described by an explicit formula that contains two free functional terms: an arbitrary 1-periodic binary function that takes values from {-1, 1}, and another arbitrary function that must satisfy the partition of unity. These degrees of freedom can be harnessed to build many family members for a given order of approximation and a fixed support. We provide the example of a symmetric basis with a second order of approximation that is supported over [-3/2, 3/2]; this support is one unit shorter than that of a previously published basis of identical order.

Based on the theory of approximation, this paper presents a unified analysis of interpolation and resampling techniques. An important issue is the choice of adequate basis functions. We show that, contrary to the common belief, those that perform best are not interpolating. In contrast to traditional interpolation, we call their use generalized interpolation; they involve a prefiltering step when correctly applied. We explain why the approximation order inherent in any basis function is important to limit interpolation artifacts. The decomposition theorem states that any basis function endowed with an order of approximation can be expressed as the convolution of a B-spline of the same order with another function that has none. This motivates the use of splines and spline-based functions as a tunable way to keep artifacts in check without any significant cost penalty. We discuss implementation and performance issues, and we provide experimental evidence to support our claims.
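The prefiltering step is what makes generalized interpolation work in practice: the samples are first converted into B-spline coefficients so that the interpolation condition holds exactly at the grid points. A minimal SciPy sketch (illustrative, not the paper's code):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.random((16, 16))

# Prefilter: convert the samples into cubic B-spline coefficients.
coef = ndimage.spline_filter(img, order=3)

# Resample at the original grid points from the coefficients.
yy, xx = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
out = ndimage.map_coordinates(coef, [yy, xx], order=3,
                              prefilter=False, mode="mirror")

# Generalized interpolation reproduces the samples exactly; applying the
# cubic B-spline directly to the samples (no prefilter) would blur them.
print(np.max(np.abs(out - img)))
```

Skipping the prefilter and weighting the raw samples by the B-spline is the non-interpolating shortcut that the paper warns against; the residual it leaves is easy to measure with the same code.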

The most essential ingredient of interpolation is its basis function. We have shown in previous papers that this basis need not necessarily be interpolating to achieve good results. On the contrary, several recent studies have confirmed that non-interpolating bases, such as B-splines and O-moms, perform best. This opens up a much wider choice of basis functions. In this paper, we give the designer the tools to characterize this enlarged space of functions. In particular, the designer can specify up-front the four most important parameters for image processing: degree, support, regularity, and order. The theorems presented here then allow the design to be refined by adjusting additional coefficients that can be selected freely, without interfering with the main design parameters.

The central theme of this pair of papers (Parts I and II in this issue) is self-similarity, which is used as a bridge for connecting splines and fractals. The first part of the investigation is deterministic, and the context is that of L-splines; these are defined in the following terms: s(t) is a cardinal L-spline iff L{s(t)} = Σ_{k∈Z} a[k] δ(t−k), where L is a suitable pseudodifferential operator. Our starting point for the construction of “self-similar” splines is the identification of the class of differential operators L that are both translation and scale invariant. This results in a two-parameter family of generalized fractional derivatives, ∂_τ^γ, where γ is the order of the derivative and τ is an additional phase factor. We specify the corresponding L-splines, which yield an extended class of fractional splines. The operator ∂_τ^γ is used to define a scale-invariant energy measure—the squared L2-norm of the γth derivative of the signal—which provides a regularization functional for interpolating or fitting the noisy samples of a signal. We prove that the corresponding variational (or smoothing) spline estimator is a cardinal fractional spline of order 2γ, which admits a stable representation in a B-spline basis. We characterize the equivalent frequency response of the estimator and show that it closely matches that of a classical Butterworth filter of order 2γ. We also establish a formal link between the regularization parameter λ and the cutoff frequency of the smoothing spline filter: ω0 ≅ λ^(−1/(2γ)). Finally, we present an efficient computational solution to the fractional smoothing spline problem: It uses the fast Fourier transform and takes advantage of the multiresolution properties of the underlying basis functions.
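The Butterworth-like characterization makes a direct FFT implementation natural. The sketch below applies the equivalent frequency response 1/(1 + λ|ω|^(2γ)) to a periodic signal; it illustrates the filter characterization (with cutoff ω0 = λ^(−1/(2γ))) rather than the paper's exact B-spline-domain algorithm:

```python
import numpy as np

def fractional_smoothing(x, lam, gamma):
    """Apply the smoothing-spline equivalent frequency response
    H(w) = 1 / (1 + lam * |w|^(2*gamma)) to x via the FFT
    (periodic boundary conditions; illustrative sketch)."""
    N = len(x)
    w = 2 * np.pi * np.fft.fftfreq(N)
    H = 1.0 / (1.0 + lam * np.abs(w) ** (2 * gamma))
    return np.real(np.fft.ifft(np.fft.fft(x) * H))
```

Since H(0) = 1, constant (DC) signals pass unchanged, while all other frequency components are attenuated; increasing λ lowers the cutoff ω0 = λ^(−1/(2γ)) and smooths more aggressively.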

We introduce an extended class of cardinal L*L-splines, where L is a pseudo-differential operator satisfying some admissibility conditions. We show that the L*L-spline signal interpolation problem is well posed and that its solution is the unique minimizer of the spline energy functional, ||Ls||²_{L2}, subject to the interpolation constraint. Next, we consider the corresponding regularized least squares estimation problem, which is more appropriate for dealing with noisy data. The criterion to be minimized is the sum of a quadratic data term, which forces the solution to be close to the input samples, and a “smoothness” term that privileges solutions with small spline energies. Here, too, we find that the optimal solution, among all possible functions, is a cardinal L*L-spline. We show that this smoothing spline estimator has a stable representation in a B-spline-like basis and that its coefficients can be computed by digital filtering of the input signal. We describe an efficient recursive filtering algorithm that is applicable whenever the transfer function of L is rational (which corresponds to the case of exponential splines).

We justify these algorithms statistically by establishing an equivalence between L*L smoothing splines and the minimum mean square error (MMSE) estimation of a stationary signal corrupted by white Gaussian noise. In this model-based formulation, the optimum operator L is the whitening filter of the process, and the regularization parameter is proportional to the noise variance. Thus, the proposed formalism yields the optimal discretization of the classical Wiener filter, together with a fast recursive algorithm. It extends the standard Wiener solution by providing the optimal interpolation space. We also present a Bayesian interpretation of the algorithm.

Causal exponentials play a fundamental role in classical system theory. Starting from those elementary building blocks, we propose a complete and self-contained signal processing formulation of exponential splines defined on a uniform grid. We specify the corresponding B-spline basis functions and investigate their reproduction properties (Green function and exponential polynomials); we also characterize their stability (Riesz bounds). We show that the exponential B-spline framework allows an exact implementation of continuous-time signal processing operators including convolution, differential operators, and modulation, by simple processing in the discrete B-spline domain. We derive efficient filtering algorithms for multiresolution signal extrapolation and approximation, extending earlier results for polynomial splines. Finally, we present a new asymptotic error formula that predicts the magnitude and the Nth-order decay of the L2-approximation error as a function of the knot spacing T.
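Since an exponential B-spline of order N is the N-fold convolution of first-order atoms e^(α_n t) supported on [0, T), it is easy to construct numerically. A rough discretized sketch for illustration only (the paper's algorithms are exact and recursive, not sampled):

```python
import numpy as np

def exp_bspline(alphas, T=1.0, dt=1e-3):
    """N-th order exponential B-spline, built as the convolution of the
    causal first-order atoms e^(a*t) on [0, T) (numerical sketch)."""
    t = np.arange(0, T, dt)
    b = np.exp(alphas[0] * t)
    for a in alphas[1:]:
        b = np.convolve(b, np.exp(a * t)) * dt  # dt keeps the integral scaling
    return b
```

With two zero poles, alphas = [0, 0], this reduces to the classical linear B-spline (the hat function on [0, 2T)), which gives a convenient sanity check of the scaling.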

We introduce an extended class of cardinal L-splines where L is a pseudo-differential—but not necessarily local—operator satisfying some admissibility conditions. This family is quite general and includes a variety of standard constructions: the polynomial, elliptic, exponential, and fractional splines. In order to fit such splines to the noisy samples of a signal, we specify a corresponding smoothing spline problem which involves an L-semi-norm regularization term. We prove that the optimal solution, among all possible functions, is a cardinal L*L-spline which has a stable representation in a B-spline-like basis. We show that the coefficients of this spline estimator can be computed by digital filtering of the input samples; we also describe an efficient recursive filtering algorithm that is applicable whenever the transfer function of L is rational.

We justify this procedure statistically by establishing an equivalence between L*L smoothing splines and the MMSE (minimum mean square error) estimation of a stationary signal corrupted by white Gaussian noise. In this model-based formulation, the optimum operator L is the whitening filter of the process, and the regularization parameter is proportional to the noise variance. Thus, the proposed formalism yields the optimal discretization of the classical Wiener filter, together with a fast recursive algorithm. It extends the standard Wiener solution by providing the optimal interpolation space. We also present a Bayesian interpretation of such spline estimators.

The LeGall 5/3 and Daubechies 9/7 filters have risen to special prominence because they were selected for inclusion in the JPEG2000 standard. Here, we determine their key mathematical features: Riesz bounds, order of approximation, and regularity (Hölder and Sobolev). We give approximation theoretic quantities such as the asymptotic constant for the L2 error and the angle between the analysis and synthesis spaces which characterizes the loss of performance with respect to an orthogonal projection. We also derive new asymptotic error formulæ that exhibit bound constants that are proportional to the magnitude of the first nonvanishing moment of the wavelet. The Daubechies 9/7 stands out because it is very close to orthonormal, but this turns out to be slightly detrimental to its asymptotic performance when compared to other wavelets with four vanishing moments.
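Some of these features can be probed numerically. For instance, the vanishing moments show up as vanishing discrete moments of the analysis highpass filters. A quick check with PyWavelets, assuming (as is commonly the case) that pywt's `bior4.4` and `bior2.2` implement the 9/7 and 5/3 filter pairs:

```python
import numpy as np
import pywt

def n_vanishing_moments(filt, tol=1e-8):
    """Count vanishing discrete moments sum_n n^k g[n] of a highpass filter."""
    g = np.asarray(filt)
    n = np.arange(len(g))
    k = 0
    while abs(np.sum(n ** k * g)) < tol:
        k += 1
    return k

w97 = pywt.Wavelet("bior4.4")   # assumed to be the Daubechies (CDF) 9/7 pair
w53 = pywt.Wavelet("bior2.2")   # assumed to be the LeGall 5/3 pair
print(n_vanishing_moments(w97.dec_hi), n_vanishing_moments(w53.dec_hi))
```

The four vanishing moments of the 9/7 analysis wavelet, versus two for the 5/3, are exactly the quantities entering the asymptotic error constants discussed in the abstract.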

In this paper, we revisit wavelet theory starting from the representation of a scaling function as the convolution of a B-spline (the regular part of it) and a distribution (the irregular or residual part). This formulation leads to some new insights on wavelets and makes it possible to rederive the main results of the classical theory—including some new extensions for fractional orders—in a self-contained, accessible fashion. In particular, we prove that the B-spline component is entirely responsible for five key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multiscale differentiation property, and smoothness (regularity) of the basis functions. We also investigate the interaction of wavelets with differential operators giving explicit time domain formulas for the fractional derivatives of the basis functions. This allows us to specify a corresponding dual wavelet basis and helps us understand why the wavelet transform provides a stable characterization of the derivatives of a signal. Additional results include a new peeling theory of smoothness, leading to the extended notion of wavelet differentiability in the Lp-sense and a sharper theorem stating that smoothness implies order.

We show that a multi-dimensional scaling function of order γ (possibly fractional) can always be represented as the convolution of a polyharmonic B-spline of order γ and a distribution with a bounded Fourier transform which has neither order nor smoothness. The presence of the B-spline convolution factor explains all key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multi-scale differentiation property, and smoothness of the basis functions. The B-spline factorization also gives new insights on the stability of wavelet bases with respect to differentiation. Specifically, we show that there is a direct correspondence between the process of moving a B-spline factor from one side to another in a pair of biorthogonal scaling functions and the exchange of fractional integrals/derivatives on their wavelet counterparts. This result yields two “eigen-relations” for fractional differential operators that map biorthogonal wavelet bases into other stable wavelet bases. This formulation provides a better understanding as to why the Sobolev/Besov norm of a signal can be measured from the ℓp-norm of its rescaled wavelet coefficients. Indeed, the key condition for a wavelet basis to be an unconditional basis of the Besov space B_q^s(L_p(R^d)) is that the s-order derivative of the wavelet be in Lp.

Recently, we came up with two interesting generalizations of polynomial splines by extending the degree of the generating functions to both real and complex exponents. While these may qualify as exotic constructions at first sight, we show here that both types of splines (fractional and complex) play a truly fundamental role in wavelet theory and that they lead to a better understanding of what wavelets really are.

To this end, we first revisit wavelet theory starting from the representation of a scaling function as the convolution of a B-spline (the regular part of it) and a distribution (the irregular or residual part). This formulation leads to some new insights on wavelets and makes it possible to re-derive the main results of the classical theory—including some new extensions for fractional orders—in a self-contained, accessible fashion. In particular, we prove that the B-spline component is entirely responsible for five key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multi-scale differentiation, and smoothness (regularity) of the basis functions.

Second, we show that any scaling function can be expanded as a sum of harmonic splines (a particular subset of the splines with complex exponents); these play essentially the same role here as the Fourier exponentials do for periodic signals. This harmonic expansion provides an explicit time-domain representation of scaling functions and wavelets; it also explains their fractal nature. Remarkably, truncating the expansion preserves the essential multiresolution property (two-scale relation). Keeping the first term alone yields a fractional-spline approximation that captures most of the important wavelet features; e.g., its general shape and smoothness.

We introduce the concept of fractional wavelets which extends the conventional theory to non-integer orders. This allows for the construction of new wavelet bases that are indexed by a continuously-varying order parameter, as opposed to an integer. An essential feature of the method is to gain control over the key wavelet properties (regularity, time-frequency localization, etc.). Practically, this translates into the fact that all important wavelet parameters are adjustable in a continuous fashion so that the new basis functions can be fine-tuned for the application at hand. We present some specific examples of wavelets (fractional splines) and investigate the main implications of the fractional order property. In particular, we prove that these wavelets essentially behave like fractional derivative operators which makes them good candidates for the analysis and synthesis of fractal-like processes. We also consider non-separable extensions to quincunx lattices which are well suited for image processing. Finally, we deal with the practical aspect of the evaluation of these transforms and present a fast implementation based on the FFT.

Compact support is undoubtedly one of the wavelet properties that is given the greatest weight both in theory and applications. It is usually believed to be essential for two main reasons: (1) to have fast numerical algorithms, and (2) to have good time or space localization properties. Here, we argue that this constraint is unnecessarily restrictive and that fast algorithms and good localization can also be achieved with non-compactly supported basis functions. By dropping the compact support requirement, one gains in flexibility. This opens up new perspectives such as fractional wavelets whose key parameters (order, regularity, etc.) are tunable in a continuous fashion. To make our point, we draw an analogy with the closely related task of image interpolation. This is an area where it was believed until very recently that interpolators should be designed to be compactly supported for best results. Today, there is compelling evidence that non-compactly supported interpolators (such as splines, among others) provide the best cost/performance tradeoff.

In the first part, we present the theory of fractional splines; an extension of the polynomial splines for non-integer degrees. Their basic constituents are piecewise power functions of degree α. The corresponding B-splines are obtained through a localization process similar to the classical one, replacing finite differences by fractional differences. We show that the fractional B-splines share virtually all the properties of the classical B-splines, including the two-scale relation, and can therefore be used to define new wavelet bases with a continuously varying order parameter. We discuss some of their remarkable properties; in particular, the fact that the fractional spline wavelets behave like fractional derivatives of order α + 1.

In the second part, we turn to applications. We first describe a fast implementation of the fractional wavelet transform, which is essential to make the method practical. We then present an application of fractional splines to tomographic reconstruction where we take advantage of explicit formulas for computing the fractional derivatives of splines. We also make the connection with ridgelets. Finally, we consider the use of fractional wavelets for the detection and localization of brain activation in fMRI sequences. Here, we take advantage of the continuously varying order parameter which allows us to fine-tune the localization properties of the basis functions.

We extend Schoenberg's family of polynomial splines with uniform knots to all fractional degrees α > -1. These splines, which involve linear combinations of the one-sided power functions x_+^α = max(0, x)^α, belong to L1 and are α-Hölder continuous for α > 0. We construct the corresponding B-splines by taking fractional finite differences and provide an explicit characterization in both time and frequency domains. We show that these functions satisfy most of the properties of the traditional B-splines, including the convolution property, and a generalized fractional differentiation rule that involves finite differences only. We characterize the decay of the B-splines which are not compactly supported for non-integral α's. Their most astonishing feature (in reference to the Strang-Fix theory) is that they have a fractional order of approximation α + 1 while they reproduce the polynomials of degree [α]. For α > 1/2, they satisfy all the requirements for a multiresolution analysis of L2 (Riesz bounds, two scale relation) and may therefore be used to build new families of wavelet bases with a continuously-varying order parameter. Our construction also yields symmetrized fractional B-splines which provide the connection with Duchon's general theory of radial (m,s)-splines (including thin-plate splines). In particular, we show that the symmetric version of our splines can be obtained as solution of a variational problem involving the norm of a fractional derivative. (Front cover).
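The construction is explicit enough to code directly: the causal fractional B-spline is the (α+1)-st fractional finite difference of the one-sided power function, normalized by Γ(α+1). A sketch of this time-domain formula (the grids and tolerances below are illustrative choices of mine):

```python
import numpy as np
from scipy.special import binom, gamma

def frac_bspline(t, alpha):
    """Causal fractional B-spline of degree alpha:
    (1/Gamma(alpha+1)) * sum_k (-1)^k C(alpha+1, k) (t-k)_+^alpha."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for k in range(int(np.ceil(t.max())) + 2):
        out += (-1.0) ** k * binom(alpha + 1, k) * np.maximum(t - k, 0.0) ** alpha
    return out / gamma(alpha + 1)

# Degree 1 recovers the classical hat function; fractional degrees
# interpolate between the classical splines and are no longer
# compactly supported (they decay algebraically instead).
print(frac_bspline(np.array([0.5, 1.0, 1.5]), 1.0))
```

For integer α the binomial series terminates and the familiar compactly supported B-splines come back; for fractional α the unit integral is only approached by summing the slowly decaying tail.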

Wavelets and radial basis functions (RBF) are two rather distinct ways of representing signals in terms of shifted basis functions. An essential aspect of RBF, which makes the method applicable to non-uniform grids, is that the basis functions, unlike wavelets, are non-local—in addition, they do not involve any scaling at all. Despite these fundamental differences, we show that the two types of representation are closely connected. We use the linear splines as motivating example. These can be constructed by using translates of the one-sided ramp function (which is not localized), or, more conventionally, by using the shifts of a linear B-spline. This latter function, which is the prototypical example of a scaling function, can be obtained by localizing the one-sided ramp function using finite differences. We then generalize the concept and identify the whole class of self-similar radial basis functions that can be localized to yield conventional multiresolution wavelet bases. Conversely, we prove that, for any compactly supported scaling function φ(x), there exists a one-sided central basis function ρ+(x) that spans the same multiresolution subspaces. The central property is that the multiresolution bases are generated by simple translation of ρ+, without any dilation.

We extend Schoenberg's B-splines to all fractional degrees α > -1/2. These splines are constructed using linear combinations of the integer shifts of the power functions x_+^α (one-sided) or |x|_*^α (symmetric); in each case, they are α-Hölder continuous for α > 0. They satisfy most of the properties of the traditional B-splines; in particular, the Riesz basis condition and the two-scale relation, which makes them suitable for the construction of new families of wavelet bases. What is especially interesting from a wavelet perspective is that the fractional B-splines have a fractional order of approximation (α+1), while they reproduce the polynomials of degree [α]. We show how they yield continuous-order generalizations of the orthogonal Battle-Lemarié wavelets and of the semi-orthogonal B-spline wavelets. As α increases, these latter wavelets tend to be optimally localized in time and frequency in the sense specified by the uncertainty principle. The corresponding analysis wavelets also behave like fractional differentiators; they may therefore be used to whiten fractional Brownian motion processes.

We present new quantitative results for the characterization of the L2-error of wavelet-like expansions as a function of the scale a. This yields an extension as well as a simplification of the asymptotic error formulas that have been published previously. We use our bound determinations to compare the approximation power of various families of wavelet transforms. We present explicit formulas for the leading asymptotic constant for both splines and Daubechies wavelets. For a specified approximation error, this allows us to predict the sampling rate reduction that can be obtained by using splines instead of Daubechies wavelets. In particular, we prove that the gain in sampling density (splines vs. Daubechies) converges to π as the order goes to infinity.

We extend Schoenberg's family of polynomial splines with uniform knots to all fractional degrees α > -1/2. These splines, which involve linear combinations of the one-sided power functions x_+^α = max{0, x}^α, are α-Hölder continuous for α ≥ 0. We construct the corresponding B-splines by taking fractional finite differences and provide an explicit characterization in both time and frequency domains. We show that these functions satisfy most of the properties of the traditional B-splines, including the convolution property, and a generalized fractional differentiation rule that involves finite differences only. We characterize the decay of the fractional B-splines which are not compactly supported for non-integral α's.

We develop a spline calculus for dealing with fractional derivatives. After a brief review of fractional splines, we present the main formulas for computing the fractional derivatives of the underlying basis functions. In particular, we show that the γth fractional derivative of a B-spline of degree α (not necessarily integer) is given by the γth fractional difference of a B-spline of degree α-γ. We use these results to derive an improved version of the filtered backprojection algorithm for tomographic reconstruction. The projection data is first interpolated with splines; the continuous model is then used explicitly for an exact implementation of the filtering and backprojection steps.
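The derivative rule turns continuous differentiation into discrete filtering with binomial weights: ∂^γ β_+^α = Δ_+^γ β_+^(α−γ). The sketch below is my own verification code (not the paper's implementation); it checks the identity numerically in the integer case γ = 1 on the quadratic B-spline:

```python
import numpy as np
from scipy.special import binom, gamma

def frac_bspline(t, alpha):
    """Causal fractional B-spline via fractional finite differences."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for k in range(int(np.ceil(t.max())) + 2):
        out += (-1.0) ** k * binom(alpha + 1, k) * np.maximum(t - k, 0.0) ** alpha
    return out / gamma(alpha + 1)

def spline_derivative(t, alpha, gam, kmax=20):
    """gamma-th derivative of the degree-alpha B-spline, computed as the
    gamma-th fractional difference of the degree-(alpha-gamma) B-spline:
    sum_k (-1)^k C(gam, k) beta_{alpha-gam}(t - k)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for k in range(kmax):
        out += (-1.0) ** k * binom(gam, k) * frac_bspline(t - k, alpha - gam)
    return out

t = np.linspace(0.0, 3.0, 1201)
d = spline_derivative(t, 2.0, 1.0)           # derivative of quadratic B-spline
num = np.gradient(frac_bspline(t, 2.0), t)   # numerical reference
print(np.max(np.abs(d - num)))
```

For γ = 1 the binomial series reduces to the ordinary finite difference β¹(t) − β¹(t−1); for fractional γ the same loop simply runs over the slowly decaying binomial weights.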

A signal processing method for estimating a frequency domain representation of a signal from a series of samples distorted by an instrument function, the method comprising: obtaining the series of samples; obtaining a set of coefficients that fit a set of basis functions to a complex exponential function, wherein the set of basis functions comprises a plurality of basis functions each defined by a shifted version of the instrument function in a signal domain; and estimating the frequency domain representation of the signal based on the series of samples and the coefficients. The method is one wherein the estimate of the instrument function is based on a characterisation of the instrument function in the frequency domain at frequencies associated with the complex exponential function.

This paper addresses the problem of sampling non-bandlimited signals within the Finite Rate of Innovation (FRI) setting.
We had previously shown that, by using sampling kernels whose integer span contains specific exponentials (generalized Strang-Fix conditions), it is possible to devise non-iterative, fast reconstruction algorithms from very low-rate samples. Yet, the accuracy and sensitivity to noise of these algorithms are highly dependent on these exponential reproducing kernels --- actually, on the exponentials that they reproduce.
Hence, our first contribution here is to provide clear guidelines on how to choose the sampling kernels optimally, in such a way that the reconstruction quality is maximized in the presence of noise. The optimality of these kernels is validated by comparison with the Cramér-Rao lower bound (CRLB).
Our second contribution is to relax the exact exponential reproduction requirement. Instead, we demonstrate that arbitrary sampling kernels can reproduce the "best" exponentials with quite high accuracy in general, and that applying the exact FRI algorithms in this approximate context results in near-optimal reconstruction accuracy for practical noise levels. Essentially, we propose a universal extension of the FRI approach to arbitrary sampling kernels.
Numerical results checked against the CRLB validate the various contributions of the paper and in particular highlight the ability of arbitrary sampling kernels to be used in FRI algorithms.
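At the heart of these non-iterative reconstructions is the annihilating-filter (Prony) step: once the samples have been mapped to moments s[m] = Σ_k a_k u_k^m, the innovation parameters u_k are the roots of a filter that annihilates the moment sequence. A generic noiseless sketch of this step (the kernel-dependent mapping from samples to moments is omitted here):

```python
import numpy as np

def prony_locations(s, K):
    """Annihilating-filter step of FRI recovery: from moments
    s[m] = sum_k a_k u_k^m (m = 0..len(s)-1), recover the K parameters u_k."""
    N = len(s)
    # Rows [s[m+K], s[m+K-1], ..., s[m]] so that T @ h = 0 encodes
    # sum_i h[i] s[m-i] = 0 for an annihilating filter h of length K+1.
    T = np.array([s[m:m + K + 1][::-1] for m in range(N - K)])
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()          # null vector = annihilating filter
    return np.roots(h)         # its roots are the u_k
```

In the noisy setting discussed in the paper, the same step is stabilized by subspace techniques (e.g., taking the null vector of a larger, rank-reduced Toeplitz matrix), which is where the choice of exponential reproducing kernel matters most.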

The theory of Finite Rate of Innovation (FRI) broadened the traditional sampling paradigm to certain classes of parametric signals. In the presence of noise, the original procedures are not as stable, and a different treatment is needed.
In this paper we review the ideal FRI sampling scheme and some of the existing techniques to combat noise. We then present alternative denoising methods for the case of exponential reproducing kernels. We first present variations of existing subspace-based approaches. We also discuss how to design exponential reproducing kernels that are most robust to noise.

Many experimental paradigms in biology aim at studying the response to coordinated stimuli. In dynamic imaging experiments, the observed data is often not straightforward to interpret and not directly measurable in a quantitative fashion. Consequently, the data is typically preprocessed in an ad hoc fashion and the results subjected to a statistical inference at the level of a population. We propose a new framework for analyzing time-lapse images that exploits some a priori knowledge on the type of temporal response and takes advantage of the spatial correlation of the data. This is achieved by processing the data in the wavelet domain and expressing the time course of each wavelet coefficient by a linear model. We end up with a statistical map in the spatial domain for the contrast of interest (i.e., the stimulus response). The feasibility of the method is demonstrated by an example of intrinsic microscopy imaging of the mouse brain during coordinated sensory stimulation.

Optical imaging is a powerful technique to map brain function in animals. In this study, we consider in vivo optical imaging of the murine olfactory bulb, using an intrinsic signal and a genetically expressed activity reporter fluorescent protein (synaptopHfluorin). The aim is to detect odor-evoked activations that occur in small spherical structures of the olfactory bulb called glomeruli. We propose a new way of analyzing this kind of data that combines a linear model (LM) fitting along the temporal dimension, together with a discrete wavelet transform (DWT) along the spatial dimensions. We show that relevant regressors for the LM are available for both types of optical signals. In addition, the spatial wavelet transform allows us to exploit spatial correlation at different scales, and in particular to extract activation patterns at the expected size of glomeruli. Our framework also provides a statistical significance for every pixel in the activation maps and it has strong type I error control.
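The two ingredients combine naturally: transform each frame spatially, then fit the linear model to the time course of every wavelet coefficient. A toy sketch with PyWavelets and ordinary least squares (the synthetic data, regressors, wavelet, and single decomposition level are all illustrative choices; the statistical testing step is omitted):

```python
import numpy as np
import pywt

def fit_lm_wavelet(frames, X):
    """OLS fit of y_t = X @ beta + noise for the time course of every
    approximation coefficient of a one-level 2-D DWT of each frame.
    frames: (T, H, W) image stack; X: (T, p) design matrix."""
    cA = np.stack([pywt.dwt2(f, "haar")[0] for f in frames])  # (T, h, w)
    T, h, w = cA.shape
    beta, *_ = np.linalg.lstsq(X, cA.reshape(T, -1), rcond=None)
    return beta.reshape(X.shape[1], h, w)

# Synthetic experiment: a fixed spatial pattern modulated by a known
# time course, on top of a constant baseline.
rng = np.random.default_rng(0)
pattern = rng.random((8, 8))
response = np.sin(np.linspace(0, np.pi, 12))       # stimulus regressor
frames = response[:, None, None] * pattern + 1.0   # baseline = 1
X = np.column_stack([response, np.ones(12)])
beta = fit_lm_wavelet(frames, X)                   # beta[0]: activation map
```

Because the DWT is linear, the fitted coefficient map beta[0] is exactly the wavelet transform of the underlying activation pattern, which is what makes scale-selective extraction of glomerulus-sized structures possible.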

Polyharmonic B-splines are excellent basis functions to build multidimensional wavelet bases. These functions are nonseparable, multidimensional generators that are localized versions of radial basis functions. We show that Rabut's elementary polyharmonic B-splines do not converge to a Gaussian as the order parameter increases, as opposed to their separable B-spline counterparts. Therefore, we introduce a more isotropic localization operator that guarantees this convergence, resulting in the isotropic polyharmonic B-splines. Next, we focus on the two-dimensional quincunx subsampling scheme. This configuration is of particular interest for image processing, because it yields a finer scale progression than the standard dyadic approach. However, up until now, the design of appropriate filters for the quincunx scheme has mainly been done using the McClellan transform. In our approach, we start from the scaling functions, which are the polyharmonic B-splines and, as such, explicitly known, and we derive a family of polyharmonic spline wavelets corresponding to different flavors of the semi-orthogonal wavelet transform; e.g., orthonormal, B-spline, and dual. The filters are automatically specified by the scaling relations satisfied by these functions. We prove that the isotropic polyharmonic B-spline wavelet converges to a combination of four Gabor atoms, which are well separated in the frequency domain. We also show that these wavelets are nearly isotropic and that they behave as an iterated Laplacian operator at low frequencies. We describe an efficient fast Fourier transform-based implementation of the discrete wavelet transform based on polyharmonic B-splines. Finally, we propose a new way to build directional wavelets using modified polyharmonic B-splines.
This approach benefits from the previous results (construction of the wavelet filters, fast implementation,…) but allows one to recover directional information about the edges from the (complex-valued) wavelet coefficients.
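The Fourier-domain closed form of these scaling functions is what makes the FFT-based implementation practical. The sketch below evaluates Rabut's elementary (non-isotropic) variant, whose localization operator is the standard discrete Laplacian; the isotropic variant of the paper would substitute a more isotropic discretization:

```python
import numpy as np

def polyharmonic_bspline_ft(w1, w2, gamma):
    """Fourier transform of the elementary 2-D polyharmonic B-spline of
    order gamma: (V(w) / ||w||^2)^(gamma/2), with the localization
    V(w) = 4 (sin^2(w1/2) + sin^2(w2/2)); the value at w = 0 is 1."""
    V = 4.0 * (np.sin(np.asarray(w1) / 2) ** 2 + np.sin(np.asarray(w2) / 2) ** 2)
    n2 = np.asarray(w1) ** 2 + np.asarray(w2) ** 2
    return np.where(n2 > 0, (V / np.maximum(n2, 1e-300)) ** (gamma / 2), 1.0)

# Near the origin the localization cancels ||w||^2, so the value is ~1;
# away from the origin, higher orders decay faster (better localization).
print(polyharmonic_bspline_ft(1e-6, 0.0, 2))     # ~1
print(polyharmonic_bspline_ft(np.pi, np.pi, 2))  # ~0.41
print(polyharmonic_bspline_ft(np.pi, np.pi, 6))  # smaller
```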

The approximate behavior of wavelets as differential operators is often considered as one of their most fundamental properties. In this paper, we investigate how we can further improve on the wavelet's behavior as a differentiator. In particular, we propose semi-orthogonal differential wavelets. The semi-orthogonality condition ensures that wavelet spaces are mutually orthogonal. The operator, hidden within the wavelet, can be chosen as a generalized differential operator ∂_τ^γ; i.e., a γ-th order derivative with shift τ. Both the order of derivation and the shift can be fractional. Our design leads us naturally to select the fractional B-splines as scaling functions. By putting the differential wavelet in the perspective of a derivative of a smoothing function, we find that signal singularities are compactly characterized by at most two local extrema of the wavelet coefficients in each subband. This property could be beneficial for signal analysis using wavelet bases. We show that this wavelet transform can be efficiently implemented using FFTs.
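For intuition, the frequency-domain action of the operator ∂_τ^γ, multiplication by a fractional power of (jω) together with a shift, can be sketched with an FFT on periodic signals. This illustrates the operator itself, not the full wavelet decomposition:

```python
import numpy as np

def fractional_derivative(f, gamma, tau=0.0):
    """Apply the shifted fractional derivative via the FFT: multiply by
    |w|^gamma * exp(1j*sign(w)*gamma*pi/2) * exp(-1j*w*tau) in the
    frequency domain (signal of period 2*pi sampled on N points)."""
    N = len(f)
    w = np.fft.fftfreq(N, d=1.0 / N)  # integer frequencies for period 2*pi
    mult = np.abs(w) ** gamma * np.exp(1j * np.sign(w) * gamma * np.pi / 2)
    mult *= np.exp(-1j * w * tau)
    return np.fft.ifft(np.fft.fft(f) * mult).real

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
d1 = fractional_derivative(np.sin(x), gamma=1.0)  # ordinary derivative: cos(x)
dh = fractional_derivative(np.sin(x), gamma=0.5)  # half-derivative: sin(x + pi/4)
print(np.max(np.abs(d1 - np.cos(x))))             # close to zero
```

The half-derivative of a sinusoid is the same sinusoid advanced by γπ/2 in phase, which the sketch reproduces to machine precision.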

We propose the use of polyharmonic B-splines to build non-separable two-dimensional wavelet bases. The central idea is to base our design on the isotropic polyharmonic B-splines, a new type of polyharmonic B-splines that do converge to a Gaussian as the order increases. We opt for the quincunx subsampling scheme, which allows us to characterize the wavelet spaces with a single wavelet: the isotropic-polyharmonic B-spline wavelet. Interestingly, this wavelet converges to a combination of four Gabor atoms, which are well separated in the frequency domain. We also briefly discuss our Fourier-based implementation and present some experimental results.

The measurement of brain activity in a noninvasive way is an essential element in modern neurosciences. Modalities such as electroencephalography (EEG) and magnetoencephalography (MEG) recently gained interest, but two classical techniques remain predominant. One of them is positron emission tomography (PET), which is costly and lacks temporal resolution but allows the design of tracers for specific tasks; the other main one is functional magnetic resonance imaging (fMRI), which is more affordable than PET from a technical, financial, and ethical point of view, but which suffers from poor contrast and low signal-to-noise ratio (SNR). For this reason, advanced methods have been devised to perform the statistical analysis of fMRI data.

Recently, we have proposed a new framework for detecting brain activity from fMRI data, which is based on the spatial discrete wavelet transform. The standard wavelet-based approach performs a statistical test in the wavelet domain, and therefore fails to provide a rigorous statistical interpretation in the spatial domain. The new framework provides an “integrated” approach: the data is processed in the wavelet domain (by thresholding wavelet coefficients), and a suitable statistical testing procedure is applied afterwards in the spatial domain. This method is based on conservative assumptions only and has a strong type-I error control by construction. At the same time, it has a sensitivity comparable to that of SPM. Here, we discuss the extension of our algorithm to the redundant discrete wavelet transform, which provides a shift-invariant detection scheme. The key features of our technique are illustrated with experimental results. An implementation of our framework is available as a toolbox (WSPM) for the SPM2 software.
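The two-step logic of the integrated approach, denoising in the wavelet domain followed by testing in the spatial domain, can be sketched as follows. The sketch uses a single-level decimated Haar transform and illustrative threshold values; the actual WSPM thresholds are derived from the framework's conservative bounds, and the shift-invariant extension would combine several spatial shifts:

```python
import numpy as np

def haar2(x):
    """One level of the orthonormal 2-D Haar DWT (rows, then columns)."""
    y = np.concatenate([x[0::2] + x[1::2], x[0::2] - x[1::2]]) / np.sqrt(2)
    return np.concatenate([y[:, 0::2] + y[:, 1::2],
                           y[:, 0::2] - y[:, 1::2]], axis=1) / np.sqrt(2)

def ihaar2(y):
    """Inverse of haar2 (undo columns, then rows)."""
    n = y.shape[1] // 2
    a, d = y[:, :n], y[:, n:]
    x = np.empty_like(y)
    x[:, 0::2], x[:, 1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    m = y.shape[0] // 2
    a, d = x[:m], x[m:]
    out = np.empty_like(x)
    out[0::2], out[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return out

def wspm_detect(beta_map, tau_w, tau_s):
    """Hard-threshold the wavelet coefficients of the parameter map,
    reconstruct, and declare active the pixels whose reconstructed value
    exceeds the spatial threshold (tau_w and tau_s are illustrative)."""
    w = haar2(beta_map)
    rec = ihaar2(np.where(np.abs(w) > tau_w, w, 0.0))
    return rec > tau_s

# Synthetic 8x8 parameter map with an active 4x4 region
rng = np.random.default_rng(2)
truth = np.zeros((8, 8)); truth[2:6, 2:6] = 3.0
noisy = truth + 0.5 * rng.normal(size=(8, 8))
detected = wspm_detect(noisy, tau_w=1.5, tau_s=1.0)
print(detected.astype(int))
```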

In this paper, we use polyharmonic B-splines to build multidimensional wavelet bases. These functions are nonseparable, multidimensional basis functions that are localized versions of radial basis functions. We show that Rabut's elementary polyharmonic B-splines do not converge to a Gaussian as the order parameter increases, as opposed to their separable B-spline counterparts. Therefore, we introduce a more isotropic localization operator that guarantees this convergence, resulting in the isotropic polyharmonic B-splines. Next, we focus on the two-dimensional quincunx subsampling scheme. This configuration is of particular interest for image processing, because it yields a finer scale progression than the standard dyadic approach. However, up until now, the design of appropriate filters for the quincunx scheme has mainly been done using the McClellan transform. In our approach, we start from the scaling functions, which are the polyharmonic B-splines and, as such, explicitly known, and we derive a family of polyharmonic spline wavelets corresponding to different flavors of the semi-orthogonal wavelet transform; e.g., orthonormal, B-spline, and dual. The filters are automatically specified by the scaling relations satisfied by these functions. We prove that the isotropic polyharmonic B-spline wavelet converges to a combination of four Gabor atoms, which are well separated in the frequency domain. We also show that these wavelets are nearly isotropic and that they behave as an iterated Laplacian operator at low frequencies. We describe an efficient fast Fourier transform-based implementation of the discrete wavelet transform based on polyharmonic B-splines.

The dilation matrix associated with the three-dimensional (3-D) face-centered cubic (FCC) sublattice is often considered to be the natural 3-D extension of the two-dimensional (2-D) quincunx dilation matrix. However, we demonstrate that both dilation matrices are of a different nature: while the 2-D quincunx matrix is a similarity transform, the 3-D FCC matrix is not. More generally, we show that it is impossible to obtain a dilation matrix that is a similarity transform and performs downsampling of the Cartesian lattice by a factor of two in more than two dimensions. Furthermore, we observe that the popular 3-D FCC subsampling scheme alternates between three different lattices: Cartesian, FCC, and quincunx. The latter one provides a less isotropic sampling density, a property that should be taken into account to properly orient 3-D data before processing using such a subsampling matrix.
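The dichotomy between the two dilation matrices is easy to verify numerically: a matrix D is a similarity transform exactly when D^T D is a multiple of the identity. The FCC generator below is one common choice of determinant-2 dilation matrix for that lattice (an assumption for illustration):

```python
import numpy as np

def is_similarity(D, tol=1e-12):
    """A matrix D is a similarity transform iff D.T @ D = c * I."""
    G = D.T @ D
    return np.allclose(G, G[0, 0] * np.eye(D.shape[0]), atol=tol)

D_quincunx = np.array([[1, 1], [1, -1]])             # |det| = 2, 2-D quincunx
D_fcc = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])  # |det| = 2, an FCC generator
print(is_similarity(D_quincunx), is_similarity(D_fcc))  # True False

# Why no 3-D analogue exists: D.T @ D = c*I and |det D| = 2 force
# c = 2**(2/d), which can be an integer (the squared norm of an integer
# column of D) only for d <= 2.
```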

Recently, we have proposed a new framework for detecting brain activity from fMRI data, which is based on the spatial discrete wavelet transform. The standard wavelet-based approach performs a statistical test in the wavelet domain, and therefore fails to provide a rigorous statistical interpretation in the spatial domain. The new framework provides an “integrated” approach: the data is processed in the wavelet domain (e.g., by thresholding wavelet coefficients), and a suitable statistical testing procedure is applied afterwards in the spatial domain. This method is based on conservative assumptions only and has a strong type-I error control by construction. At the same time, it has a sensitivity comparable to that of SPM.

We introduce an integrated framework for detecting brain activity from fMRI data, which is based on a spatial discrete wavelet transform. Unlike the standard wavelet-based approach for fMRI analysis, we apply the suitable statistical test procedure in the spatial domain. For a desired significance level, this scheme has one remaining degree of freedom, characterizing the wavelet processing, which is optimized according to the principle of minimal approximation error. This allows us to determine the threshold values in a way that does not depend on data. While developing our framework, we make only conservative assumptions. Consequently, the detection of activation is based on strong evidence. We have implemented this framework as a toolbox (WSPM) for the SPM2 software, taking advantage of multiple options and functions of SPM such as the setup of the linear model and the use of the hemodynamic response function. We show by experimental results that our method is able to detect activation patterns; the results are comparable to those obtained by SPM even though statistical assumptions are more conservative.

Wavelet-based statistical analysis methods for fMRI are able to detect brain activity without smoothing the data. Typically, the statistical inference is performed in the wavelet domain by testing the t-values of each wavelet coefficient; subsequently, an activity map is reconstructed from the significant coefficients. The limitation of this approach is that there is no direct statistical interpretation of the reconstructed map. In this paper, we propose a new methodology that takes advantage of wavelet processing but keeps the statistical meaning in the spatial domain. We derive a spatial threshold with a proper non-stationary component and determine optimal threshold values by minimizing an approximation error. The sensitivity of our method is comparable to SPM's (Statistical Parametric Mapping).

Wavelet-based methods for the statistical analysis of functional magnetic resonance images (fMRI) are able to detect brain activity without smoothing the data (3D space + time). Up to now, the statistical inference was typically performed in the wavelet domain by testing the t-values of each wavelet coefficient; the activity map was reconstructed from the significant coefficients. The limitation of this approach is that there is no direct statistical interpretation of the reconstructed map. Here, we describe a new methodology that takes advantage of wavelet processing but keeps the statistical meaning in the spatial domain. We derive a spatial threshold with a proper non-stationary component and determine optimal threshold values by minimizing an approximation error. This framework was implemented as a toolbox (WSPM) for the widely-used SPM2 software, taking advantage of the multiple options and functionality of SPM (Statistical Parametric Mapping) such as the specification of a linear model that may account for the hemodynamic response of the system. The sensitivity of our method is comparable to that of conventional SPM, which applies a spatial Gaussian prefilter to the data, even though our statistical assumptions are more conservative.

Statistical Parametric Mapping (SPM) is a widely deployed tool for detecting and analyzing brain activity from fMRI data. One of SPM's main features is smoothing the data with a Gaussian filter to increase the SNR. The subsequent statistical inference is based on the continuous Gaussian random field theory. Since smoothing deteriorates the remaining spatial resolution, SPM introduces the concept of “resels” (resolution elements) or spatial information-containing cells. The number of resels turns out to be inversely proportional to the size of the Gaussian smoother.

Detecting the activation signal in fMRI data can also be done with a wavelet approach: after computing the spatial wavelet transform, a straightforward coefficient-wise statistical test is applied to detect activated wavelet coefficients. In this paper, we establish the link between SPM and the wavelet approach based on two observations. First, the (iterated) lowpass analysis filter of the discrete wavelet transform can be chosen to closely resemble SPM's Gaussian filter. Second, the subsampling scheme provides us with a natural way to define the number of resels; i.e., the number of coefficients in the lowpass subband of the wavelet decomposition. Using this connection, we can obtain the degree of the splines of the wavelet transform that makes it equivalent to SPM's method. We show results for two particularly attractive biorthogonal wavelet transforms for this task; i.e., 3D fractional-spline wavelets and 2D+Z fractional quincunx wavelets. The activation patterns are comparable to SPM's.
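Under this definition, counting resels reduces to counting lowpass coefficients after subsampling. A minimal sketch (function name, interface, and the quincunx convention of halving the total sample count per level are illustrative assumptions):

```python
import numpy as np

def resel_count(shape, iterations, scheme="dyadic"):
    """Number of lowpass coefficients after `iterations` decomposition
    levels, used as the wavelet-domain analogue of SPM's resels.
    Dyadic subsampling divides every axis by 2 per level; quincunx
    divides the total number of samples by 2 per level."""
    n = np.prod(shape)
    if scheme == "dyadic":
        return int(n / 2 ** (len(shape) * iterations))
    if scheme == "quincunx":
        return int(n / 2 ** iterations)
    raise ValueError(scheme)

print(resel_count((64, 64, 32), 1))                  # 3-D dyadic: 16384
print(resel_count((64, 64), 2, scheme="quincunx"))   # 2-D quincunx: 1024
```

This also shows why quincunx subsampling yields a finer scale progression: one quincunx level divides the resel count by 2, where one 2-D dyadic level divides it by 4.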

Recently, we have proposed a novel family of bivariate, non-separable splines. These splines, called "hex-splines", have been designed to deal with hexagonally sampled data. Incorporating the shape of the Voronoi cell of a hexagonal lattice, they preserve the twelve-fold symmetry of the hexagonal tiling cell. Similar to B-splines, we can use them to provide a link between the discrete and the continuous domain, which is required for many fundamental operations such as interpolation and resampling. The question we answer in this paper is "How well do the hex-splines approximate a given function in the continuous domain?" and more specifically "How do they compare to separable B-splines deployed on a lattice with the same sampling density?"

Hex-splines are a novel family of bivariate splines, which are well suited to handle hexagonally sampled data. Similar to classical 1D B-splines, the spline coefficients need to be computed by a prefilter. Unfortunately, the elegant implementation of this prefilter by causal and anti-causal recursive filtering is not applicable for the (non-separable) hex-splines. Therefore, in this paper we introduce a novel approach from the viewpoint of approximation theory. We propose three different recursive filters and optimize their parameters such that a desired order of approximation is obtained. The results for third and fourth order hex-splines are discussed. Although the proposed solutions provide only quasi-interpolation, they tend to be very close to the interpolation prefilter.

This paper proposes a new family of bivariate, non-separable splines, called hex-splines, especially designed for hexagonal lattices. The starting point of the construction is the indicator function of the Voronoi cell, which is used to define in a natural way the first-order hex-spline. Higher order hex-splines are obtained by successive convolutions. A mathematical analysis of this new bivariate spline family is presented. In particular, we derive a closed form for a hex-spline of arbitrary order. We also discuss important properties, such as their Fourier transform and the fact that they form a Riesz basis. We further highlight their approximation order. For conventional rectangular lattices, hex-splines revert to classical separable tensor-product B-splines. Finally, some prototypical applications and experimental results demonstrate the usefulness of hex-splines for handling hexagonally sampled data.
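The constructive definition, successive convolutions of the Voronoi-cell indicator, can be reproduced numerically. The sketch below rasterizes the first-order hex-spline for a hexagonal cell of circumradius 1 and obtains the second-order one by FFT-based self-convolution; the grid size and span are arbitrary illustrative choices:

```python
import numpy as np

def hexagon_indicator(n, span, radius=1.0):
    """First-order hex-spline: indicator of a regular flat-topped hexagon
    (circumradius `radius`) rasterized on an n x n grid over
    [-span/2, span/2)^2. Returns the image and the pixel size."""
    dx = span / n
    c = (np.arange(n) - n // 2) * dx
    x, y = np.meshgrid(c, c)
    r3 = np.sqrt(3) / 2 * radius          # inradius
    inside = (np.abs(y) <= r3) \
        & (np.abs(np.sqrt(3) * x + y) <= 2 * r3) \
        & (np.abs(np.sqrt(3) * x - y) <= 2 * r3)
    return inside.astype(float), dx

n = 256
h1, dx = hexagon_indicator(n, span=6.0)
# Order-2 hex-spline by self-convolution (FFT-based; the grid is wide
# enough that the enlarged support does not wrap around)
h2 = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.fft2(h1) ** 2))) * dx ** 2
area = h1.sum() * dx ** 2          # ~ 3*sqrt(3)/2 for radius 1
print(area, h2.max())              # h2 peaks at the hexagon's area
```

The peak of the second-order spline equals the cell area (the autoconvolution of a symmetric indicator at the origin), which gives a quick sanity check on the construction.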

We introduce a family of elementary singularities that are point-Hölder α-regular. These singularities are self-similar and are the Green functions of fractional derivative operators; i.e., by suitable fractional differentiation, one retrieves a Dirac δ function at the exact location of the singularity. We propose to use fractional operator-like wavelets that act as a multiscale version of the derivative in order to characterize and localize singularities in the wavelet domain. We show that the characteristic signature when the wavelet interacts with an elementary singularity has an asymptotic closed-form expression, termed the analytical footprint. Practically, this means that the dictionary of wavelet footprints is embodied in a single analytical form. We show that the wavelet coefficients of the (nonredundant) decomposition can be fitted in a multiscale fashion to retrieve the parameters of the underlying singularity. We propose an algorithm based on stepwise parametric fitting and demonstrate the feasibility of the approach for recovering singular signal representations.

Recently, we have introduced an integrated framework that combines wavelet-based processing with statistical testing in the spatial domain. In this paper, we propose two important enhancements of the framework. First, we revisit the underlying paradigm; i.e., that the effect of the wavelet processing can be considered as an adaptive denoising step to “improve” the parameter map, followed by a statistical detection procedure that takes into account the non-linear processing of the data. With an appropriate modification of the framework, we show that it is possible to reduce the bias of the method with respect to the best linear estimate, providing conservative results that are closer to the original data. Second, we propose an extension of our earlier technique that compensates for the lack of shift-invariance of the wavelet transform. We demonstrate experimentally that both enhancements have a positive effect on performance. In particular, we present a reproducibility study for multi-session data that compares WSPM against SPM with different amounts of smoothing. The full approach is available as a toolbox, named WSPM, for the SPM2 software; it takes advantage of multiple options and features of SPM such as the general linear model.

Introduction
Wavelet-based statistical parametric mapping (WSPM) provides a framework that combines adaptive denoising in the wavelet domain with statistical testing in the spatial domain [1]. It guarantees strong type I error control and thus high confidence in the detections. WSPM is available as a toolbox for SPM2. Using multi-session experimental data, we estimate the empirical sensitivity, specificity, and bias of WSPM (various wavelet transforms) and SPM2 (various smoothing).

SPM compensates for multiple testing using the Gaussian random field (GRF) theory, while WSPM and the voxel-by-voxel test use a simple Bonferroni correction.

Reproducibility study: The consistency of the detection maps over the 4 sessions is assessed using a reproducibility study [2]: we estimate the empirical sensitivity and specificity from a binomial mixture model for the histogram of the cumulative detection maps. Additionally, we estimate the bias of the methods as the sum of the absolute differences between the contrast before thresholding and the one of the voxel-by-voxel approach.
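The binomial mixture model can be fitted, for instance, by maximum likelihood; the coarse grid search below is an illustrative stand-in for the estimation procedure of [2], applied to synthetic multi-session detection counts:

```python
import numpy as np
from math import comb

def fit_binomial_mixture(counts, M):
    """Maximum-likelihood fit (coarse grid search, for illustration) of a
    two-component binomial mixture: over M sessions, a voxel is detected
    k times with k ~ lam*Bin(M, pA) + (1-lam)*Bin(M, pI); pA is the
    empirical sensitivity and 1 - pI the specificity."""
    ks = np.arange(M + 1)
    binom = np.array([comb(M, k) for k in ks])
    pmf = lambda p: binom * p ** ks * (1 - p) ** (M - ks)
    best = (-np.inf, None)
    for pA in np.linspace(0.5, 0.999, 50):
        bA = pmf(pA)
        for pI in np.linspace(0.001, 0.2, 50):
            bI = pmf(pI)
            for lam in np.linspace(0.01, 0.5, 50):
                ll = counts @ np.log(lam * bA + (1 - lam) * bI)
                if ll > best[0]:
                    best = (ll, (lam, pA, pI))
    return best[1]

# Synthetic study: 4 sessions, 5% of 10000 voxels truly active
rng = np.random.default_rng(1)
M, n_vox = 4, 10000
active = rng.random(n_vox) < 0.05
k = np.where(active, rng.binomial(M, 0.9, n_vox), rng.binomial(M, 0.02, n_vox))
lam, sens, p_false = fit_binomial_mixture(np.bincount(k, minlength=M + 1), M)
print(lam, sens, 1 - p_false)   # ~0.05, ~0.9, specificity ~0.98
```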

Discussion
In Figure 1, we show the activation maps of SPM at significance level 5% corrected. Increasing the smoothing increases the extent of the activation patterns and leads to merging of clusters. Notice that some foci are only detected for higher smoothing. In Figures 2-3, we show the activation maps of WSPM at the same significance level, but with various settings of the wavelet transform. The position and shape of the detected patterns are comparable to SPM's, but the spatial definition appears to be higher. The small foci in the left auditory cortex (Fig 3, bottom) may correspond to the subdivisions of the primary auditory cortex or to different secondary auditory areas. We notice that the typical extent of the detected patterns depends little on these settings. The number of detected patterns appears highest for the 3D wavelet transform (1 iteration).

In Figure 4, we show the ROC curves obtained after estimating the parameters of the binomial mixture model for both methods. By construction, the voxel-by-voxel statistical test has no bias, but it reaches a very low sensitivity. For SPM, smoothing increases the bias (as can be expected), but also the empirical sensitivity and specificity. Finally, WSPM has a lower bias than SPM with 4mm smoothing, combined with a comparable or better compromise between sensitivity and specificity. The wiggly behavior of the curves is due to the non-linear thresholding operation in the wavelet domain.

Wavelet-based statistical parametric mapping (WSPM) analyzes fMRI data using a combination of powerful denoising in the wavelet domain with statistical testing in the spatial domain. It also guarantees strong type I error (false positives) control and thus high confidence in the detections. In this poster, we show the various stages of this framework and we propose a comparison of WSPM and SPM2, which is the de-facto standard for statistical analysis of fMRI data. WSPM is available to the neuro-imaging community as a toolbox for SPM.

One of the major advantages of WSPM is that it does not require pre-smoothing the data before statistical analysis, which is a prerequisite of the SPM approach. Therefore, potential high-spatial-resolution information available in the data is not lost and can be used to retrieve small and highly detailed activation patterns. As a typical result, we show the activation maps for SPM (6mm) and WSPM. The experimental paradigm was single-frequency acoustic stimulation (1.5T scanner; TR=1.2s; 1.8×1.8×3mm). For the same statistical significance (5% corrected), the activation patterns retrieved by WSPM are clearly more detailed than those by SPM2.

In the poster, we also include the results of the empirically measured sensitivity and specificity using a reproducibility analysis of multi-session data for both WSPM and SPM2. From this evaluation, we see that WSPM is able to obtain high spatial resolution without loss of sensitivity.

Wavelet-based statistical parametric mapping (WSPM) combines powerful denoising in the wavelet domain with statistical testing in the spatial domain. It guarantees strong type I error control and thus high confidence in the detections. In this poster, we propose a comparison of WSPM and SPM2, based on the results of multi-session experimental data.

The dataset comes from a carefully conducted experiment with auditory stimulation (Philips Gyroscan 1.5T; TR = 1.2s; spatial resolution 1.8×1.8×3mm). The subject's head is placed within a custom-designed headset that isolates the subject from most of the MR scanner's noise; the subject is exposed to single-frequency acoustic stimulation during the activation condition of a block paradigm. Four different auditory frequencies (300Hz, 1126Hz, 2729Hz, 4690Hz) are used in each session. The complete experiment contains 4 sessions, each spanning 250 volumes.

The general linear model (GLM) setup was done using SPM, including regressors from the realignment procedure and the autoregressive model for serial correlations. For each session, functional maps of the combined contrast (all frequencies vs. rest) were obtained for a broad range of significance levels with SPM (4mm smoothing) and WSPM (orthogonal B-spline wavelets slice-by-slice; degree 1.0; 2 iterations; combination of 4 spatial shifts). SPM compensates for multiple testing using the Gaussian random field theory, while WSPM uses a simple Bonferroni correction. The results were then analyzed using two different criteria:

Empirical: we report the number of activated voxels inside/outside a manually defined region of interest (ROI) around the auditory cortex, as shown in Fig. 1.

Theoretical: we analyze the detection maps of the 4 sessions using the non-parametric approach of Genovese (1997). This method allows us to estimate the sensitivity and specificity by fitting a binomial mixture model, and thus to obtain a receiver operating characteristic (ROC) curve.

Figure 1.

In Fig. 2, we show for each session the number of detections for SPM and WSPM, inside and outside the ROI, as a function of the significance level. Notice that the significance level is at the volume level; i.e., α = 1.0 corresponds to the expectation of a single false positive for the whole volume. We found that both methods are about equally calibrated for α = 1.0. However, the slope of the curve that links the number of detections to the significance level is lower for WSPM than for SPM, which means more detections with WSPM for the same type I error probability. Interestingly, the number of detections outside the ROI is not higher for WSPM, suggesting that its higher sensitivity does not come at the cost of additional false positives.
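For reference, the Bonferroni-corrected per-voxel threshold behind this calibration is straightforward to compute; the sketch below uses a Gaussian approximation to the t-statistic and an illustrative volume size (both are assumptions, not the study's exact numbers):

```python
from statistics import NormalDist

def bonferroni_z(alpha, n_tests):
    """One-sided per-voxel z-threshold for a volume-wise significance
    level alpha under simple Bonferroni correction (Gaussian
    approximation to the t-statistic)."""
    return NormalDist().inv_cdf(1 - alpha / n_tests)

n = 64 * 64 * 32                 # illustrative volume size
print(bonferroni_z(1.0, n))      # alpha = 1.0: one expected false positive, ~4.3
print(bonferroni_z(0.05, n))     # alpha = 0.05: ~5.0
```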

Figure 2.

In Fig. 3, we show the ROCs after estimating the binomial mixture model for both methods. The mixture parameter is globally estimated. The higher performance of WSPM is confirmed: higher sensitivity (more true positives) combined with higher specificity (more true negatives).

Figure 3.

In Fig. 4, we show the area under the ROC (computed here as sensitivity times specificity), which is a good single measure of the performance of the detection technique. This figure again illustrates the excellent balance of sensitivity and specificity for WSPM.

Figure 4.

Finally, we note that WSPM guarantees its strong type I error control using conservative assumptions only.

Consider classes of signals that have a finite number of degrees of freedom per unit of time and call this number the rate of innovation. Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials.

Even though these signals are not bandlimited, we show that they can be sampled uniformly at (or above) the rate of innovation using an appropriate kernel and then be perfectly reconstructed. Thus, we prove sampling theorems for classes of signals and kernels that generalize the classic "bandlimited and sinc kernel" case. In particular, we show how to sample and reconstruct periodic and finite-length streams of Diracs, nonuniform splines, and piecewise polynomials using sinc and Gaussian kernels. For infinite-length signals with finite local rate of innovation, we show local sampling and reconstruction based on spline kernels.

The key in all constructions is to identify the innovative part of a signal (e.g., time instants and weights of Diracs) using an annihilating or locator filter: a device well known in spectral analysis and error-correction coding. This leads to standard computational procedures for solving the sampling problem, which we show through experimental results.

Applications of these new sampling results can be found in signal processing, communications systems, and biological systems.
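For a periodic stream of K Diracs, the annihilating-filter recovery described above reduces to a small linear-algebra problem. The sketch below assumes noiseless Fourier-series coefficients and uses least squares for both steps; the two-Dirac example at the end is illustrative:

```python
import numpy as np

def recover_diracs(X, K, tau):
    """Annihilating-filter recovery of K Diracs on [0, tau) from the
    2K+1 Fourier-series coefficients X[m] = (1/tau) * sum_k a_k u_k^m,
    with u_k = exp(-2j*pi*t_k/tau), stored for m = -K..K."""
    Xm = lambda m: X[m + K]
    # 1) annihilating filter h (h[0] = 1): sum_i h[i] X[m - i] = 0
    A = np.array([[Xm(m - i) for i in range(1, K + 1)] for m in range(K + 1)])
    b = -np.array([Xm(m) for m in range(K + 1)])
    h = np.linalg.lstsq(A, b, rcond=None)[0]
    # 2) locations from the roots of z^K + h[1] z^(K-1) + ... + h[K]
    u = np.roots(np.concatenate(([1.0], h)))
    t = np.mod(-np.angle(u) * tau / (2 * np.pi), tau)
    # 3) weights from the Vandermonde system X[m] = (1/tau) sum_k a_k u_k^m
    V = np.array([[u_k ** m for u_k in u] for m in range(-K, K + 1)]) / tau
    a = np.linalg.lstsq(V, X, rcond=None)[0]
    order = np.argsort(t)
    return t[order], a[order].real

# Two Diracs of weights (1.0, 0.5) at t = 0.2 and 0.7, with tau = 1
tau, K = 1.0, 2
t_true, a_true = np.array([0.2, 0.7]), np.array([1.0, 0.5])
m = np.arange(-K, K + 1)
X = (a_true[None] * np.exp(-2j * np.pi * m[:, None] * t_true[None] / tau)).sum(1) / tau
t_hat, a_hat = recover_diracs(X, K, tau)
print(t_hat, a_hat)  # ~[0.2, 0.7], ~[1.0, 0.5]
```

The filter whose roots are the u_k annihilates the coefficient sequence, which is why 2K+1 samples of the spectrum suffice to pin down the 2K free parameters.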

We consider sampling discrete-time periodic signals that are piecewise bandlimited, that is, signals that are the sum of a bandlimited signal and a piecewise polynomial signal containing a finite number of transitions. These signals are not bandlimited, and thus the Shannon sampling theorem (also attributed to Kotelnikov and Whittaker) for bandlimited signals cannot be applied. In this paper, we derive sampling and reconstruction schemes based on those developed in [1, 2, 3] for piecewise polynomial signals, which take into account the extra degrees of freedom due to the bandlimited part.

We consider the problem of sampling signals which are not bandlimited, but still have a finite number of degrees of freedom per unit of time, such as, for example, piecewise polynomials. We demonstrate that by using an adequate sampling kernel and a sampling rate greater than or equal to the number of degrees of freedom per unit of time, one can uniquely reconstruct such signals. This proves a sampling theorem for a wide class of signals beyond bandlimited ones. Applications of this sampling theorem can be found in signal processing, communication systems, and biological systems.

Signal acquisition and reconstruction is at the heart of signal processing and communications, and sampling theorems provide the bridge between the continuous and the discrete-time worlds. The most celebrated and widely used sampling theorem is often attributed to Shannon and gives a sufficient condition, namely bandlimitedness, for an exact sampling and interpolation formula. Recently, this framework has been extended to classes of non-bandlimited signals. The way around Shannon's classical sampling theorem resides in a parametric approach that exploits the prior that the signal is sparse in a basis or in a parametric space. This leads to new exact reconstruction formulas and fast algorithms that achieve such reconstructions.

The aim of this tutorial is to give an overview of these recent exciting findings in sampling theory. The fundamental theoretical results will be reviewed and constructive algorithms will be presented. Finally, a diverse set of applications will be presented so as to demonstrate the tangibility of the theoretical concepts.

The problem of reconstructing or estimating partially observed or sampled signals is an old and important one, and finds application in many areas of signal processing and communications. Traditional acquisition and reconstruction approaches are heavily influenced by the classical Shannon sampling theory, which gives an exact sampling and interpolation formula for bandlimited signals. Recently, the classical Shannon sampling framework has been extended to classes of non-bandlimited structured signals, which we call signals with Finite Rate of Innovation. In these new sampling schemes, the prior that the signal is sparse in a basis or in a parametric space is exploited, and perfect reconstruction is possible from a set of suitable measurements. This leads to new exact reconstruction formulas and fast algorithms that achieve such reconstructions.

The main aim of this tutorial is to give an overview of these new exciting findings in sampling theory. The fundamental theoretical results will be reviewed and constructive algorithms will be presented, both for 1-D and 2-D signals. We also discuss the effect of noise on the sampling and reconstruction of structured signals. Finally, a diverse set of applications of these new concepts will be presented to emphasize the importance and far-reaching implications of these new theories.

We present a generalization of the orthonormal Daubechies wavelets and of their related biorthogonal flavors (Cohen-Daubechies-Feauveau, 9⁄7). Our fundamental constraint is that the scaling functions should reproduce a predefined set of exponential polynomials. This allows one to tune the corresponding wavelet transform to a specific class of signals, thereby ensuring good approximation and sparsity properties. The main difference with the classical construction of Daubechies et al. is that the multiresolution spaces are derived from scale-dependent generating functions. However, from an algorithmic standpoint, Mallat's Fast Wavelet Transform algorithm can still be applied; the only adaptation consists in using scale-dependent filter banks. Finite support ensures the same computational efficiency as in the classical case. We characterize the scaling and wavelet filters, construct them and show several examples of the associated functions. We prove that these functions are square-integrable and that they converge to their classical counterparts of the corresponding order.

We propose a generalization of the Cohen-Daubechies-Feauveau (CDF) and 9⁄7 biorthogonal wavelet families. This is done within the framework of non-stationary multiresolution analysis, which involves a sequence of embedded approximation spaces generated by scaling functions that are not necessarily dilates of one another. We consider a dual pair of such multiresolutions, where the scaling functions at a given scale are mutually biorthogonal with respect to translation. Also, they must have the shortest-possible support while reproducing a given set of exponential polynomials. This constitutes a generalization of the standard polynomial reproduction property.

The corresponding refinement filters are derived from the ones that were studied by Dyn et al. in the framework of non-stationary subdivision schemes. By using different factorizations of these filters, we obtain a general family of compactly supported dual wavelet bases of L2. In particular, if the exponential parameters are all zero, one retrieves the standard CDF B-spline wavelets and the 9⁄7 wavelets. Our generalized description yields equivalent constructions for E-spline wavelets.

A fast filterbank implementation of the corresponding wavelet transform follows naturally; it is similar to Mallat's algorithm, except that the filters are now scale-dependent. This new scheme offers high flexibility and is tunable to the spectral characteristics of a wide class of signals. In particular, it is possible to obtain symmetric basis functions that are well-suited for image processing.

We present a generalization of the Daubechies wavelet family. The context is that of a non-stationary multiresolution analysis—i.e., a sequence of embedded approximation spaces generated by scaling functions that are not necessarily dilates of one another. The constraints that we impose on these scaling functions are: (1) orthogonality with respect to translation, (2) reproduction of a given set of exponential polynomials, and (3) minimal support. These design requirements lead to the construction of a general family of compactly-supported, orthonormal wavelet-like bases of L2. If the exponential parameters are all zero, then one recovers Daubechies wavelets, which are orthogonal to the polynomials of degree (N − 1) where N is the order (vanishing-moment property). A fast filterbank implementation of the generalized wavelet transform follows naturally; it is similar to Mallat's algorithm, except that the filters are now scale-dependent. The new transforms offer increased flexibility and are tunable to the spectral characteristics of a wide class of signals.

This paper presents a simple yet effective color filter array (CFA) interpolation algorithm. It is based on a linear interpolating kernel, but operates in YUV space, which results in a nontrivial boost in the peak signal-to-noise ratio (PSNR) of the red and blue channels. The algorithm can be implemented efficiently. At the end of the paper, we compare its performance with nonlinear interpolation methods and show that it is competitive even among state-of-the-art CFA demosaicing algorithms.
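As a hedged illustration of why a luma/chroma representation helps linear demosaicing, the sketch below bilinearly interpolates the green (luma-like) channel and the chroma differences R-G and B-G instead of raw red and blue. This is a classical constant-color-difference variant that captures the spirit of the idea, not the authors' exact YUV kernel; an RGGB Bayer layout is assumed:

```python
import numpy as np

def conv2(img, k):
    """2-D 'same' convolution with zero padding (small symmetric kernels)."""
    H, W = img.shape
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * padded[i:i + H, j:j + W]
    return out

def demosaic_chroma(bayer):
    """Bilinear demosaicing that interpolates the chroma differences R-G
    and B-G rather than raw R and B (assumes an RGGB mosaic)."""
    R = np.zeros_like(bayer); G = np.zeros_like(bayer); B = np.zeros_like(bayer)
    R[0::2, 0::2] = bayer[0::2, 0::2]
    G[0::2, 1::2] = bayer[0::2, 1::2]
    G[1::2, 0::2] = bayer[1::2, 0::2]
    B[1::2, 1::2] = bayer[1::2, 1::2]
    kG = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0    # G: half density
    kRB = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # R,B: quarter density
    Gf = conv2(G, kG)                       # full-resolution green (luma proxy)
    maskR = np.zeros_like(bayer); maskR[0::2, 0::2] = 1.0
    maskB = np.zeros_like(bayer); maskB[1::2, 1::2] = 1.0
    dR = conv2((R - Gf) * maskR, kRB)       # interpolated chroma R-G
    dB = conv2((B - Gf) * maskB, kRB)       # interpolated chroma B-G
    return np.dstack([Gf + dR, Gf, Gf + dB])
```

Interpolating the slowly varying chroma differences, rather than the red and blue samples themselves, is what accounts for the PSNR gain of such luma/chroma schemes in the red and blue channels.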

We design two complex filters {h[n], g[n]} for an orthogonal filter bank structure based on two atom functions, ρ0(t) and ρ1/2(t), such that: 1) they generate an orthonormal multiwavelet basis; 2) the two complex-conjugate wavelets are Hilbert wavelets, i.e., their frequency responses are supported on either the positive or the negative frequencies only; and 3) the two scaling functions are real. The resulting complex wavelet transform (CWT) is non-redundant, nearly shift-invariant, and able to distinguish diagonal features. This distinguishability of diagonal features is demonstrated by comparison with the real discrete wavelet transform.

In this paper, a novel non-redundant complex wavelet transform (NRCWT) for real-valued signals is proposed, implemented by an orthogonal complex filter bank. We show how to choose the two complex filters from classical real-valued wavelet filters in such a way that the filter bank is always orthogonal. Using fractional B-spline filters, a pair of exact Hilbert wavelets is constructed, which separates the positive frequencies from the negative ones.

In this paper, we investigate the problem of retrieving the innovation parameters (time and amplitude) of a stream of Diracs from non-uniform samples taken with a novel kernel (a hyperbolic secant). We devise a non-iterative, exact algorithm that allows perfect reconstruction of 2K innovations from as few as 2K non-uniform samples. We also investigate noise issues and compute the Cramér-Rao lower bounds for this problem. A simple total least-squares extension of the algorithm proves to be efficient in reconstructing the location of a single Dirac from noisy measurements.
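The paper's sech-kernel algorithm is not reproduced here, but the non-iterative core shared by such finite-rate-of-innovation methods, the annihilating-filter (Prony) step that turns 2K generalized measurements into K locations and K amplitudes via one Toeplitz system and one root-finding, can be sketched as follows (the power-sum moments below are a toy assumption, not the paper's sech-based measurements):

```python
import numpy as np

def annihilating_filter_recovery(s, K):
    """Classical annihilating-filter (Prony) step: given 2K moments
    s[m] = sum_k a_k * u_k**m, m = 0..2K-1, recover the K 'locations' u_k
    (roots of the annihilating polynomial) and the K amplitudes a_k."""
    # The monic degree-K filter h annihilates s; solve a K x K Toeplitz system.
    T = np.array([[s[K - 1 + i - j] for j in range(K)] for i in range(K)])
    h = np.linalg.solve(T, -s[K:2 * K])
    # Locations are the roots of z^K + h[0] z^(K-1) + ... + h[K-1].
    u = np.roots(np.concatenate([[1.0], h]))
    # Amplitudes follow from a linear (Vandermonde) system.
    V = np.vander(u, N=2 * K, increasing=True).T
    a = np.linalg.lstsq(V, s.astype(complex), rcond=None)[0]
    return u, a
```

Nothing is iterated: the parameters drop out of one linear solve, one polynomial root extraction, and one more linear solve, which is why 2K measurements suffice for 2K innovation parameters in the noiseless case.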

We propose an unbiased estimate of a filtered version of the mean squared error, the blur-SURE (Stein's unbiased risk estimate), as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs involving a scaling factor that controls the blur size; a typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters which, when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm, result in a restoration quality very similar to the one obtained with the exact PSF. The highly competitive results obtained underline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

We propose a novel approach to estimate the parameters of motion blur (blur length and orientation) from an observed image. The estimation of the motion blur parameters is based on a novel criterion: the minimization of an unbiased estimate of a filtered MSE ("blur-SURE"). By finding the best Wiener filter for this criterion, we automatically find the blur parameters with high accuracy. We then use these parameters in a recent (non-blind) deblurring algorithm that we have proposed, which achieves the state of the art in deconvolution. The results obtained are quite competitive with other standard algorithms under a wide range of scenarios: high noise level, short blur length, etc.

We propose a novel blind deconvolution method that consists of first estimating the variance of the Gaussian blur, and then performing non-blind deconvolution with the estimated PSF. The main contribution of this paper is the first step: estimating the variance of the Gaussian blur by minimizing a novel objective functional, an unbiased estimate of a blur MSE (SURE). The optimal parameter and blur variance are obtained by minimizing this criterion over linear processings that take the form of simple Wiener filterings. We then perform non-blind deconvolution using our recent high-quality SURE-based deconvolution algorithm. The very competitive results show the highly accurate estimation of the blur variance (compared to the ground-truth value) and the great potential of developing more powerful blind deconvolution algorithms based on the SURE-type principle.
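The identity that makes such criteria computable can be illustrated in a toy 1-D simulation: for any linear processing F of y = u + noise, the quantity ||Fy - y||^2 + 2*sigma^2*tr(F) - N*sigma^2 is an unbiased estimate of ||Fy - u||^2, evaluated without access to the clean data u. The circular Gaussian blur model and the Wiener family below are assumptions of this sketch, not the paper's exact blur-SURE functional:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
sigma = 0.05
x = np.sin(2 * np.pi * np.arange(N) / 256)              # toy sharp signal

def gauss_dft(s):
    """DFT of a circular Gaussian blur of standard deviation s (unit DC gain)."""
    f = np.fft.fftfreq(N)
    return np.exp(-2 * (np.pi * f * s) ** 2)

u = np.fft.ifft(gauss_dft(2.0) * np.fft.fft(x)).real    # blurred signal
y = u + sigma * rng.standard_normal(N)                  # noisy observation

def sure_linear(Fd, y, sigma):
    """For a DFT-diagonal linear processing F (diagonal Fd), return an
    unbiased estimate of ||F y - u||^2 computed from y alone, plus F y."""
    Fy = np.fft.ifft(Fd * np.fft.fft(y)).real
    trF = np.sum(Fd).real                               # trace of F
    sure = np.sum((Fy - y) ** 2) + 2 * sigma ** 2 * trF - N * sigma ** 2
    return sure, Fy

def wiener_family(s_cand, lam=1e-2):
    """Wiener-type filter built from a candidate blur width s_cand; a family
    of such filters, indexed by the candidate blur, is what the criterion
    is minimized over."""
    Hc = gauss_dft(s_cand)
    return Hc / (Hc ** 2 + lam)
```

In simulation one can verify that the SURE value tracks the true (oracle) loss to within statistical fluctuations, which is the property the blur-variance estimation rests on.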

In this paper, we propose a novel deconvolution algorithm based on the minimization of a regularized Stein's unbiased risk estimate (SURE), which is a good estimate of the mean squared error (MSE). We linearly parametrize the deconvolution process by using multiple Wiener filters as elementary functions, followed by undecimated Haar-wavelet thresholding. Due to the quadratic nature of SURE and the linear parametrization, the deconvolution problem finally boils down to solving a linear system of equations, which is very fast and exact. The linear coefficients, i.e., the solution of the linear system of equations, constitute the best approximation of the optimal processing on the Wiener-Haar-threshold basis that we consider. In addition, the proposed multi-Wiener SURE-LET approach is applicable to both periodic and symmetric boundary conditions, and can thus be used in various practical scenarios. The very competitive (both in computation time and quality) results show that the proposed algorithm, which can be interpreted as a kind of non-linear Wiener processing, can be used as a basic tool for building more sophisticated deconvolution algorithms.
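The "quadratic SURE plus linear parametrization equals one linear system" mechanism can be sketched in a small denoising toy. Gaussian smoothers stand in for the paper's multiple Wiener filters, and the Haar thresholding stage is omitted, so this is a linearized illustration of the principle rather than the SURE-LET algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2048
sigma = 0.3
x = np.sin(2 * np.pi * np.arange(N) / 128)               # toy clean signal
y = x + sigma * rng.standard_normal(N)                   # noisy observation

# Elementary linear processings: DFT-diagonal Gaussian smoothers of
# different widths (stand-ins for the multiple Wiener filters).
f = np.fft.fftfreq(N)
widths = [0.5, 2.0, 8.0]
Fds = [np.exp(-2 * (np.pi * f * w) ** 2) for w in widths]
Fys = [np.fft.ifft(Fd * np.fft.fft(y)).real for Fd in Fds]

# SURE(a) for x_hat = sum_k a_k F_k y is quadratic in a, so minimizing it
# reduces to one K x K linear system M a = c:
K = len(Fds)
M = np.array([[Fys[k] @ Fys[l] for l in range(K)] for k in range(K)])
c = np.array([Fys[k] @ y - sigma ** 2 * np.sum(Fds[k]).real
              for k in range(K)])                        # trace term from SURE
a = np.linalg.solve(M, c)
x_hat = sum(ak * Fy for ak, Fy in zip(a, Fys))           # SURE-optimal combination
```

The Gram matrix M and right-hand side c are computed directly from the observed data; no iterative tuning of a regularization parameter is needed, which is the speed advantage the abstract refers to.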

We propose a novel deconvolution algorithm based on the minimization of Stein's unbiased risk estimate (SURE). We linearly parametrize the deconvolution process by using multiple Wiener filterings as elementary functions, followed by undecimated Haar-wavelet thresholding. The key contributions of our approach are: 1) the linear combination of several Wiener filters with different (but fixed) regularization parameters, which avoids the manual adjustment of a single nonlinear parameter; 2) the use of linear parameterization, which makes the SURE minimization finally boil down to solving a linear system of equations, leading to a very fast and exact optimization of the whole deconvolution process.
The results obtained on standard test images show that our algorithm compares favorably with other state-of-the-art deconvolution methods in both speed and quality.

We propose an efficient method for image registration based on iteratively fitting a parametric model to the output of an elastic registration. It combines the flexibility of elastic registration, which can estimate complex deformations, with the robustness of parametric registration, which can estimate very large displacements. Our approach is made feasible by the recent Local All-Pass (LAP) algorithm, a fast and accurate filter-based method for estimating the local deformation between two images. Moreover, at each iteration we fit a linear parametric model to the local deformation, which amounts to solving a linear system of equations (very fast and efficient). We use a quadratic polynomial model; however, the framework can easily be extended to more complicated models. The significant advantage of the proposed method is its robustness to model mismatch (e.g., noise and blurring). Experimental results on synthetic and real images demonstrate that the proposed algorithm is highly accurate and outperforms a selection of image registration approaches.
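The model-fitting step is easy to make concrete: a quadratic polynomial displacement model is linear in its coefficients, so fitting it to a dense deformation field (such as the output of an elastic registration) is one least-squares system per displacement component. The sketch below assumes a given displacement field; the LAP filter estimation itself is not reproduced:

```python
import numpy as np

def fit_quadratic_model(ux, uy):
    """Fit a quadratic polynomial model to a dense 2-D displacement field
    (ux, uy). Each component is modeled as
        u(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2,
    which is linear in the coefficients c, so the fit is a linear
    least-squares problem per component."""
    H, W = ux.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    # Design matrix: one row per pixel, one column per polynomial term.
    A = np.stack([np.ones_like(xx), xx, yy, xx ** 2, xx * yy, yy ** 2],
                 axis=-1).reshape(-1, 6)
    cx, *_ = np.linalg.lstsq(A, ux.ravel(), rcond=None)
    cy, *_ = np.linalg.lstsq(A, uy.ravel(), rcond=None)
    # Return coefficients and the fitted (parametric) displacement field.
    return cx, cy, (A @ cx).reshape(H, W), (A @ cy).reshape(H, W)
```

Because the fit is a global least-squares problem, pixels corrupted by noise or blur perturb the model only mildly, which is one way to see the robustness to model mismatch mentioned above.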

This paper presents a new approach to enhancing speech signals corrupted by white Gaussian noise for robust speech recognition. It is based on the minimization of an estimate of the denoising MSE, known as Stein's unbiased risk estimate (SURE), and does not require any hypotheses on the original signal. The enhanced signal is obtained by thresholding coefficients in the DCT domain, with the parameters of the thresholding functions specified through the minimization of the SURE. Thanks to a linear parametrization, this optimization is very cost-effective. The method also handles non-white noise, by applying a noise-whitening step before the optimization. We have performed automatic speech recognition tests on a subset of the AURORA 2 database to compare our method with different denoising strategies. The results show that our method brings a substantial increase in recognition accuracy.
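A minimal sketch of the DCT-domain SURE mechanism follows (a synthetic signal that is sparse in the DCT basis, white Gaussian noise, and simple soft thresholding; this is not the AURORA speech pipeline). The point is that the threshold is chosen by minimizing SURE, which requires no access to the clean signal:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 512
sigma = 0.5

def dct_matrix(N):
    """Orthonormal DCT-II analysis matrix (rows are basis vectors)."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.cos(np.pi * (n + 0.5) * k / N) * np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)
    return C

C = dct_matrix(N)
coeffs = np.zeros(N); coeffs[3] = 10.0; coeffs[17] = 10.0   # sparse in DCT
x = C.T @ coeffs                                            # clean signal
y = x + sigma * rng.standard_normal(N)                      # noisy signal
d = C @ y                                                   # noisy DCT coefficients

def sure_soft(d, t, sigma):
    """SURE for soft thresholding at level t: an unbiased estimate of the
    denoising MSE, computable from the noisy coefficients alone."""
    return (N * sigma ** 2
            - 2 * sigma ** 2 * np.sum(np.abs(d) <= t)
            + np.sum(np.minimum(d ** 2, t ** 2)))

ts = np.linspace(0.0, 3.0 * sigma * np.sqrt(2 * np.log(N)), 200)
t_best = ts[np.argmin([sure_soft(d, t, sigma) for t in ts])]
d_hat = np.sign(d) * np.maximum(np.abs(d) - t_best, 0.0)    # soft threshold
x_hat = C.T @ d_hat                                         # enhanced signal
```

Because SURE here is evaluated on a grid of candidate thresholds (and, with a linear parametrization of the thresholding function, would reduce to a linear system), the optimization is indeed very cheap.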