Sunday, May 26, 2013

Sunday Morning Insight: The 200 Year Gap.

[ Please see updates at the end of this entry]

If one takes an algorithm for finding the sparsest solution to an underdetermined linear system of equations as the starting point, then Yoram Bresler is right [1] when he points out that Gaspard de Prony might have been the first to put a stake in what we now call compressive sensing. What is so exceptional here is that the algorithm was found in 1795. If you consider that the theoretical justifications for some cases of compressive sensing were published in 2004-2006 (Donoho, Candès, Romberg, Tao), then this is probably one of the largest time gaps between the use of an algorithm within an island of knowledge [6-8] and some theoretical justification for it. Here the claim is to be taken in a wide sense: is there an algorithm, and a theoretical justification, connecting the sparsity of the solution of an underdetermined system of linear equations to the size of said system? In effect, Larry Wasserman thinks it might win the biggest gap award. What's interesting is that, historically, it might even precede the Least Squares solution technique [7], depending on whether you are a Gauss (1795) or a Legendre (1805) fan.

The main drawbacks of Prony's method are that it is not numerically stable for large D, and that it is not stable if x is only nearly sparse. Also, the algorithm is very specific to the structure of the Fourier transform and does not generalize to other linear encodings.

Indeed, we are now a little bit beyond m = 2s with the Donoho-Tanner phase transition [6] and we can deal with noise thanks to the 2004 results [2,3].
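To make the mechanics concrete, here is a minimal sketch of Prony's method recovering an s-sparse signal from only m = 2s Fourier samples. This is my own illustrative NumPy toy (noiseless, exactly sparse, integer support locations), not the stable variants discussed in the literature:

```python
import numpy as np

rng = np.random.default_rng(0)
N, s = 128, 3                          # signal length and sparsity

# An s-sparse signal with random support and amplitudes
support = np.sort(rng.choice(N, size=s, replace=False))
x = np.zeros(N)
x[support] = rng.standard_normal(s)

# Measure only m = 2s consecutive Fourier samples of x
m = 2 * s
k = np.arange(m)
y = np.array([np.sum(x[support] * np.exp(-2j * np.pi * kk * support / N))
              for kk in k])

# Step 1: the annihilating filter h is the null vector of a small
# Hankel matrix built from the measurements
H = np.array([[y[i + j] for j in range(s + 1)] for i in range(s)])
h = np.linalg.svd(H)[2][-1].conj()     # right singular vector for sigma = 0

# Step 2: the roots of the filter polynomial encode the support,
# since each root is exp(-2*pi*i*t_j/N) for a support location t_j
roots = np.roots(h[::-1])              # np.roots wants highest degree first
t_est = np.sort(np.mod(np.round(-np.angle(roots) * N / (2 * np.pi)), N)
                .astype(int))

# Step 3: amplitudes from a small Vandermonde least-squares solve
V = np.exp(-2j * np.pi * np.outer(k, t_est) / N)
c = np.linalg.lstsq(V, y, rcond=None)[0]

x_est = np.zeros(N)
x_est[t_est] = c.real
print("support recovered:", np.array_equal(t_est, support))
```

In the noiseless, exactly sparse setting this recovers the support and amplitudes from just 2s samples; add noise or make x only nearly sparse and the root-finding step degrades quickly, which is exactly the instability noted above.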

But, like a little tune that won't stop playing in the back of one's mind, one wonders: what if an algorithm had been found in 1795 that was numerically stable? Would it have changed the world earlier, the way Least Squares did in the 1800s and the FFT did in the 1960s? [7]


"I would like to add that the first appearances of Prony's method in compressed sensing I am aware of are talks given by Daniel Potts (TU Chemnitz) and Stefan Kunis (U Osnabrück), probably in 2009. There they explicitly compared the thresholds and recovery guarantees of Prony and Basis Pursuit. However, I could not find slides or preprints from those days (although they have more recent papers on that topic)... They have a recent preprint, "A sparse Prony FFT", here: http://www.tu-chemnitz.de/mathematik/preprint/2013/PREPRINT.php?year=2013&num=03"

We describe the application of Prony-like reconstruction methods to the problem of the sparse Fast Fourier transform (sFFT) [5]. In particular, we adapt both important parts of the sFFT, quasi-random sampling and filtering techniques, to Prony-like methods.

I note from the paper that the sparse Prony FFT might fare better than some other sFFT implementations, as it scales only as K^(3/2).

1 comment:

Peyman Milanfar
said...

While we are minding the gap, I will point out my dissertation work in which I proved (using Prony!) that one could reconstruct a plane polygon from a finite number of its Radon transform projections. At the time I referred to such objects as being "finitely parameterized" which in today's lingo would be "sparsely represented". Not long after, the notion of signals with finite rate of innovation came out of EPFL, followed later by compressed sensing etc. and the rest is history. So I am claiming my rightful place in history as having slightly shortened said gap. :-)