Just wondering: does sFFT (it's based on the DFT, right?) compromise frequency-domain accuracy, i.e. approximate the spectrum, for the sake of performance?

"FFT" is just a name for one particularly clever method of computing DFT. "Sparse FFT" is a name for the method of computing DFT (or rather its approximation) that is applicable to the signals a priori known to have relatively few significant spectral components.

You can think of the sparse FFT as a "noise gate" filter working in the frequency domain.

The algorithm does not choose the noise threshold automagically: the number of significant components to compute is a free parameter, and its value must be decided upon beforehand.
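To make the "noise gate" picture concrete, here is a minimal sketch (plain NumPy; the function name is my own) of the kind of output a sparse FFT approximates: keep the k largest-magnitude DFT bins and zero the rest. Note that this computes a full FFT for illustration; the whole point of the real sFFT algorithm is to find those k bins in sub-linear time, which this sketch does not attempt.

```python
import numpy as np

def top_k_spectrum(x, k):
    """Conceptual "noise gate" view of a sparse FFT's output:
    keep only the k largest-magnitude DFT bins, zero the rest.
    (Uses a full FFT for illustration; the real sFFT finds the
    k significant bins without computing the whole spectrum.)"""
    X = np.fft.fft(x)
    keep = np.argsort(np.abs(X))[-k:]   # indices of the k largest bins
    X_sparse = np.zeros_like(X)
    X_sparse[keep] = X[keep]
    return X_sparse

# Example: two tones buried in noise; k must be chosen beforehand.
n = 1024
t = np.arange(n)
x = (np.sin(2 * np.pi * 50 * t / n)
     + 0.5 * np.sin(2 * np.pi * 120 * t / n)
     + 0.01 * np.random.randn(n))
X_approx = top_k_spectrum(x, k=4)       # 2 real tones -> 4 conjugate bins
x_approx = np.fft.ifft(X_approx).real
print("reconstruction RMS error:", np.sqrt(np.mean((x - x_approx) ** 2)))
```

Everything below the gate (here, the noise floor) is simply discarded, which is why the reconstruction error lands near the added noise level.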

Second question: does this rounding make it unreliable to rebuild the original parameters, i.e. is the reconstruction inexact (possibly inaudibly so, maybe not)?

In general, the magnitude of the accumulated roundoff error depends on the inherent precision of the number format used, on the order of computations, and on the transform window length. To get an idea of actual figures, one could take a look at the FFTW accuracy benchmark results.

For a basic theoretical analysis of quantization noise in finite-precision FFTs, this chapter is worth reading. There are, of course, many other published works on this subject.
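For a quick hands-on feel for those figures, here is a small experiment (assuming NumPy and SciPy are available; scipy.fft is used because it keeps single-precision inputs in single precision) measuring the relative round-trip error of ifft(fft(x)) at two precisions and several transform lengths:

```python
import numpy as np
from scipy import fft  # scipy.fft preserves single precision

def roundtrip_error(n, dtype):
    """Relative L2 error of ifft(fft(x)) for a random real signal
    of length n, computed in the given floating-point precision."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(n).astype(dtype)
    y = fft.ifft(fft.fft(x)).real       # complex64 path for float32 input
    return np.linalg.norm(x - y) / np.linalg.norm(x)

for n in (2 ** 8, 2 ** 12, 2 ** 16, 2 ** 20):
    print(f"n = {n:>7}: "
          f"float32 {roundtrip_error(n, np.float32):.1e}, "
          f"float64 {roundtrip_error(n, np.float64):.1e}")
```

For a well-implemented FFT the error stays close to the machine epsilon and grows only slowly with the transform length (roughly like the square root of log n in the RMS sense), which is consistent with the FFTW benchmark plots.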

... Combining the QTTFFT algorithm with a method that computes QTT representationfrom several elements (samples) of a given vector, we compare it with Sparse Fouriertransform algorithms. By numerical examples we show that our approach can be competitivewith the existing methods for the Fourier-sparse signals with randomly distributedcomponents, especially for higher dimensions. Our approach is especially advantageousfor the signals with limited bandwidth, which are not exactly Fourier-sparse...

I know next to nothing about maths, but I was wondering about the claim of improved battery life. Of course, I understand that computation uses power, and any reduction in the computation needed for a given task reduces the power needed to perform it.

I'm just wondering if any power savings are expected to be significant in typical consumer uses (phones, tablets, etc.) or in more general scientific and engineering computation?

Probably not. The FFT isn't really the bottleneck in anything consumers do that I can think of. Yes, video and audio codecs often use them, but they're usually only a small part of the entire codec time, and for these applications approximations to the FFT are already available. I didn't look at the math, but my guess is that something like this becomes more useful for very large FFT sizes or for more than one- or two-dimensional transforms.

Science, engineering, and maybe telecom applications might be a completely different story, though.

Well, on Rockbox at least, the FFT part of decoding can take up to 50% of CPU time (http://www.rockbox.org/wiki/FasterMDCT). It's true that audio decoding is not really that CPU-intensive, but video encoding/decoding is probably another matter.

The authors suggest applications in image and video processing, but perhaps the applications are broader, "limited" only to problems that can use the features of this TT format (no curse of dimensionality), including machine learning (PCA), pattern recognition, and the like (or wavelets, remote sensing, ...).

About the dense formulae presented in the paper: at a glance, it seems that at least one characteristic of this TT format is tied to the decomposition itself, the tensor SVD and the TT-SVD algorithm. It is suggested/shown that the decomposition allows fast and trivial arithmetic (..., convolution) in one dimension, with linear complexity, and fast approximations. The QTT decomposition, a binarization (an algebraic wavelet transform), comes as an additional feature; a rough sketch of the TT-SVD idea follows below.
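For the curious, here is a minimal sketch of the TT-SVD idea applied to a length-2^d vector via the QTT "binarization": reshape the vector into a 2 x 2 x ... x 2 tensor, then peel off one binary index per truncated SVD. Plain NumPy with hypothetical names; illustrative only, not the paper's implementation.

```python
import numpy as np

def tt_svd(vector, eps=1e-10):
    """Minimal TT-SVD sketch: decompose a length-2**d vector, viewed
    as a d-dimensional 2 x 2 x ... x 2 tensor (QTT "binarization"),
    into a train of 3-way cores via sequential truncated SVDs."""
    d = int(np.log2(vector.size))
    assert vector.size == 2 ** d, "length must be a power of two"
    cores = []
    c = vector.reshape(1, -1)                # running (rank x rest) matrix
    for _ in range(d - 1):
        r = c.shape[0]
        c = c.reshape(r * 2, -1)             # split off one binary index
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))  # drop tiny singular values
        cores.append(u[:, :rank].reshape(r, 2, rank))
        c = s[:rank, None] * vt[:rank]       # carry the remainder forward
    cores.append(c.reshape(c.shape[0], 2, 1))
    return cores

# A smooth signal compresses well: the TT ranks stay small, so
# storage is O(d * max_rank**2) instead of O(2**d).
x = np.sin(np.linspace(0, 2 * np.pi, 2 ** 10, endpoint=False))
print("TT ranks:", [core.shape[2] for core in tt_svd(x)])
```

The low ranks for a sampled sinusoid are exactly the "no dimensionality curse" point: the 1024-sample vector is stored in ten tiny cores rather than as one dense array.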

I remember, a few years ago, experimenting with encoding (or decoding) audio to some codec, Vorbis or MP3, I can't remember, using an external FFT DLL someone compiled and posted here on HA. It was faster. But I can't find the thread.