In this paper, Lustig discusses something that seems unintuitive: sampling at random can perform better than sampling uniformly. I tried to work through this starting from page 15 of these slides, but I can't really make sense of it.

Why does keeping a random selection of the frequency coefficients give a reconstruction that is closer to the original signal than keeping a uniformly spaced selection? What is the intuition behind this phenomenon?
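The effect can be seen in a small numerical experiment (my own sketch, not code from the paper; the signal length, sparsity, and spike positions below are arbitrary choices): uniformly undersampling the spectrum of a sparse signal folds it into coherent, equally strong replicas, while randomly undersampling it smears the aliasing into low-level noise-like interference, above which the true spikes still stand out.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64          # signal length, number of frequency coefficients kept

# A 5-sparse test signal (spike positions chosen arbitrarily).
support = [10, 30, 70, 140, 200]
x = np.zeros(n)
x[support] = 1.0
X = np.fft.fft(x)

# Uniform undersampling: keep every 4th frequency coefficient.
mask_u = np.zeros(n, dtype=bool)
mask_u[:: n // m] = True

# Random undersampling: keep m coefficients chosen uniformly at random.
mask_r = np.zeros(n, dtype=bool)
mask_r[rng.choice(n, m, replace=False)] = True

# Zero-filled inverse FFTs: the crudest possible "reconstruction".
xu = np.fft.ifft(np.where(mask_u, X, 0)).real
xr = np.fft.ifft(np.where(mask_r, X, 0)).real

# Uniform case: each true spike reappears as 4 equal-height replicas
# (coherent aliasing), so the copies are indistinguishable from real spikes.
print((np.abs(xu) > 0.2).sum())   # 20 strong peaks from only 5 true spikes

# Random case: the aliasing energy is spread out as noise-like interference,
# so the true spikes still dominate and a simple threshold can find them.
print((np.abs(xr) > 0.2).sum())
```

This is why randomness helps: sparse recovery (e.g. the l1 minimization used in CS) can suppress incoherent, noise-like interference, but it has no way to tell a coherent alias from a genuine spike.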

$\begingroup$I'm not an expert in this field at all, but if the technique is based on CS, then reconstruction can be achieved with fewer samples than with uniform sampling, as long as the data matrix is sparse. If you compare both systems at a given sampling rate, then since CS needs fewer samples, the extra samples can be used to further increase performance.$\endgroup$
– vaz Apr 20 '16 at 13:41