Let $v_1,\dots,v_n$ be vectors in $\mathbb{R}^d$, which we normalise to all have length at least one. For any given radius $\Delta > 0$, we consider the small ball probability

$$\sup_B \mathbb{P}\bigl( \epsilon_1 v_1 + \dots + \epsilon_n v_n \in B \bigr),$$

where $\epsilon_1,\dots,\epsilon_n$ are iid Bernoulli signs (i.e. they take the values $-1$ or $+1$ independently with a probability of $1/2$ each), and $B$ ranges over all (closed) balls of radius $\Delta$. The Littlewood-Offord problem is to compute the quantity

$$p_{d,n}(\Delta) := \sup_{v_1,\dots,v_n} \sup_B \mathbb{P}\bigl( \epsilon_1 v_1 + \dots + \epsilon_n v_n \in B \bigr),$$

where $v_1,\dots,v_n$ range over all vectors in $\mathbb{R}^d$ of length at least one. Informally, this number measures the extent to which a random walk of length $n$ (with all steps of size at least one) can concentrate into a ball of radius $\Delta$.
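For intuition, the one-dimensional small ball probability of a given set of step sizes can be computed by brute force over all $2^n$ sign patterns; the following short Python sketch (the function name and implementation are my own illustration, not from the paper) does this, using the fact that an optimal interval can be taken to have its left endpoint at an attained sum.

```python
from itertools import product

def small_ball_prob_1d(v, radius):
    """Brute-force sup over closed intervals of length 2*radius of
    P(eps_1*v_1 + ... + eps_n*v_n lies in the interval)."""
    n = len(v)
    sums = sorted(sum(e * x for e, x in zip(eps, v))
                  for eps in product((-1, 1), repeat=n))
    # An optimal interval may be slid so its left endpoint is an attained
    # sum, so it suffices to scan those positions.
    best = max(sum(1 for t in sums[i:] if t <= lo + 2 * radius)
               for i, lo in enumerate(sums))
    return best / 2 ** n

# All steps equal to 1 and radius < 1: the walk concentrates on its single
# most popular value, giving C(6, 3) / 2^6 = 20/64.
print(small_ball_prob_1d([1] * 6, 0.9))   # 0.3125
```

Already in this toy computation one sees the central binomial coefficient appear, which is the shape of Erdős's answer below.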

The one-dimensional case of this problem was answered by Erdős. First, one observes that one can normalise all the $v_i$ to be at least $+1$ (as opposed to being at most $-1$), by flipping the signs of $v_i$ and $\epsilon_i$ simultaneously where necessary. In the model case when $\Delta < 1$, he made the following simple observation: if a random sum $\epsilon_1 v_1 + \dots + \epsilon_n v_n$ fell into a ball of radius $\Delta$ (which in the one-dimensional case, is an interval of length less than $2$), and one then changed one or more of the signs $\epsilon_i$ from $-1$ to $+1$, then the new sum increases by at least $2$ and must necessarily lie outside of the ball. In other words, for any ball $B$ of radius $\Delta$, the set of signs $(\epsilon_1,\dots,\epsilon_n)$ for which $\epsilon_1 v_1 + \dots + \epsilon_n v_n \in B$ forms an antichain. Applying Sperner’s theorem, the maximal size of this antichain is $\binom{n}{\lfloor n/2 \rfloor}$, and this soon leads to the exact value

$$p_{1,n}(\Delta) = \frac{\binom{n}{\lfloor n/2 \rfloor}}{2^n}$$

when $\Delta < 1$ (the bound is attained in the extreme case $v_1 = \dots = v_n = 1$).
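Both halves of Erdős's argument can be checked numerically on random instances; the sketch below (an illustrative check of my own, not code from the paper) verifies that the hitting sign patterns form an antichain and that Sperner's bound holds, for random steps of size at least one.

```python
from itertools import product
from math import comb
import random

def hits(v, center, radius):
    """All sign patterns whose sum lands in [center - radius, center + radius]."""
    return [eps for eps in product((-1, 1), repeat=len(v))
            if abs(sum(e * x for e, x in zip(eps, v)) - center) <= radius]

def is_antichain(patterns):
    """True if no pattern's set of +1 positions strictly contains another's."""
    plus = [frozenset(i for i, e in enumerate(p) if e == 1) for p in patterns]
    return not any(a < b for a in plus for b in plus)

random.seed(0)
n = 8
for _ in range(100):
    v = [1 + random.random() for _ in range(n)]   # all steps at least 1
    center = random.uniform(-n, n)
    H = hits(v, center, 0.99)                     # ball of radius < 1
    assert is_antichain(H)                        # Erdos's observation
    assert len(H) <= comb(n, n // 2)              # Sperner's bound
print("antichain property and Sperner bound hold in all trials")
```

The antichain property is guaranteed here because flipping signs from $-1$ to $+1$ moves the sum by at least $2$, more than the interval's length $1.98$.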

A similar argument works for higher values of $\Delta$, using Dilworth’s theorem instead of Sperner’s theorem, and gives the exact value

$$p_{1,n}(\Delta) = \frac{\binom{n}{m_1} + \dots + \binom{n}{m_s}}{2^n}$$

whenever $n \geq s$ and $s - 1 \leq \Delta < s$ for some natural number $s$, where $\binom{n}{m_1},\dots,\binom{n}{m_s}$ are the $s$ largest of the binomial coefficients $\binom{n}{0},\dots,\binom{n}{n}$.
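As a sanity check on this exact value, one can compare the brute-force concentration of the extremal configuration $v_1 = \dots = v_n = 1$ against the sum of the $s$ largest binomial coefficients (a self-contained numerical check of my own):

```python
from itertools import product
from math import comb

def brute_force_concentration(v, radius):
    """Max over closed intervals of length 2*radius of P(eps . v in interval)."""
    sums = sorted(sum(e * x for e, x in zip(eps, v))
                  for eps in product((-1, 1), repeat=len(v)))
    return max(sum(1 for t in sums[i:] if t <= lo + 2 * radius)
               for i, lo in enumerate(sums)) / 2 ** len(v)

def s_largest_binomials(n, s):
    """Sum of the s largest of the binomial coefficients C(n,0), ..., C(n,n)."""
    return sum(sorted((comb(n, m) for m in range(n + 1)), reverse=True)[:s])

# For v_1 = ... = v_n = 1 and s-1 <= Delta < s, an interval of length
# 2*Delta covers the s most likely values of the walk (spaced 2 apart).
n = 8
for s in (1, 2, 3):
    assert brute_force_concentration([1] * n, s - 0.5) == s_largest_binomials(n, s) / 2 ** n
print("extremal example matches the s-largest-binomial formula")
```

This confirms in particular that the extremal example attains the claimed value; the content of the theorem is of course the matching upper bound.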

Now consider the higher-dimensional problem. One has the obvious bound

$$p_{d,n}(\Delta) \geq p_{1,n}(\Delta),$$

since any one-dimensional example can be embedded into $\mathbb{R}^d$,
but it is not obvious whether this inequality is strict. In other words, is there some way to exploit the additional freedom given by higher dimensions to make random walks concentrate more than in the one-dimensional case?

It turns out that in some cases it is. Suppose for instance that

$$\sqrt{(s-1)^2 + 1} \leq \Delta < s$$

for some natural number $s \geq 2$. Then one can consider the example in which $v_1 = \dots = v_{n-1} = e_1$ is one unit vector, and $v_n = e_2$ is another unit vector orthogonal to $e_1$. A ball of radius $\Delta$ centred on the $e_1$ axis then captures both values $\pm 1$ of the $e_2$ coordinate together with $s$ consecutive values of the partial sum $\epsilon_1 + \dots + \epsilon_{n-1}$, so the small ball probability in this case can be computed to equal

$$\frac{\binom{n-1}{m'_1} + \dots + \binom{n-1}{m'_s}}{2^{n-1}}$$

rather than $\frac{\binom{n}{m_1} + \dots + \binom{n}{m_s}}{2^n}$, where $\binom{n-1}{m'_1},\dots,\binom{n-1}{m'_s}$ are the $s$ largest binomial coefficients of $n-1$; the former quantity is slightly larger.
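The gap between the two quantities can be seen concretely by exhaustive enumeration; the sketch below (parameters $s = 2$, $\Delta = \sqrt{2}$, $n = 8$ are my own choices for illustration) computes the two-dimensional concentration of this example and compares it with the one-dimensional benchmark.

```python
from itertools import product
from math import comb, hypot

# Counterexample with s = 2 and radius Delta = sqrt((s-1)^2 + 1) = sqrt(2):
# vectors v_1 = ... = v_{n-1} = (1, 0) and v_n = (0, 1).
n = 8
delta = 2 ** 0.5

def prob_in_ball(center):
    """P(eps_1 v_1 + ... + eps_n v_n lies in the closed ball of radius delta)."""
    hits = 0
    for eps in product((-1, 1), repeat=n):
        x = sum(eps[:-1])          # coordinate along e_1
        y = eps[-1]                # coordinate along e_2
        if hypot(x - center[0], y - center[1]) <= delta + 1e-9:
            hits += 1
    return hits / 2 ** n

# The ball centred at the origin captures x in {-1, +1} together with
# both signs of eps_n.
two_dim = prob_in_ball((0.0, 0.0))

# One-dimensional benchmark: for 1 <= Delta < 2, p_{1,n}(Delta) is the sum
# of the two largest binomial coefficients of n, divided by 2^n.
one_dim = (comb(n, n // 2) + comb(n, n // 2 - 1)) / 2 ** n

print(two_dim, one_dim)   # 0.546875 0.4921875
assert two_dim > one_dim  # strictly more concentration in two dimensions
```

In effect the orthogonal step $v_n$ is absorbed for free into the extra width of the ball, while the remaining $n-1$ collinear steps concentrate slightly better than $n$ steps would.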

In the positive direction, Frankl and Füredi established the asymptotic

$$p_{d,n}(\Delta) = \left( \lfloor \Delta \rfloor + 1 + o(1) \right) \frac{\binom{n}{\lfloor n/2 \rfloor}}{2^n} \ \ \ \ \ (1)$$
as $n \rightarrow \infty$ (holding $d$ and $\Delta$ fixed). Furthermore, if $\Delta$ was close to an integer from above, and more precisely if

$$s - 1 \leq \Delta \leq s - 1 + \frac{1}{2}\left( \sqrt{(s-1)^2+1} - (s-1) \right)$$

for some natural number $s$ (so that the above counterexample can be avoided) they showed that $p_{d,n}(\Delta) = p_{1,n}(\Delta)$ for $n$ sufficiently large (depending on $d$ and $\Delta$).

The factor of $\frac{1}{2}$ was an artefact of their method, and they conjectured in fact that one should have $p_{d,n}(\Delta) = p_{1,n}(\Delta)$ for sufficiently large $n$ whenever

$$s - 1 \leq \Delta < \sqrt{(s-1)^2+1} \ \ \ \ \ (2)$$

for some natural number $s$.
In this paper we verify the conjecture of Frankl and Füredi (and give a new proof of their asymptotic (1)). Our main tool is the following high-dimensional Littlewood-Offord inequality:

Theorem 1 Suppose that $v_1,\dots,v_n$ is a collection of vectors in $\mathbb{R}^d$ of length at least one which is genuinely $d$-dimensional in the sense that for any hyperplane $H$ going through the origin, one has $v_i \notin H$ for at least $k$ values of $i$. Then one has

$$\sup_B \mathbb{P}\bigl( \epsilon_1 v_1 + \dots + \epsilon_n v_n \in B \bigr) \ll_{d,\Delta} k^{-d/2},$$

where $B$ ranges over all closed balls of radius $\Delta$.
Theorem 1 can be viewed as a high-dimensional variant of Erdős’s inequality (but without the sharp upper bound). It is proven by the Fourier-analytic method of Halász. (This theorem was announced in my book with Van Vu several years ago, but we did not get around to publishing it until now.)
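The heuristic behind the theorem, that a genuinely $d$-dimensional spread of the steps forces stronger decay of the small ball probability than a collinear spread, can be illustrated (though of course not proven) by a toy computation; the two configurations below and their closed-form concentrations are my own illustration, not taken from the paper.

```python
from math import comb

# Toy illustration in d = 2: compare a collinear configuration of 2m unit
# steps with a genuinely two-dimensional one.  The ball has radius < 1, so
# in each case an optimal ball captures a single support point (the support
# points are spaced 2 apart in each coordinate).
def collinear_concentration(m):
    # all 2m steps equal to e_1: best single value has probability C(2m, m)/4^m
    return comb(2 * m, m) / 4 ** m

def two_dimensional_concentration(m):
    # m steps along e_1 and m along e_2: the two coordinates are independent
    # walks, so the best point has probability (C(m, m//2)/2^m)^2
    return (comb(m, m // 2) / 2 ** m) ** 2

for m in (8, 16, 32):
    c1, c2 = collinear_concentration(m), two_dimensional_concentration(m)
    # c1 decays like m^{-1/2}, c2 like m^{-1}: the 2-d spread concentrates less
    print(m, round(c1, 4), round(c2, 4))
    assert c2 < c1
```

The quadratic gain in the decay rate for the spread-out configuration is the same phenomenon that the Halász-type argument captures in general.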

Using Theorem 1, one can verify the conjecture of Frankl and Füredi fairly quickly (the deduction takes a little over a page). The main point is that if there is excessive concentration, then Theorem 1 quickly forces almost all of the vectors $v_i$ to lie very close to a line. If all the vectors are close to a line, then we can project onto this line and rescale, which causes the radius $\Delta$ to worsen a little bit in this reduction to the one-dimensional case, but it turns out that the bounds (2) allow us to tolerate this degradation of $\Delta$ once $s \geq 2$ (so it is fortunate that the case $s = 1$ was already done for us!). If instead we have a vector far from the line (as is the case in the key counterexample), then we manually eliminate that vector using the parallelogram law, which effectively drops $\Delta$ below $s - 1$ (half of the time, at least) if $\Delta$ was initially less than $\sqrt{(s-1)^2+1}$, which gives enough of a saving to conclude the argument.

One moral that one can draw from this argument is that one can use a quasi-sharp estimate (such as Theorem 1), which ostensibly loses constant factors, to then deduce a sharp estimate (such as the Frankl-Füredi conjecture) that loses no constant factors, as long as one is in an asymptotic regime (in this case, $d$ and $\Delta$ fixed and $n$ large depending on $d, \Delta$). The key is to exploit the fine structure in the main term (in this case, the piecewise constant nature of the factor $\lfloor \Delta \rfloor + 1$ in (1) when $\Delta$ passes over integers) to extract gains that can absorb the losses coming from the quasi-sharp estimate.

For commenters

To enter LaTeX in comments, use $latex <Your LaTeX code>$ (without the < and > signs, of course; in fact, these signs should be avoided as they can cause formatting errors). See the about page for details and for other commenting policy.